Fog Creek Software
Discussion Board

"Lord Palmerston...", Specialist v. Generalist

Great article. The more Joel has to say, and the more shit he talks (esp. re: language bigots), the more I like it.

However, as a programmer with 2+ years' experience under my belt, I have three (groups of) questions for the more experienced programmers out there:

1) Besides the "oh, every programmer should know about Assembly and C or else you don't know sh--" snobbery, does anyone actually use that knowledge day-to-day any more?

I am trying to teach myself some rudimentary C by creating a command-line card game from scratch. My rationale is that it will give me a different perspective on programming issues, and there is a ton of literature and existing C code that I can study. Same with Lisp. But, in 2003 (almost), is it a reasonable expectation that I'll actually write any Assembly (for example)? Ever? What about C?

2) How relevant is knowing the MFC API or Win95 internals to something new, like C# or WinXP? To what extent is everyone starting from scratch, when a new technology comes out?

Is this the way to go (i.e. being an early-adopter of something, rather than the millionth programmer to try his hand at C++ or Java)?

On the other hand, is it a waste of my time to learn one of these subjects, because I'll never catch up with, say, the people who have already done Java for 6 years, and had some OOP background before that?

3) Is it really a smart career move to "put all your eggs in one basket" like that? Have people experienced any real downside to pigeonholing yourself as "strictly a Java guy", or is that far outweighed by being a specialist?


Joe Grossberg
Wednesday, December 11, 2002

I don't know where you are located, but what I have learned in my IT career is that in the USA, you can make a living doing ANYTHING YOU WANT, so long as you know how to "make a living."  Thinking, "will there be a market for assembler in 10 years?" is dumb. If you want to write low-level stuff, just figure it out, get REALLY GOOD at it, then figure out how to SELL YOURSELF. If you want to be a wedding photographer, figure it out, get REALLY GOOD at it, and SELL YOURSELF.

I thought like you did, at one point. And I kept worrying about "do I need to learn Java/.Net/Foo in order to stay employed doing stuff I couldn't care less about?"  Then I decided I just wanted to do audio applications, so I spent about a year getting really good at what is necessary to do that (cross-platform C++, some low-level knowledge, some hardware knowledge, some DSP knowledge), made some contacts, and then started selling myself around. Now I only do audio apps, am having fun, and making about double the amount of money I was making before.

This will get boring after a while, so my next career move will involve actually doing audio and video production, a field which has way more wannabes than software. But, I feel confident enough that I can now sell myself well enough so that I can be successful financially doing it.

so the summary is, just learn what you want to learn. what you think will be fun and interesting. the more important skill to staying financially afloat is how to sell yourself.

Wednesday, December 11, 2002

My take (as a programmer coming up on 12 years' experience):

>1) But, in 2003 (almost), is it a reasonable expectation that I'll actually write any Assembly (for example)? Ever? What about C?

I don't think you are likely to do Assembly unless you get into certain areas like embedded systems or games.

C is always useful. There is so much code out there and so many libraries in use. In almost all cases you will find something your language/tool of choice cannot do. You will then need to go out and purchase a tool or product to do it, or do it yourself in C/C++.

I think that it is useful to know what is actually done under the hood but it is certainly not essential.

>2) How relevant is knowing the MFC API or Win95 internals to something new, like C# or WinXP? To what extent is everyone starting from scratch, when a new technology comes out?

C# and WinXP are built on top of the same foundations as the MFC API and Win95. Sure, some things change, but many things stay the same.

Also, there are techniques and patterns in these tools that transcend languages. The document/view architecture used by MFC (a close relative of MVC) is a useful construct for any sort of data-based GUI application.

>Is this the way to go (i.e. being an early-adopter of something, rather than the millionth programmer to try his hand at C++ or Java)?

Actually, knowing an older technology very well helps you swing into new technologies quicker. My C++ background helped me get off the ground much faster in Java and (I assume) will help me pick up C# faster.

Also, some of the features of more established languages tend to move into newer languages in later releases. For example, generics, similar to C++ templates, are slated for an upcoming release of Java. Understanding the older languages and their concepts may help you understand where languages like C# and Java are going.

>3) Is it really a smart career move to "put all your eggs in one basket" like that? Have people experienced any real downside to pigeonholing yourself as "strictly a Java guy", or is that far outweighed by being a specialist?

Languages like Java, VB, C/C++, and C# are not going to go away in a hurry; you will get plenty of notice.  I do not think there is any problem with specializing in these languages. In fact, you may have trouble if you do not specialize, as you can get the "Jack of all trades, master of none" syndrome. You would need to be a little more careful with some lesser-used tools, e.g. if you were a Lotus Notes developer, Sybase DBA, etc.

I think the more important thing is to specialise in your problem domain. In other words, what businesses are you going to write software for, e.g. financial companies, telecommunications? Get a good basic working knowledge of that environment. You don't need to become an expert, but know the basic technologies, processes, terms, issues and problems of that business sector.

Wednesday, December 11, 2002

Ok, I'll admit I have shied away from the Windows environment for a long time and spent most of my life in the RTOS world. I will also admit that I have my own personal list of things I hate about certain MS products. However, I live in the real world and have to solve real problems, so I just get on with things.

Your recent article on abstraction layering and the amount you have to know today to be considered proficient strikes a strong chord with me. Your comments about the Linux/UNIX world are spot on.

As part of my job I have to build lots of software from scratch. So I need build scripts that work. Linux is pretty good when it comes to self-hosted development work; but I'm an embedded systems developer and I need to build cross-compilation systems (work on x86 but target ARM). And guess how many of those build scripts work right out of the bag ... damn few, I can tell you.

I spent all of yesterday trying to build a cross-development toolchain for ARM. I would use a pre-built chain if I could, but I cannot due to the needs of my customer, who wants to be able to build things from scratch. It's taken some effort, but I've managed.

I tend to find that Linux-only developers have as blinkered a view of reality as Windows-only (or substitute-your-favourite-OS-only) developers. Yes, there are some great tools out there, but customizing them for your own needs can be awfully fraught at times. It's not the OS that matters, it's the blinkers you wear when using it.

I like Linux. I don't love it. It's got a lot of pluses but quite a few minuses as well. Hell, it's yet another OS I've had to pick up since I started full-time work in 1989. I suspect another few lie between now and my eventual retirement.

Will my cross-development woes be eased in the near future? I doubt it. But, with my knowledge and some hard work, I'll succeed. You need to take a pragmatic approach in this business, or else you'll never get anything done!

I'm a pragmatic programmer.

Anyway, Joel, a suggestion for your cross-platform worries: what about taking a look at Qt from Trolltech? A pretty neat cross-platform GUI toolkit which avoids the lowest-common-denominator approach. I've started using it for a cross-platform product (Linux and Windows). It presents a good graphical API, comes with a neat GUI designer (which I have yet to use), and has an effective way of hooking GUI operations from the user into your application code. If you take a look you might be pleasantly surprised.

Gordon J Milne
Wednesday, December 11, 2002

I've found that if you move from the basics of computer programming (Assembly, disk I/O, C, hardware) outward to the 'simplified' subjects (Java, C#, Visual Basic), you are better equipped to tackle obscure issues and strange situations. If you start out in the middle, like most programmers, I suggest you work both up into the new technologies and down into the old ones. Each step in either direction will give you a deeper understanding of computer systems as a whole, which can then be applied to the work you are doing. The end result is more sophisticated, robust, and efficient projects.

As far as learning goes, I find that the more languages you know, the easier it is to pick up new ones. Each language learned contributes not just to your base of applicable knowledge, but towards your skill at learning other languages. Eventually, you come to understand that all computer languages are inherently the same, as they all derive from machine language and hardware. Once you reach this key understanding, it's not uncommon to become fluent with a new language in a few weeks. After all, it's just like any language: syntax and grammar surrounding the same meanings.

Now if you're looking to apply what you know: C is still used everywhere. Assembler is used if you are looking at doing cool stuff like compilers, video games, or operating systems. Assembler is also useful in C, as an element of debugging. It's definitely a skill I recommend having, if only in the sense that you know the basics of register manipulation.

There's my rambling two bits. You'll have to excuse me. I'm trying to give up coffee. :)

Dustin Alexander
Wednesday, December 11, 2002

Mr. Joe,

At first sight this is a bunch of very relevant questions. I'm one of those "snobs" who learned assembly language by my 14th birthday.

To tell you the truth, assembly language

1) has never got me laid.
2) is not very lucrative today unless you learn OS/390 asm.
3) is useless unless you are writing embedded software.
4) is very satisfying when you get your seemingly lame-ass assembler stuff working. Controlling LEDs by bit-flipping the parallel port is cool. Reading switches from the same port is way cool. See 1).

>To what extent is everyone starting from scratch, when a
>new technology comes out?

C is really a platform-independent assembler. C is not a high-level language. The more understanding you have of the basic concepts of computing, the less is new. New languages seem to be just higher levels of abstraction. The more you know at a low level, the better.

>already done Java for 6 years

Java, C#, Delphi: same stuff... OO languages. We tend to argue over whether begin ... end; or { do_something() } is the way to go. Under the covers it's pretty much the same.

If you get the hang of OO, it's more or less just a matter of learning a new syntax. My low-level understanding of assembly language has helped me understand how OO languages handle things.

If you have a low-level understanding of how things "should" work, you can transfer that to a new language: "I know it works this way in FOO++, let's try it in BAR--." I will probably get yelled at for saying that, since most languages handle pointers differently. But learn pointers in assembly language, and then again, it's a matter of syntax.

>Is it really a smart career more to "put all your eggs in
>one basked" like that?

Learning stuff at a lower level makes you more "moveable" between different languages/platforms. In my opinion, by getting a low-level understanding of things you are diversifying your eggs between multiple baskets.

Flames are welcome ;-)

Wednesday, December 11, 2002

One more post, slightly more related to the question than my original: I agree with the folks who say you need a solid grounding in something low-level, like C and assembler. It is a lot easier to understand the higher-level abstractions than it is to understand the low-level details. Thus it is easier for a hardcore C guy to start doing Java apps than it is for someone who has only done web applications to start writing device drivers.

I do think Joel sort of overstates the amount of experience necessary to successfully complete a project using a new technology. Even if you have to learn a new, heinous API, if you know the basics, the ramp up time should be measured in months, not years. Medical residents start chopping up patients after 2 semesters worth of gross anatomy, not after 20 years of gross anatomy.

Wednesday, December 11, 2002


Medical residents with  2 semesters worth of gross anatomy can only cut up people if they have someone else on hand with 20 years of gross anatomy there in case the anatomy gets too gross.

It is kind of the same thing with programming: you do need to have access to someone who knows the stuff fairly intimately. However, I do agree with you that the expert does not need to be a 10-year veteran; a few years should suffice, especially since there often aren't many such people out there.

Wednesday, December 11, 2002

In response to the comments of "I know assembler, and you'll never use it."
You will almost certainly never write assembly in your career.  However, Joel's point was that such knowledge is useful when things go wrong.  It turns out, much to my surprise after graduating from college, that the "Go To Disassembly" menu item in MSVC helps you fix some otherwise extremely difficult bugs.  If you don't know anything about computer architecture (calling conventions, how memory is laid out, etc.) and at least a little of the assembly for your platform, this menu item seems useless.

J. Brian Smith
Wednesday, December 11, 2002

The people I know that are highly specialized command a much higher salary than the average generalist. However, they have a harder time finding a job - but I think it generally works out in favor of being a specialist.

The tricky part is stringing together a series of jobs and projects early in your career which allow you to develop your specialty. A former college classmate and current co-worker who graduated with me has had a much more diverse few years of experience compared to me. He sees this as a disadvantage in the job market and I would have to agree with him.

Wednesday, December 11, 2002

>This is the most off-topic post there is ;-)

>All computers suck - they just suck in different ways.

  Let me elaborate on the point: everything that the Linux/Windows/Mac camps ignore because they are familiar with it is often the part that people say 'sucks' about that OS (or is non-intuitive).

  As an example:

  Windows - lack of logging, quaint things like having to remove a device to fix its installation, the ability of the operating system to be corrupted (in earlier incarnations) by viruses, etc.

  Linux - various strangeness in drivers, high learning curve for things like the shells, etc. 

  Mac OS (pre-10) init conflicts, wildly different API from other OSes, etc.

  In other words, all OSes have their warts that their partisans choose to ignore - which is pretty much the point made by the article.

Dave Ritchie
Wednesday, December 11, 2002

Since I write business apps for a living, I let the market make my technology choices for me.

As Joel mentioned in his article, "Becoming proficient, really proficient, in just one programming world takes years".  Also, most employers seem reluctant to hire generalists for new development work.

Right now I am trying to learn as much as I can about Microsoft .NET, simply because one of my specialties (VB application development) is going away soon and I don't have prior paid development experience on the Unix platform (Java, Perl, etc.).

one programmer's opinion
Thursday, December 12, 2002


1) It depends what you do. If you are developing and then optimising code I would say you have to know at least assembly language, but also preferably the architecture of whatever the target is.

2) Nobody knows yet :-) (though probably knowing Win95 internals is not so useful any more)

3) If you pick the right basket, sure (see 2!). Actually even if you don't pick a basket which is full, even if it is nearly empty, you will likely still make a good living due to the laws of supply and demand.

Thursday, December 12, 2002

I've been a generalist who specialises for the past 20-odd years.

It's true people are more comfortable with specialists, because they think they'll be up to speed faster.  However, a generalist learns more shortcuts and is generally more flexible than someone who specialises.

The result? Take that final question: which UI engine to use in a cross-platform product?  I had to come up with an answer to it.  A generalist can do that; a specialist has to generalise in order to do it.

A generalist has no sense of something being below their talent or knowledge to do.  You want me to run a process I designed for you to use but you're too lazy to pay attention to it (or more likely your core business takes all your time)?

Sure I'll do that.

You want me to design and develop a private file space for an application so you can  run it embedded on any OS?

Sure I'll do that.

There used to be a 'Cheech and Chong' double act of Program Managers at MS for FoxPro, when it was just becoming Visual.  They had a catchphrase: 'Challenge Me'.

Simon Lucy
Thursday, December 12, 2002

Of course a programmer should know at least one architecture ("assembly language") but preferably several. Otherwise you have never seen what a pointer looks like or even a stack. Sure, you will have high-level constructs to cover up so you never need to see them again -- but I believe it to be *necessary* to have used them to properly understand the high-level ones.

Joel's 'Leaky Abstractions' paper was very good. When your high-level abstraction fails, you need knowledge of the lower-level ones as well. How else are you going to find the problem if you can't read your debugger's disassembly? I believe this knowledge is crucial to properly understanding the high-level stuff as well.

Jonas Bofjall
Thursday, December 12, 2002

"Of course a programmer should know at least one architecture ("assembly language") but preferably several. Otherwise you have never seen what a pointer looks like or even a stack."

I am confused. Is there something specific about pointers or stacks that you can't learn without learning a programming language?

Practical geezer
Thursday, December 12, 2002

1) Beside the "oh, every programmer should know about Assembly and C or else you don't know sh--" snobbery, does anyone use that knowledge day-to-day any more?

<snip>But, in 2003 (almost), is it a reasonable expectation that I'll actually write any Assembly (for example)? Ever? What about C?

I use assembly and/or C daily, but I program embedded systems, which is not what most posters here appear to be involved with. Usually, I program *small* embedded systems. Often with a whole 25 bytes of RAM and 512 instructions max.

The benefit of assembly language is not so much in knowing the assembly language itself, it's that programming in assembly rubs your nose in the gritty details of a specific machine architecture. Knowing a machine architecture gives you a better idea of how things work, what things work well, and sometimes what won't work at all.

The people who most need to know this on a professional basis are embedded programmers and games programmers (both of whom need to take advantage of everything an architecture provides to get maximum speed and/or minimum size), and writers of compilers and operating systems. Others can benefit from the knowledge, but don't necessarily have the same need. For programmers using higher-level languages who don't need to program "on the bare metal," perhaps the JVM or something similar would be the corresponding architecture to know.


"I am confused. Is there something specific about pointers or stacks that you can't learn without learning a programming language? "

Maybe, maybe not. Expanding a bit on my 'architecture, not assembly' point above, different machines handle pointers different ways. Taking a couple processors I'm at least somewhat familiar with:

The Motorola 68000 has 32-bit address registers, and memory is flat. That is, adding 1 to a byte pointer gets you a pointer to the next sequential memory location. Pointers are simple and straightforward.

The Intel 8086/8088 has pairs of 16-bit registers that get combined to produce a 20-bit address (segment:offset). This means that memory is not flat. Each byte in memory can be addressed by 4096 different segment:offset combinations. Adding 1 to a pointer may point to the next sequential memory location, or it may not.

Now, without knowing such details about pointers, how could you effectively debug an address arithmetic problem on the 8086? If an array expanded beyond 64K bytes, would you know that that was the cause of the sudden significant slowdown in your application? Could you tell that it would be an issue porting an application from either processor to the other?

As for stacks ... some processors have fixed-size stacks, some don't. Some operations are more expensive when performed on stack-based variables. Again, it depends on the processor architecture. There have been processors that didn't have call stacks at all ... the calling code would store the return address in a specific memory location associated with the called routine, then jump to the routine. The exit code in the subroutine would load the address in the specific location back into the program counter. Naturally, you can't use recursion with such a scheme.

If you don't know a programming language, you might have some idea of what pointers and stacks are. If you know a programming language that provides them, you'll probably have a better idea, and will know what you can do with them. If you know one or more machine architectures, you'll have a better idea of just what they are and how they work. You'll also probably have a better idea of the costs and benefits involved in them, how they relate to the rest of the machine, and maybe even when and how to exploit them better.

Steve Wheeler
Wednesday, December 18, 2002

Do learn an assembly language.  Not because you'll actually use it, but because of the concepts involved.  It will give you a better understanding of what goes on with the hardware, and that understanding will make you a better programmer overall.  You don't need to become an expert in it, just learn enough to do something basic and exercise various aspects of the instruction set.

In addition to Assembly language, every professional programmer's brain should include some working knowledge of :

- A language with pointer manipulation (e.g. C, C++, Ada)
- An object-oriented language (C++, Java, Eiffel, Smalltalk).
- A GUI builder (Visual Basic, Delphi, Powerbuilder)
- A relational database with SQL and stored procedures. (Oracle, Sybase, Microsoft SQL, PostgreSQL)

Of course, there is no guarantee that you will use all of those things, but each of those categories encompasses an important conceptual framework, and those concepts are important if you want to be able to adapt quickly to changes in the field.  You know what you are using now and what you were using in the past, but no way in hell do you know what type of work you'll be doing in 10 years from now.  Once you know something from each category above, almost any new language will be the same old concepts bundled in a new syntax.

And after you know something in each category, become very very good in one of those categories, and you can go far.  Be a Jack of All Trades, and a Master of One.

T. Norman
Friday, December 20, 2002
