Fog Creek Software
Discussion Board

Five Worlds, Embedded Software

Joel talks about embedded developers spending a lot of time optimizing. It turns out that optimizing for small code size may be more common in "deeply embedded" devices like cell phones than the traditional desktop goal of optimizing for execution speed. By squeezing more code into less memory, it becomes easier to provide the necessary product features (code) without increasing design complexity. Extra complexity means higher parts costs, physically larger packaging, and possibly reduced battery life.

On another topic, Joel said that there are no second chances with embedded software. This is not always true. Various digital cameras have had firmware updates recently. Some of them required that the camera be shipped to the manufacturer, but some could just be updated from the company's web page.

It is also becoming increasingly common to update embedded software in the field while it is still running. I heard about one amazing project where the software was updated while the system continued to run uninterrupted. (Think of some computer application that runs 24/7 and cannot go down even for a millisecond.) Imagine the issues involved with updating a function in a multithreaded application where at any time SOME thread is always executing the function you want to update.
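A minimal sketch of one way that kind of live update can work, assuming a C-like system (the names here are hypothetical, not from any real product): dispatch every call through an atomic function pointer, so threads pick up the new version on their next call while any thread already inside the old version simply finishes it. Real systems also have to keep the old code resident until every in-flight call returns, e.g. via reference counting or an RCU-style grace period.

```c
#include <stdatomic.h>

/* Two versions of the same routine; v2 is the "field update". */
static int checksum_v1(int x) { return x + 1; }
static int checksum_v2(int x) { return x + 2; }

/* All callers go through this atomic pointer, never call v1/v2 directly. */
static _Atomic(int (*)(int)) checksum_fn = checksum_v1;

int call_checksum(int x) {
    int (*fn)(int) = atomic_load(&checksum_fn);  /* snapshot the pointer */
    return fn(x);                                /* old callers finish in v1 */
}

void apply_update(void) {
    atomic_store(&checksum_fn, checksum_v2);     /* swap in the new version */
}
```

The hard part, as the post says, is everything around this sketch: draining in-flight calls, updating shared data layouts, and doing it without missing a deadline.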

nonconsultant
Tuesday, May 07, 2002

There are exceptions to every rule, and there can be more than five worlds if you consider the in-betweens.

Sunish
Tuesday, May 07, 2002

nonconsultant wrote:
"Joel said that there are no second chances with embedded software. This is not always true. Various digital cameras have had firmware updates recently."

That's really nice for geeks like you and me, but I leave it to you to come up with a realistic estimate of the number of people who will actually upgrade the firmware on any consumer device.

Jan Derk
Tuesday, May 07, 2002

"optimizing for small code size may be more common in "deeply embedded" devices like cell phones than the traditional desktop goal of optimizing for execution speed"

sorry but this is not necessarily true. i know, i've written software for cellphones. yes, the code was required to have a small memory footprint, and yes the code had to run fast. certainly, size was more of an issue than speed, but we still spent time optimising for the latter _where it was needed_ (profile, profile, profile).

nope
Tuesday, May 07, 2002

I am reminded of elementary number theory.

Imagine there is a world between each of the five worlds Joel mentioned; then, lo and behold, there is a world between each of those, and again another world between each of the worlds so far discovered, and so on. You guessed it: all of a sudden there are infinitely many worlds and the discussion becomes endless. So why don't we all try to stick to a finite number of worlds and see if we can at least have some finite, rational discussion?

Tony
Tuesday, May 07, 2002

Heh.  Well, Joel did cover the case of "worlds between".  He mentioned that the five could overlap here and there.  The only time you'd truly have a new world is when you had something that couldn't be expressed as a combination of the other five.

And of course, shrinkwrap had three sub-categories (commercial, consultware, webware).

I haven't been thinking about these five environments for very long yet, so they haven't quite gelled in my mind.  I'm not yet quite sure what makes them distinct, or whether they're a complete set.  I am, however, finding it convenient to consider some feature or goal or bug fix in a hypothetical program, and assess what its priority would be in the various worlds.

Paul Brinkley
Tuesday, May 07, 2002

Regarding size vs. speed on embedded programs, one characteristic of embedded applications is that memory costs are very nonlinear. It may cost nothing to increase firmware size from 191kB to 192kB. But it may cost a bundle to go from 192kB to 193kB.

So in many cases it might make sense to first optimize for code size, then optimize the inner loops or frequently executed functions for maximum execution speed at the cost of size by such tactics as loop unrolling and so on, until all of available memory is used. And do yourself a favor: read the compiler's assembly output in places where execution speed is crucial. You might find yourself replacing a C/C++ routine with an assembly routine or you might find a way to modify the C/C++ routine so that the compiler produces better code.
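A minimal sketch of the size-for-speed trade mentioned above (loop unrolling on a hot inner loop; the function names are made up for illustration): the unrolled version takes more flash/ROM, but pays the loop overhead (compare + branch) once per four elements instead of once per element.

```c
#include <stddef.h>

/* Size-optimized version: smallest code, one add and one branch per element. */
int sum_small(const int *buf, size_t n) {
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += buf[i];
    return s;
}

/* Speed-optimized version: same loop, unrolled four ways. Larger body,
   fewer branches; this is the kind of spot where reading the compiler's
   assembly output tells you whether the trade actually paid off. */
int sum_unrolled(const int *buf, size_t n) {
    int s = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        s += buf[i] + buf[i + 1] + buf[i + 2] + buf[i + 3];
    for (; i < n; i++)      /* handle the 0-3 leftover elements */
        s += buf[i];
    return s;
}
```

Whether the unrolled version is worth its extra bytes depends on the profile, which is exactly why you'd do it only after the size budget is met.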

a.reader
Tuesday, May 07, 2002
