Fog Creek Software
Discussion Board

Unit Testing Embedded Systems

I'm trying to push for a more modern development process in our company. There's a group here doing embedded firmware that I feel could use some help in this area.

I see a lot of good examples of how unit testing helps in development, and I've started using it myself, with good results. However, I've been looking for information or hints as to how to unit test embedded code, and I'm not finding much.

How do you apply unit testing when the code you are testing is so difficult to communicate with?

Ham Fisted
Sunday, April 25, 2004



Some things that have helped:
1. Be able to warm boot so the tests run faster. This assumes a flat address space; if not, never mind.
2. Have an osencap layer so you can develop and run code on unix or windows. Not helpful for drivers, but works for most everything else. Most code can be developed on the host and cross compiled to the target. It usually just works.
3. Be able to run your tests from the command line.
4. Use expect or something similar to drive running the tests.
5. Make your boot image loadable over the network.
6. Have different data sets that your tcl script can set up to test the various setups.
7. Do load testing now, not later. Make them create a test bed that mimics your largest customer.
8. Make sure your test equipment is programmable.
9. Insert counters, the ability to reset them, logging, etc. now.
10. You'll need ways to get CPU usage, save the exception to eeprom, look at locks, look at queue sizes, etc.
11. Use design by contract.
12. Make sure the equipment can be accessed remotely over a serial port.
13. Run smoke tests on every build. Your unit tests are a good start.

son of parnas
Sunday, April 25, 2004

These all sound like good ideas; however, I'm not too sure how applicable they are to a system which this one guy (who is more politically powerful in the company than any of us programmers) has written, basically all in C just above bare metal. Apparently he evaluated an RTOS at some point and found it lacking for some reason.

We have been pushing for better development and testing practices, and he appears to be very resistant to change, and influenced a reorganization so as to put the maximum amount of distance between our software group and his own work. It's hard for us to have influence anyway since the software "group" is a couple of kids out of college, myself included.

The current notion of "testing" is to have a guy spend a week with the in-circuit emulator running through a checklist. This is only done at release. Up until a short while ago they weren't even using version control.

Would you like me to scare you a little more and say that we are a medical devices company?

Perhaps I should rephrase the question as, "How can I get things done in a project I have no influence over?"

Ham Fisted
Monday, April 26, 2004

The most effective method for testing is normally to use an emulator. I develop for handheld computers that require an upload taking a minute followed by a 20s reboot...

A fair chunk of code can be tested with a simple scaffolding type program that compiles to a separate executable on the development machine for testing. Certainly all basic string handling routines etc. are generally easily handled in this way. For more complex things you may want to create some dummy functions to emulate the library functions you normally call.

For my project I created a scaffolding type program for doing unit tests on the string handling and easily tested code. I then created an emulator of the basic API I normally develop for and can now debug and test on my development machine.

The emulator thing may not be of use depending on your actual circumstances but small programs that include your various stand alone libraries are always a good start.

Colin Newell
Monday, April 26, 2004

> Perhaps I should rephrase the question as, "How can I get things done in a project I have no influence over?"

You will be like gravity on an electron.

son of parnas
Monday, April 26, 2004

Ham: I don't know where you are, but I have worked in Medical Devices (if you ever see an EEG monitor called a Neurotrac II, I worked on it). If your product goes back as far as I suspect, then the reason for rejecting an RTOS may have been as simple as it costing too much. Today there are many fine solutions at reasonable prices. 10+ years ago, however, the situation was different: Any RTOS suitable for a medical device was VERY expensive. For a small company in the medical biz the cost could seem excessive.

Even if your product doesn't go back that far, your people might. Don't be overly quick to judge things unless you know how they got that way - sometimes perfectly reasonable decisions live FAR beyond their original impact, to the point where they seem to make no sense. Understanding how a thing got the way it is may give you clues as to how to go about changing it.

Tangent: Is the lack of testing in this product directly affecting you? Is it affecting your customers? Or do you just want to see everything around you done to the best it can be? If it's affecting you, make sure your boss is aware. If it's affecting your customers, you may have to have an anonymous word with the FDA (Medical is NO place to screw around shipping shoddy product, depending on your exact device). If you just can't stand to see anything less than it could be - lead by example.

If you have to call the FDA, make sure it's a last resort situation - they have a LOT to do, and way too few people as it is.

Michael Kohne
Monday, April 26, 2004

"Beware the frumious emulator!"

While I agree that having an off-target environment to debug and test code on is a good thing, remember that it isn't the same as running on target.  I like to use an off-target environment for verifying things like state machines and algorithms where the outcome should be predictable and linear.  On target testing is usually reserved for the real-time aspects of the system.  Here you are trying to ensure that your software correctly maps to the behaviour of the hardware and that things you can't test in the OTE (processor loading for example) are working as desired.

Monday, April 26, 2004

Another approach is to fully document the API of the firmware, along with any expected side effects.  Take all your non-firmware code and ignore it.  Instead, write a suite of tests that exercises the firmware.  I've found I can write/debug 90% of a device driver without actually using the hardware.  If doing time-critical stuff then you can just dump stuff to a log and parse it later.

Of course, you need to ensure this test code *never* finds its way into the field. Google for "always mount a scratch monkey".

Monday, April 26, 2004
