Fog Creek Software
Discussion Board

Unit testing in C++

I'm both fortunate, and unfortunate, enough to work in games. Fortunate in that it's an incredibly easy-going industry, and hey, I get to write games! Unfortunate in that it's almost entirely populated by old-skool bedroom hackers who think things like encapsulation, unit testing and code reviews are beneath them.

Anyway, I want to start putting unit testing into at least the stuff I write, but I'm rather vague on the details of how to go about it. Rather than just stumble around blindly until I get it right, I wondered whether any of you knew of any guides or books on unit testing, and specifically on unit testing in C++?

Thanks in advance.

Mr Jack
Tuesday, October 08, 2002

A simple explanation and a sample unit testing framework can be found at:

http://www.cuj.com/articles/2000/0009/0009d/0009d.htm?topic=articles

Our build system enforces that all unit tests must be passed before the build will complete.
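
If you just want the flavour of what such a framework boils down to, here's a bare-bones sketch (my own hand-rolled version, not the one from the article -- just an assertion macro, a failure counter, and an exit code the build can check):

// Bare-bones test harness sketch: an assertion macro, a failure count,
// and a non-zero exit code so the build system can fail the build.
#include <iostream>

static int failures = 0;

#define CHECK(cond)                                                   \
    do {                                                              \
        if (!(cond)) {                                                \
            std::cout << __FILE__ << "(" << __LINE__ << "): FAILED: " \
                      << #cond << std::endl;                          \
            ++failures;                                               \
        }                                                             \
    } while (0)

// Example unit under test.
int clamp(int value, int lo, int hi)
{
    return value < lo ? lo : (value > hi ? hi : value);
}

int main()
{
    CHECK(clamp(5, 0, 10) == 5);
    CHECK(clamp(-3, 0, 10) == 0);
    CHECK(clamp(42, 0, 10) == 10);

    std::cout << (failures ? "TESTS FAILED" : "all tests passed") << std::endl;
    return failures;   // non-zero exit code == failed build
}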

Enjoy

Darin Buck
Tuesday, October 08, 2002

I suppose you know about the site http://www.xprogramming.com/ ; they have info about CppUnit and the associated documentation.
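
From memory, a CppUnit test fixture looks roughly like the sketch below -- do check the CppUnit cookbook for the exact headers and macros, this is just to give the shape of it (the class under test here is plain std::vector, purely for illustration):

// Rough shape of a CppUnit fixture and text runner (from memory --
// consult the CppUnit cookbook for the authoritative details).
#include <cppunit/extensions/HelperMacros.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>
#include <vector>

class VectorTest : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE(VectorTest);
    CPPUNIT_TEST(testPushBack);
    CPPUNIT_TEST_SUITE_END();

private:
    std::vector<int> v;

public:
    void setUp()                      // runs before each test method
    {
        v.clear();
    }

    void testPushBack()
    {
        v.push_back(7);
        CPPUNIT_ASSERT_EQUAL(std::vector<int>::size_type(1), v.size());
        CPPUNIT_ASSERT_EQUAL(7, v[0]);
    }
};

CPPUNIT_TEST_SUITE_REGISTRATION(VectorTest);

int main()
{
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
    return runner.run() ? 0 : 1;      // non-zero when any test fails
}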

Robert Chevallier
Tuesday, October 08, 2002

Also check out CppUnitLite, which seems to be available at http://www.twelve71.com/grot/ 

It was mentioned in the excellent article on unit testing dialog boxes http://www.objectmentor.com/resources/articles/TheHumbleDialogBox.pdf
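
The gist of that article, at least as I read it (the names here are made up, and the code is just a sketch of the pattern, not the author's own example): keep the dialog itself dumb behind an interface, and push all the logic into a "smart" object you can unit test with a fake view.

#include <cassert>
#include <sstream>
#include <string>

// The real dialog / HUD implements this, but stays too dumb to need testing.
struct IScoreView
{
    virtual ~IScoreView() {}
    virtual void showScore(const std::string& text) = 0;
};

// The smart object: all the logic lives here, behind a testable seam.
class ScorePresenter
{
public:
    explicit ScorePresenter(IScoreView& view) : m_view(view) {}

    void scoreChanged(int points)
    {
        std::ostringstream text;
        text << "Score: " << points;
        m_view.showScore(text.str());
    }

private:
    IScoreView& m_view;
};

// In the unit test, substitute a fake view that just records what it was told.
struct FakeScoreView : IScoreView
{
    std::string lastText;
    virtual void showScore(const std::string& text) { lastText = text; }
};

int main()
{
    FakeScoreView fake;
    ScorePresenter presenter(fake);
    presenter.scoreChanged(100);
    assert(fake.lastText == "Score: 100");
    return 0;
}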

IanRae
Tuesday, October 08, 2002

Thanks to you all, those have been most useful. Especially that 'Humble Dialog' one. Not that I use dialogs much, but it's similar in nature to the kind of problems you have in games.

Mr Jack
Tuesday, October 08, 2002

Excellent question and very useful answers.  I dabbled a little in unit testing in C++ not too long ago, and I too felt I could have been doing it in a much better manner.  (When a unit test takes just as long to write as the code it tests, I kinda wonder...)

Lucas Goodwin
Wednesday, October 09, 2002

Lucas -

Your point about tests taking as long to write as the code being tested, or longer, is not as weird as you may think.

I'm a QA Manager (actual "QA", so my scope of interest is the entire SDLC process, of which QC is a component; a pretty fair number of years behind me, and a pretty wide diversity of systems evaluated), and I have had this argument with unenlightened developers and project managers -- "how can it take longer to test something than to build it?" they would say to me, right before they blew off my estimates for the effort required to test something.

The answer is "Lots of reasons". One simple one is that there are normally more ways to do something wrong than right.

Somebody creating a system -- an author writing code, for example -- tends to focus on implementing the nominal flows, and tends not to consider the exception flows as fully. I think it's pretty much human nature, especially if you're having to do your own analysis on the fly at the keyboard while you code. Your head is just going to be more focused on making that bit of code return the outputs you need to pass off to the next bit of code you have to write than on identifying and handling all the screwball conditions that can arise and figuring out what the proper responses should be. And the project and schedule pressures tend to be more on getting the nominal behaviors identified and constructed than on getting all the exceptions identified and handled.

Personally, I think this phenomenon is kind of like the productivity "flow" phenomenon I've seen people cite here and elsewhere -- you get into a flow where you're productive, and you try not to break that flow. People posting here who cite the "flow" normally refer to getting a phone call in the middle of coding, for example. Well, I think a similar kind of mental interruption is operative here to an extent -- you're focusing on keeping in your head what the system has to do, and can't afford to think too far afield (at the same time, anyway) about what it's not supposed to do, or you'll lose track of what you have to accomplish.

Now, having seen code from a variety of developers (and having written my share of crappy code), I've observed that some exception conditions get handled automatically by experienced authors, because they tend to write defensively and follow good design principles -- not necessarily because they explicitly took time to analyze and identify every error condition first so they could code for it.

Testing, though, whether you're doing it on your own code or if it's a professional tester doing it on somebody else's code, has to consider all of the conditions -- nominal and error.

But before you can test for all the stuff your system should _not_ do, you have to identify all those exception conditions. Since they tend to be both more numerous and less attended to, the test designer normally has some additional analysis to complete first, and probably a larger number of tests to create and execute to cover the exception conditions and behaviors than to cover the nominal ones. The nominal ones have probably already been identified, and the analysis for them already done.

If some of the folks I've debated this issue with could count, they'd realize, for a simple example, that in a system functioning according to a combinatorial model, 4 binary variables give you 16 different logical conditions to consider (nominal, error, don't care, can't happen, and so on). Maybe only one or two of those conditions result in the nominal system behavior, and the others are various flavors of error conditions. My experience has been (for my own code as well) that the author will normally get most or all of the objective behaviors, but will often fail to discover some, or maybe even most, of the exception behaviors, as I described earlier. The distribution of the number of tests required for nominal and exception conditions generally seems to be proportional to the number of conditions themselves (though it depends on things like the number of "don't care" outcomes, etc.).
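
To make the counting concrete, here's a throwaway snippet (the four flags and the classification rule are invented purely for illustration):

// 4 binary inputs give 2^4 = 16 combinations; with a (hypothetical) rule
// that the nominal flow needs all four flags true, only 1 of the 16 is
// the happy path and the other 15 are exception conditions.
#include <iostream>

int main()
{
    int nominal = 0, exceptions = 0;
    for (int mask = 0; mask < 16; ++mask)      // every combination of 4 flags
    {
        bool loggedIn   = (mask & 1) != 0;
        bool validInput = (mask & 2) != 0;
        bool hasQuota   = (mask & 4) != 0;
        bool online     = (mask & 8) != 0;

        if (loggedIn && validInput && hasQuota && online)
            ++nominal;
        else
            ++exceptions;
    }
    std::cout << nominal << " nominal, " << exceptions
              << " exception conditions" << std::endl;   // 1 nominal, 15 exception
    return 0;
}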

So in general, for any given system, my observation has been that the folks building it tend to focus on what it has to do, while the folks testing it have to focus on what it has to do _and_ what it must _not_ do. The latter tends to involve the larger number of conditions and the larger number of tests, and also tends not to be as well defined by the time test writing has to start. Hence it's not that odd for it to take as much or even more effort to test something than to write it.

Of course, moving beyond unit testing for a second, there are other factors that can make a test effort as large as or even larger than a development effort -- but those are for a different post.

cheers,

anonQAguy
Friday, October 11, 2002
