Fog Creek Software
Discussion Board




Unit testing and project velocity

I first started doing unit testing about 7 months ago.  I've gotten into the habit of writing unit tests for almost everything I have coded.  In general, I really like the idea and the benefits that unit testing provides (decoupled design, knowledge that a change didn't break any other part of the system, etc).

However, over the past month or so I've done a few tasks without writing unit tests, and I've noticed how much faster I work that way.  I would estimate that I spend at least twice as long on a task if I write unit tests for it.

According to http://www.extremeprogramming.org/rules/testfirst.html -

"The combined time it takes to create a unit test and create some code to make it pass is about the same as just coding it up straight away."

This hasn't been my experience.  Has anyone else successfully added unit testing to their process without slowing down the project velocity?  If so, I'm curious to see if anyone has any suggestions on how to prevent unit testing from taking so much extra time.

Thanks!

T.S.
Thursday, February 27, 2003

If the code is straight up symbolic processing, the kinds of things people do with smalltalk, it's not so hard to write tests... just create a few input strings and compare them to the output strings.
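For code like that, such a test can be tiny. A sketch in Python's unittest (the titlecase function and its cases are hypothetical):

```python
import unittest

def titlecase(s):
    # Hypothetical symbolic-processing function under test:
    # capitalize the first letter of each word.
    return " ".join(word.capitalize() for word in s.split())

class TitlecaseTest(unittest.TestCase):
    # A few input strings compared against expected output strings.
    def test_simple_sentence(self):
        self.assertEqual(titlecase("hello world"), "Hello World")

    def test_empty_string(self):
        self.assertEqual(titlecase(""), "")
```

Run it with `python -m unittest` pointed at the file.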

The trouble is when you're testing a GUI, or a web page, or any kind of complicated system involving multiple moving parts. Then test driven development is just too expensive and impractical, and I wouldn't bother with it.

(That's what my essay is about in this month's STQE magazine. Sorry, not available online, and I don't think the printed version has come out yet).

Joel Spolsky
Thursday, February 27, 2003

The numbers you use for time depend heavily on what you are including in that time.  If you are talking about the first version of code to go out the door, writing it straight up is probably faster, maybe even significantly so.  The time savings will be most apparent, though, when you start factoring in the other things required to make the code production-worthy.

In manufacturing, each step has four time components: queue, setup, process, and wait.  Typing the code is analogous to the process step.  Writing unit tests first helps to minimize the others, especially as you factor in rework.

For example, it's probably faster to write ten unit tests at once than to catch ten bugs after the code is developed and do the associated debugging, because you only have to do the setup once in the first case, but ten times in the second.
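That amortization shows up directly in xUnit-style fixtures, where the setup is written once and shared by every test. A sketch (the RomanParser class here is hypothetical):

```python
import unittest

class RomanParser:
    # Hypothetical unit under test: parses a few Roman numerals.
    VALUES = {"I": 1, "V": 5, "X": 10}

    def parse(self, s):
        total, prev = 0, 0
        for ch in reversed(s):
            v = self.VALUES[ch]
            total += v if v >= prev else -v
            prev = v
        return total

class RomanParserTest(unittest.TestCase):
    def setUp(self):
        # The setup is written once and reused by every test below,
        # instead of being repeated for each debugging session.
        self.parser = RomanParser()

    def test_single_symbol(self):
        self.assertEqual(self.parser.parse("X"), 10)

    def test_subtractive_pair(self):
        self.assertEqual(self.parser.parse("IV"), 4)
```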

Lazy Coder
Thursday, February 27, 2003

My experience has been along the lines of T.S.'s. Writing unit tests takes more time, I don't care what the XP folks say.

However.... Having unit tests has saved my butt on many occasions. On a current project written in C#, I'm using NUnit to run my unit tests for each class. After modifying some code, it turned out I had introduced a bug via a "ripple effect". Because I routinely run NUnit on all of my unit tests, I caught the bug the same day and fixed the offending code.

Without NUnit, it might have been weeks before this glitch became obvious and it would have taken a lot more time to fix it.

Go Linux Go!
Thursday, February 27, 2003

Writing tests plus writing code will obviously take more time than just writing code.

However, writing tests + writing code will take less time than writing code + debugging code.

And then when you get into version 2, you get a whole new kind of speedup, since you're not afraid of breaking stuff on the way since you've got your tests.

You need to take the whole picture into account.

Chris Tavares
Thursday, February 27, 2003

I've gotten into the habit of writing a test module as an internal part of any application I'm developing. 

I usually use a command line switch to trigger it.

As I build functions, I add a test for each one to my application's test module. This really works well for me and has nipped many a potential problem in the bud.
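One way to sketch that pattern (the flag name and the functions here are hypothetical):

```python
import sys

def add(a, b):
    return a + b

def run_self_tests():
    # The internal test module: one check added per function as the
    # application grows, compiled into the app itself.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    print("self-tests passed")

def main(argv):
    # The command-line switch that triggers the tests.
    if "--self-test" in argv:
        run_self_tests()
    else:
        print("running application normally")

if __name__ == "__main__":
    main(sys.argv[1:])
```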

I hate, yes hate, overly formalised unit test procedures, they are not worth the time investment. Typically they are a big bang for a big buck, whereas my way is a large bang for a small buck.

Realist
Thursday, February 27, 2003

I've found that if you write a *really good* set of unit tests for your core set of data structures and algorithms, it will save you huge amounts of debugging time later on.

I'd bet it's of much more limited usefulness for stuff where the interaction is more open-ended.

flamebait sr.
Thursday, February 27, 2003

Also, don't lose track of the fact that the TDD crowd is using UnitTests as part of their design effort, so be sure to include your time sketching class hierarchies and so on.

Also, the payoff depends a lot on how much time is spent refactoring - the TDD crowd spend very little time adding code compared to the amount of time they spend cleaning things up.

Danil
Thursday, February 27, 2003

Joel Spolsky said:

"...just create a few input strings and compare them to the output strings."

I think this is a naive view of how this kind of testing is supposed to work.  Ideally, if you are using objects, you are not testing the result of calling a function.  Instead you are testing the dynamics of the object that a method is bound to.  This is done by creating a set of post conditions for each command (state changing method/subroutine) that makes some assertion about the value of every query (result returning method/function) visible on the object's public interface.  This will capture, and test, much of the dynamics of the object and ensure that the object's state changes as expected. 
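A sketch of that style in Python (the Counter class here is hypothetical): after a command runs, the test asserts its post-conditions through every query on the public interface.

```python
import unittest

class Counter:
    # Hypothetical object under test, with one command and two queries.
    def __init__(self):
        self._count = 0

    def increment(self):   # command: changes state
        self._count += 1

    def count(self):       # query: returns a result
        return self._count

    def is_zero(self):     # query
        return self._count == 0

class CounterDynamicsTest(unittest.TestCase):
    def test_increment_postconditions(self):
        c = Counter()
        old = c.count()
        c.increment()
        # Post-conditions of the command, asserted through every
        # query visible on the public interface:
        self.assertEqual(c.count(), old + 1)
        self.assertFalse(c.is_zero())
```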

Also, if you are careful about writing specifications, you can often translate them directly into tests, so there is the added comfort of having an automated compliance test against your specs.

This notion is really just an adaptation of the techniques of Design by Contract to an environment that doesn't support it directly.  If you study those techniques, you will see that writing good tests can be a powerful way to ensure quality and fidelity to your specifications under many circumstances.  It's also a great way to uncover leaks in your abstractions.

As far as using it for GUIs and the like, it probably isn't a reasonable approach yet.  But there's nothing wrong with testing the state model that the GUI is viewing.  That leaves untested only a small amount of rendering code and whatever crazy scripting code you've inserted into the rendition.

At the end of the day, the decision of what and whether to test should be based on the economics of the situation.  If the product is a one-shot that won't be supported, testing is probably overkill.  If you plan to evolve the thing over many releases and provide robust field support for those releases, it might be a very attractive risk hedge.  If tight compliance to your specs is a requirement, it might be worth investing the extra time to build a robust test suite that captures system semantics.  If the specs are kind of loose, then basic tests might be your best bet.

jim rigney
Friday, February 28, 2003

Regarding testing GUIs, I just noticed that someone has extended NUnit to work with ASP.NET. I haven't tried this yet, but it looks like an easy way to check the outputs of areas on the screen.

http://nunitasp.sourceforge.net/

Go Linux Go!
Friday, February 28, 2003

Joel Spolsky said:

"...just create a few input strings and compare them to the output strings."

Jim -- Sorry, but I think Joel gets to the essence of unit testing here in a useful way, more useful for most of us than the academic-gobbledygook-speak in your own explanation.  (Even though yours also may be true.)

The only thing I might change about Joel's is that you aren't just creating input and comparing output strings, but whatever datatypes are used by the object methods you're testing.  And he doesn't point out what specific cases you'd want to "create a few input strings" for.

But other than that, I can't see how what he's saying is much different from -- except for being clearer than -- what you say:

"This is done by creating a set of post conditions for each command (state changing method/subroutine) that makes some assertion about the value of every query (result returning method/function) visible on the object's public interface." 

Joel's description tells you how to test the "dynamics" of an object as well as yours, although perhaps not how to "capture the dynamics", whatever that means.

anonymous
Friday, February 28, 2003

Unit testing has never been the be-all and end-all of QA; it is simply a means to test the functionality of your code and make sure it does what you think it should do. It will save you time because you eliminate some of the potential bugs up front, instead of down the line in QA or, even worse, after release.

GUIs are powered the same way that command-line applications are; there is really no difference in testing the core functionality of your app by calling the classes that power the GUI, whatever that may be. This can at least ensure you that your core classes are working, especially when you make a change.

Personally I cannot write perfect code. If you can find some other way for me to ensure that my code is working as I intended I am all for it. I don't like to spend hours tracking down bugs because I changed something in one class and the effects rippled throughout my code. Does anyone have a better suggestion to lower the chance of bugs appearing?

Ian Stallings
Friday, February 28, 2003

I don't really agree that GUI development is just a "thin layer of pretty stuff" on top of functionality.

In my app, CityDesk, the functionality accounts for maybe 10% of the code; the other 90% consists of creating an elegant and usable user interface.

Example: we have a big pile of code that ensures that when you switch between WYSIWYG and Source/HTML view, the cursor or selection position is preserved.

This is fairly gnarly code to implement, but testing it with any kind of automated framework would have to happen at the GUI level. It's ALL gui.

So in CityDesk if we did extensive TDD or unit testing, it would only cover maybe 10-15% of the lines of code.

Joel Spolsky
Friday, February 28, 2003

As a follow up question, how do you decide how thorough a unit test should be?  Creating tests for every possible combination of inputs and every single error condition takes a very long time.  How do you decide where to draw the line?

T.S.
Friday, February 28, 2003

anonymous,

Thanks for the input.  I think you're right that I wasn't very clear; I probably shouldn't write stuff like that so late at night :)  I still stick with my assertion, though.  Pumping data into a function and testing the return values is only the most basic use of unit tests, and the amount of information it gives back is minimal.

What I was trying to say was that unit testing is a much more powerful tool than the naive view lets on, and I tried, apparently unsuccessfully, to illustrate the point by talking about post-conditions and all the associated "academic-gobbledygook-speak."  Sorry that I couldn't have been clearer.  I think I'll avoid taking another crack at it here, but maybe I'll post something a little more clear later.

The main message, though, was that the decision to develop unit tests should be made rationally, based on your best understanding of your situation.  Making blanket statements on either extreme is no substitute for understanding the constraints and requirements of your problem.

jim rigney
Friday, February 28, 2003

Joel beat me to it, but I agree that a GUI is more than just a thin layer.

In my current project, most of the code is wrapped up in classes and has a whole suite of unit tests run against it.

I then have an ASP.NET GUI that makes calls to the classes. While most of the logic is in the classes, there is still some "GUI logic" that can't be wrapped up in a class. When we find a bug, most of the time it's in the GUI logic, and this area is a whole lot harder to test.

Go Linux Go!
Friday, February 28, 2003

That's exactly why I said "This can at least ensure you that your core classes are working, especially when you make a change. "

I simply meant that you can test the underlying layer, as you already do.

Ian Stallings
Friday, February 28, 2003

"Example: we have a big pile of code that insures that when you switch between WYSIWYG and Source/HTML view, the cursor or selection position is preserved."

I think what you're telling us is that your state data is not sufficiently decoupled from "the GUI stuff" for it to be testable.  That's usually indicative of deeper design problems - which may, in fact, be foisted upon you by limitations or poor design of the GUI libraries you are using.

Your test for cursor position and selection should be trivial.  Your state for cursor position and selection should simply be invariant after your view change.  Oh, what's that?  The data to describe "the same" cursor position and selection is a function of viewing mode?  That's a design problem.
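A sketch of what such a trivial test might look like once the state is decoupled from the view (the Document model here is hypothetical):

```python
import unittest

class Document:
    # Hypothetical model: cursor and selection live in the model,
    # not in the view, so a view switch cannot disturb them.
    def __init__(self):
        self.cursor = 42
        self.selection = (10, 20)
        self.view_mode = "wysiwyg"

    def switch_view(self, mode):
        self.view_mode = mode  # rendering changes; editing state does not

class ViewSwitchTest(unittest.TestCase):
    def test_cursor_and_selection_invariant(self):
        doc = Document()
        before = (doc.cursor, doc.selection)
        doc.switch_view("source")
        # The invariant described above: position is preserved
        # across the view change.
        self.assertEqual((doc.cursor, doc.selection), before)
```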

This is a pretty universal correlation - difficult to test systems are often poorly designed systems.

Robert
Friday, February 28, 2003

Unit testing is no substitute for a good QA process and plain old common sense.

Ian Stallings
Friday, February 28, 2003

I agree on the granularity at which unit testing is helpful. But there is another option. In my current project I will be using an MVC layer which is itself unit tested. Then I'll use httpunit or jwebunit to automate the acceptance tests.

This use of httpunit or jwebunit would also work with any system, whether or not it was written in Java, I might add.

Scot (the first)
Friday, February 28, 2003

T.S.

This book is one of the best I've found at answering questions like the one you raise without getting into a lot of mathematics:

http://www.amazon.com/exec/obidos/asin/0471358460

The author has another book called "Lessons Learned in Software Testing" that is also very good, though it is probably targeting a different audience.

jim rigney
Friday, February 28, 2003
