Fog Creek Software
Discussion Board




Can "test cases" be used in all circumstances?

Extreme Programming teaches us that we should write "test cases" or "unit tests" first, and then write the code that needs to be tested.

After that, we could simply run the tests against the code and eliminate all the mistakes.

When we modify the code in the future, we can test it automatically and see whether anything breaks.
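
(For concreteness, this is roughly what such an automated test looks like in Java with JUnit; the Calculator class here is just a made-up stand-in for real code:)

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    // A trivial stand-in for "the code that needs to be tested".
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    @Test
    public void addSumsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }

    @Test
    public void addHandlesNegativeNumbers() {
        assertEquals(-1, new Calculator().add(2, -3));
    }
}

A test runner can re-run tests like these after every change, which is the "automatic" part.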


This sounds very good... in theory.

The problem is, there is a lot of code for which you can't write automated testing routines.


For example, there may be code that displays information for the user.

There may be code that takes some data from the registry, then computes something, and shows it to the user.

If I erase those parts of the registry to write in some test data, I can mess up my Windows OS quite badly.


Also, writing tests that cover all the ways a certain piece of code could go wrong can, in my opinion, take a lot longer than testing the resulting code by hand and fixing all the bugs.

For example, if I write a piece of code, run it, notice the failures, and then fix the 2-3 failures found, that may take X minutes.

However, writing another piece of code that runs the original piece of code and checks for the 30-40 ways it could go wrong may take 10*X minutes.

Also, to write automated tests, I have to make sure that the original piece of code is written in such a way that it's easy to test automatically!

If the original piece of code is very hard to write, adding an additional constraint ("write the code in such a way that it can be tested automatically") may make it almost impossible to write.


So, automated testing code seems to be worth it when:

- the program being developed must change continuously, so the huge time investment put into writing the tests can be recovered

- correctness of the code is very important, even if ensuring it comes at a high cost


Are the things written above true, or am I missing something?

George
Wednesday, August 06, 2003

The short answer to your question is "No, probably not".

The longer answer is that you need to decide whether your time is best spent writing tests or debugging code. You also have to work out whether writing tests helps you keep your code correct and whether it means that you deliver a higher-quality product to your clients/users. Only you can know which works best for you.

Also realise that you don't need 100% coverage. Test-driven development doesn't have to be a binary thing. You'll probably reach a point where the tests take too long to write, or too much code needs to be adjusted to make a piece of code testable. As soon as it stops working for you, you can stop. Stopping doesn't make the tests that have been useful any less useful to you. Do as much as you need to do.

In fact, I quite like "testable development" - getting to 1% coverage. Just getting to the point where you can test a class in isolation means you can test it if you need to. Often it doesn't take much to get to the 1% stage. You can write the tests when you really need to: can I write a test that exercises the buggy piece of code faster than I can run the app and step through it as a user to set the conditions just so? If so, write the test and debug whilst letting the test drive the code...

Much of the code that you think is untestable is testable if you spend a little time thinking about decoupling it from the rest of the code. Thinking about that tends to leave you with better code anyway, even before you start testing. Once you've done the decoupling, you're at the point where you can test if you want to, and you can then decide what is cost-effective to write tests for and what isn't. So far I haven't done much with GUI testing.
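
To make that concrete, here's a rough Java sketch of the sort of decoupling I mean (every name here is invented for illustration). The logic depends on a tiny interface rather than on the registry itself, so a test can feed it canned data:

// The seam: the code under test depends on this small interface,
// not on the Windows registry (or any other concrete store) directly.
interface SettingsSource {
    String get(String key);
}

// Test double: returns canned values instead of touching the real registry.
class FakeSettings implements SettingsSource {
    private final java.util.Map<String, String> values = new java.util.HashMap<String, String>();

    FakeSettings with(String key, String value) {
        values.put(key, value);
        return this;
    }

    public String get(String key) {
        return values.get(key);
    }
}

// The logic we actually want to test, now isolated from its environment.
class Greeter {
    private final SettingsSource settings;

    Greeter(SettingsSource settings) {
        this.settings = settings;
    }

    String greeting() {
        String name = settings.get("user.name");
        return (name == null) ? "Hello, stranger" : "Hello, " + name;
    }
}

A test then just checks new Greeter(new FakeSettings().with("user.name", "George")).greeting() without going anywhere near the real registry; production code passes in a registry-backed implementation instead.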

Anyway, I've blogged about my testing experiences here:

http://www.lenholgate.com/archives/cat_testing.html

And I'd recommend reading Bob Martin's new book, which explains the design advantages of testing first far better than I can. The Amazon link is a little long, so I'll link via my mention of it on my site:

http://www.lenholgate.com/archives/000134.html

Len Holgate (www.lenholgate.com)
Wednesday, August 06, 2003

George,

Some of your points are valid. As Len says, you don't have to have tests for every single function.

However, don't underestimate the power of having an environment where your code is continually tested; where you keep getting a return, for months or even years, on the time it took to write the tests.

Quick example: I wrote a fairly involved set of classes for my client. I spent a day writing a series of unit tests that check to make sure the classes are able to persist themselves to Oracle, retrieve the information, yada yada yada.

Over the past year or so, there have been at least two separate occasions where I made a seemingly simple change to the system that wound up causing my test cases to fail. Had it not been for NUnit and my unit tests, my changes probably would have been shipped out to production with the bugs intact. Spending that one day writing those unit tests saved me the great deal of time it would have taken to identify the bugs after the code went live.

Start with testing your major components first and I'll guarantee that before long you will be hooked on writing unit tests for any code that is reasonably complex.

Mark Hoffman
Wednesday, August 06, 2003

Well, I think you've missed quite a bit.

I think you misunderstand UnitTest, as that term is used in the TDD community.  That's not your fault really - the word choice was rather unfortunate.

Testing is not the goal of Test First.  Design is the goal of Test First.

So the notion that you'll have to write 30 or 40 tests for all those edge cases is really a non-starter.  What you normally write are tests for the three or four behaviors that you care about, plus tests to eliminate any bugs that you introduce along the way.  I would only expect Machiavellian coders to hit that 30-test mark (for a single unit).

The only times I come near that line are when I'm trying to retrofit UnitTests onto an existing code base. 

There's an important point there; don't miss it.  The work required when you write each test first is different from the work required to put tests on later.  When you implement features without maintaining tests as you go, you can take yourself down ratholes that take practice to get back out of.  Disciplined TDD tends not to go down those holes in the first place.

In other words, TDD from the start is much easier than TDD after you are already 100K lines of code "in the hole".

The economy of time is a concern, but TDD is again a change of habit.  10x effort for a test that you run but once would be excessive; 10x effort for a test that you run every 5 minutes is a significant savings over doing the same test manually.  Provided that your test offers some non-trivial return, you are going to make up the investment on volume.

Yes, there are things that cannot be done Test First: non-deterministic behavior is one.  There aren't very many others - if you can ask a computer to do it, you can almost always ask it to check what was done.

Your example of the registry is not one of them.  You either automate the registry [which should not be challenging], or you create a design that allows you to use some data source other than the registry in your testing.

Danil
Wednesday, August 06, 2003

WRT testing the registry: you may get excellent mileage out of mock objects. Depending on the language you're using and the flavor of your framework, they work in different ways. But generally you define an interface for the tasks you wish to accomplish (e.g., 'Registry.addKey()', 'Registry.deleteKey()', etc.) and then set up a mock object against your interface and fill it with expected results.

It sounds like a lot of work and it's kind of counterintuitive at first. But one of the great side effects is that you wind up creating components that can be easily mocked, which means they can be easily replaced by another implementation, glued to another library, etc. This makes for wonderfully loosely coupled development.
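
Here's roughly what that might look like as a hand-rolled mock in Java, with no framework at all (the Registry interface and SettingsSaver class are invented for the example):

import java.util.ArrayList;
import java.util.List;

// The interface your code talks to instead of the real Windows registry.
interface Registry {
    void addKey(String key, String value);
    void deleteKey(String key);
}

// Hand-rolled mock: records every call so the test can verify them afterwards.
class MockRegistry implements Registry {
    final List<String> calls = new ArrayList<String>();

    public void addKey(String key, String value) {
        calls.add("addKey(" + key + "=" + value + ")");
    }

    public void deleteKey(String key) {
        calls.add("deleteKey(" + key + ")");
    }
}

// Code under test: writes its settings through the interface.
class SettingsSaver {
    private final Registry registry;

    SettingsSaver(Registry registry) {
        this.registry = registry;
    }

    void save(String user) {
        registry.deleteKey("app/user");
        registry.addKey("app/user", user);
    }
}

A test can then call new SettingsSaver(mock).save("george") and assert that mock.calls contains exactly "deleteKey(app/user)" followed by "addKey(app/user=george)" - the real registry is never touched.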

And for many commonly used resources (like Java servlets) someone has probably already written a framework to take care of most of the mock setup work. (Those evil open source programmers, making your life easier!)

Chris Winters
Wednesday, August 06, 2003

You need to emulate things you can't directly control.  I write a lot of device drivers.  Twice I wrote drivers for chips that didn't yet exist (an Ethernet controller and a SCSI controller).  I spent a few weeks getting a good handle on how I thought the chip would work, and wrote code to emulate it.  Then I wrote my driver.  In both cases it worked very well.  Not to mention, when your alpha-release silicon comes in the door and things break, you're finding silicon or documentation bugs as often as code bugs.  It's pretty satisfying to get release notes that contain workarounds you came up with.

I always try to write a test driver to emulate the other parts of the system.  If I can emulate complex things like chips, I can emulate just about anything.

snotnose
Wednesday, August 06, 2003

George,

Your perception of unit testing seems to be something like "write a test for every bit of code, and hit it in all the ways that could possibly fail".

That's a pretty extreme position.  And not at all what unit testing and TDD are about.

Unit testing means writing enough test code to verify that the target code works correctly.  The general guideline is to add tests until you feel comfortable that a change to the tested code's behavior will break at least one test.  So, you are likely to have no tests for simple things like getters/setters; for many methods, you will likely have only a few tests; for a few methods, you'll wind up with several or even many tests.

TDD simply means writing the test case before writing the implementation.  This seems backwards, wrong and senseless.  But it works.  It works especially well in smart IDEs like Eclipse, which tell you on the fly what's wrong with your code.
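
To make that concrete, here's a made-up JUnit sketch (the Stack class is invented for the example). The test is written first and won't even compile until Stack exists; then you write just enough code to make it pass:

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class StackTest {

    // Written first, before Stack exists: the compile errors and the
    // failing test drive what you implement next.
    @Test
    public void newStackIsEmpty() {
        assertTrue(new Stack().isEmpty());
    }

    @Test
    public void pushThenPopReturnsTheValue() {
        Stack stack = new Stack();
        stack.push(42);
        assertEquals(42, stack.pop());
        assertTrue(stack.isEmpty());
    }
}

// Written second: just enough implementation to make the tests pass.
class Stack {
    private final java.util.Deque<Integer> items = new java.util.ArrayDeque<Integer>();

    boolean isEmpty() { return items.isEmpty(); }
    void push(int value) { items.push(value); }
    int pop() { return items.pop(); }
}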

Unit testing and TDD definitely require a different mindset.  But there is little that compares to knowing that you are free to make changes -- perhaps even radical experiments -- and that if you screw up, the tests are probably going to catch it.  Compare this to the standard "Oooh, I'd like to fix this, but I don't know what I might break... dang, better leave it alone".  Or handing something to QA that you have little confidence in because it's virtually untested.

-Thomas

Thomas
Wednesday, August 06, 2003

Check out this page: http://www.ammai.com/downloads/TDDEclipse_viewlet_swf.html for an intro to TDD using Eclipse.  Should give you a little bit of a feel for what it's like.

-Thomas

Thomas
Wednesday, August 06, 2003

TDD doesn't test code for failures; it tests code to move the design along. If you want to test failure cases, that is extra.

The order in which you do things doesn't matter. You know what you want to do, so write the code, tests, and docs all at the same time. It all stems from knowing your next move.

valraven
Wednesday, August 06, 2003

"The order in which you do things doesn't matter."

I disagree, for two reasons. First, if you write your tests first, it automatically scopes your work (and helps flesh out design issues). Second, once you've written the code, it's no longer a black-box test: you have the implementation in memory or at hand, and will tend to write the tests to mimic the code you just wrote.

Brad Wilson (dotnetguy.techieswithcats.com)
Wednesday, August 06, 2003

"I disagree, for two reasons."

Those may be good reasons for you, but I don't find them compelling, so I don't do it that way. I find my mind is capable of handling this cognitive load, and I often do mental simulations rather than using code as working memory.

I just want to point out to potential dogmatists that there's more than one way to do it. Spice to taste.

valraven
Wednesday, August 06, 2003

This "but we can't test everything" argument about unit testing keeps coming up.

So what if unit testing only catches 70% of your bugs - that's still 70% you don't have to find some other way.

The danger is in assuming your code is flawless because all the unit tests pass. I don't think even the most extreme EvolutionaryDesignThroughTestFirst zealots believe that.

Andrew Reid
Thursday, August 07, 2003

"I find my mind is capable of handling this cognitive load. I also often do mind simulations rather than use code as working memory."

I would've said both of those things, too, before I started doing TDD. Now I realize I was deluding myself, even though I was POSITIVE I wasn't (and believe me, I did a much better job of it than most others did).

I hope you're not deluding yourself, too.

Brad Wilson (dotnetguy.techieswithcats.com)
Thursday, August 07, 2003

No, I am not deluding myself. I am able to think through things and code at the same time. Perhaps you are deluding yourself?

valraven
Thursday, August 07, 2003

I think valraven is right, at least in this sense: if you know the test you are going to write, and the implementation you will use to satisfy the test, it really doesn't matter how you sequence those two tasks.

While I do intend to learn the discipline of test first, at the moment my practice is somewhat erratic.  I often find I have just written a bit of implementation without first having written the test.  I'm proceeding in small chunks, so there's no strain in keeping all the parts in my head at the same time.  The tests aren't difficult to write - because I already know what they are - so the reversed order doesn't carry any perceptible penalty.

On the other hand, much of the code I'm writing is in the middle of a vast expanse of legacy code - trying to craft tests to deal with that has been painful.  But it is definitely a separate problem.

The main benefit to creating the test first is that it helps to specify the interface to the implementation.  It's the lazy effort optimization at work - doing the simple thing in the client postpones the effort to the provider, and ensures that the interface is kept simple for other clients.

But if writing good interfaces is already a habit, I suspect that order isn't quite so important - though I have no evidence one way or the other.

As for the black-box nature of the tests, that appears to be a complete non-starter.  I base this on my observations of watching the TDD group attack problem exercises.  Even among the Names, you can see the details of the implementation when only the test has been written.  I don't bring this up to defend the practice, but to point out that ordering alone doesn't cure it.

Danil
Thursday, August 07, 2003
