
Testing - Best Practices
I was recently hired by a small software company that has little to nothing in the way of established testing procedures. I'd like to turn the situation around as soon as possible.
Pro-XP developers favor so-called "test driven development": test cases are written first, as a way to define the expected behavior of a piece of software before it is written. For those of you who have experienced this, does it work in the wild? Do you ever run into the problem of test cases being so complex that they're bug-prone too? Does this promote test cases that are too simple to catch enough errors?
What are your opinions on JUnit and its ports in other languages?
How much of your testing is automated? How much has to be done "by hand"?
What sort of testing documentation do you find helpful, both in the pre-testing phase as well as for documenting completed tests?
How closely do your testers work with the developers? Do they have an intimate knowledge of the code being developed, or do they favor black box testing?
TestyInDenver
Sunday, October 5, 2003
"Of those of you who have experienced this, does this work in the wild?"
Yes. In fact, it works very well.
"Do you ever run into the problem of test cases being so complex that they're bug-prone too?"
I haven't yet. The point being, obviously, that the test case should be simple to verify as being correct. As soon as the test case is no longer simple to verify, then you need to refactor it into simpler tests.
"Does this promote test cases that are too simple to catch enough errors?"
You aren't looking for 100% coverage. You're looking for pretty good coverage, and when you find a bug that has slipped through, you back-fill to a unit test that indicates the bug before you fix it. (The general steps are: find the bug, isolate a simple unit test to reproduce it, write the unit test, run to verify it fails, fix bug, run to verify it succeeds.)
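The back-fill steps above can be sketched concretely. The thread's context is NUnit/C#, but the shape is the same in any xUnit port; here is a minimal sketch using Python's unittest (itself a JUnit port), where `parse_price` and its negative-price bug are invented purely for illustration:

```python
import unittest

# Hypothetical function under test; the negative-price bug and its fix
# are invented purely for illustration.
def parse_price(text):
    value = float(text)
    if value < 0:  # the "fix": before the bug report, this check was missing
        raise ValueError("price cannot be negative")
    return value

class ParsePriceRegressionTest(unittest.TestCase):
    def test_reported_bug_negative_price_rejected(self):
        # Written to fail before the fix, and to pass after it.
        with self.assertRaises(ValueError):
            parse_price("-3.50")

    def test_normal_price_still_parses(self):
        self.assertEqual(parse_price("3.50"), 3.5)
```

Run with `python -m unittest`; the regression test stays in the suite forever, so the same bug can't quietly come back.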
"What are your opinions on JUnit and its ports in other languages?"
We use NUnit, and love it. There are commercial testing frameworks with more features, which I would consider if it came to the point where we needed them.
"How much of your testing is automated?"
All of it.
"What sort of testing documentation do you find helpful, both in the pre-testing phase as well as for documenting completed tests?"
The tests are simple enough that either the code is self-documenting, or a couple of lines of in-code comments indicate the exact condition being tested. Intelligent method naming helps here, too.
"How closely do your testers work with the developers?"
Unit tests are always written and run by developers. Unit testing must succeed before the code can be handed off to functional testers.
Functional testers generally work more closely with the customer to ensure that the functionality matches customer expectations. Unit tests are too low level for this.
"Do they have an intimate knowledge of the code being developed, or do they favor black box testing?"
Unit tests are always black box (you write them first... how can they be anything else?). Functional testing is also black box, as it's application interaction. They understand the app, but not the code (necessarily).
Brad Wilson (dotnetguy.techieswithcats.com)
Sunday, October 5, 2003
Oh, and I'm in Denver as well. Peter Provost has mentioned the desire to start a Pragmatic Programmers group in the area, and some other bloggers have shown interest as well (Greg Reinacker, Gordon Weakliem are a couple names that come to mind). There are a few of the XP people who live here, like Lisa Crispin.
I think I'll re-rattle the trees on that one, see if we can't get something moving...
Brad Wilson (dotnetguy.techieswithcats.com)
Sunday, October 5, 2003
Well, we do XP as well. I was going to comment, but Brad's pretty much said all that needs to be said.
Currently, the test coverage from our unit tests runs about 80%. What's missing? Mostly tests on the user interface.
Brad, do you guys have tests for the interface as well, or do you consider that functional testing?
anon
Sunday, October 5, 2003
We consider UI testing to be functional testing, mostly because there are no really good UI testing libraries for ASP.NET yet. If something came along that was useful and automated, we'd probably use it (to test low-level stuff, like web controls).
Brad Wilson (dotnetguy.techieswithcats.com)
Sunday, October 5, 2003
What are your opinions on JUnit and its ports in other languages?
I write unit tests for most non-trivial classes.
In unit testing you can only write tests that prove your method or class works the way the test thinks it should.
How much of your testing is automated? How much has to be done "by hand"?
Load, stress, and unit testing are all automated, once the initial scripts are created.
What sort of testing documentation do you find helpful, both in the pre-testing phase as well as for documenting completed tests?
I copy other people's tests and change where needed.
How closely do your testers work with the developers? Do they have an intimate knowledge of the code being developed, or do they favor black box testing?
They do functional testing only. They have no clue as to the code behind a system.
Tom Vu
Sunday, October 5, 2003
Can someone recommend favorite books or articles that discuss the techniques outlined in this thread?
Right now, our unit testing is not as structured and rigorous as I'd like, and we are depending too much on functional testing to catch bugs in all layers of the app. I'd like to figure out how to get NUnit into the mix, but I'd like to "read up" on some best practices before I throw something else into our process.
Thanks!
Dave
Dave
Sunday, October 5, 2003
I'm just beginning to move over to the NUnit (and NAnt) systems, and I found these two articles helpful for understanding the overall process:
http://216.87.26.21/aspNUnit2WithVB.asp
http://www.ftponline.com/vsm/2002%5F12/online/smith/default_pf.aspx
--Josh
JWA
Sunday, October 5, 2003
Some points already raised, but my experiences regarding your questions -
"Do you ever run into the problem of test cases being so complex that they're bug-prone too?"
Nope. I believe that if that happens, the granularity of your methods and/or tests is too coarse. Try doing less in a single method, or test less in a single test. Also, how would you go about proving the validity of such a complex test in the first place?
"Does this promote test cases that are too simple to catch enough errors?"
Well, in a certain sense yes, but only in the beginning. It seems that most people (well, me :) tend to think in overly simplified terms about the functionality (this is where bugs come from anyway). Bugs are, by definition, special cases one didn't think of, so you won't be testing for these in the first place. This gets worked out of the system as bugs are found. As you write tests to cater for the bugs, the usefulness of the TDD setup grows.
"What are your opinions on JUnit and its ports in other languages?"
NUnit rocks.
"How much of your testing is automated? How much has to be done "by hand"?"
Automated = 100%; otherwise it is basically useless.
"How closely do your testers work with the developers?"
Unit tests are written by the developers. Our testers do functional testing, working the UI and so on.
A short note on the usefulness of TDD:
A lot of people seem to think that a test is sort of a copy of the source within a method, and as such, pretty useless. I've often heard people say things like "What is the point if I rewrite my method implementation in a unit test? That would just duplicate the bugs bla bla bla". These people should be shot, methinks ;)
They are missing the point. The point is that you should be writing verifiable tests, not rewrite the method as a test. If you know a certain set of parameters *should* return a certain thing, this is a test. If you calculate something on paper based on the problem, it becomes a test. If you expect an exception in certain conditions, make this a test.
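The three kinds of verifiable tests described above (known parameters, a value worked out on paper, an expected exception) might look like this. Python's unittest stands in for NUnit here; the `monthly_payment` function and its figures are invented for illustration:

```python
import unittest

# Hypothetical function under test, invented for illustration:
# a standard fixed-rate annuity payment.
def monthly_payment(principal, annual_rate, months):
    if months <= 0:
        raise ValueError("months must be positive")
    if annual_rate == 0:
        return principal / months
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

class MonthlyPaymentTest(unittest.TestCase):
    def test_known_parameters_return_known_value(self):
        # 1200 over 12 months at 0% interest is trivially 100/month.
        self.assertEqual(monthly_payment(1200, 0.0, 12), 100.0)

    def test_value_calculated_on_paper(self):
        # Worked by hand: 1000 * 0.01 / (1 - 1.01**-2) comes to about 507.51.
        self.assertAlmostEqual(monthly_payment(1000, 0.12, 2), 507.51, places=2)

    def test_expected_exception(self):
        with self.assertRaises(ValueError):
            monthly_payment(1000, 0.05, 0)
```

None of these tests restate the implementation; each one pins down a fact about the behavior that can be checked independently.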
When is this useful? Not really right at the beginning of a project. But every time you make a little change, refactor something, change something fundamental, do *whatever*, the tests help to ensure that everything is still going OK, and that no subtle dependencies are broken. The bigger your system gets, the more useful they become.
I am interested in something you mentioned: you say that they don't have a testing framework in place. I assume that you have a lot of legacy code to work with. Have you made any plans as to how you will go about adding tests to the code? XP proposes refactoring to "make it your own". And in my experience, adding tests to inherited code is a pain. So I'd like to know how your plans go re legacy code :)
Gerrie Swart
Monday, October 6, 2003
> When is this useful? Not really right at the beginning of a project.
Disagree. The tests drive the design. You get a much better design from being forced to make the code testable from the get-go. You are forced to decouple the code from the rest of the system so that it's testable; for me this means that I tend to end up with nice, cohesive, loosely coupled classes. In fact I'm starting to think that the last D in TDD should be design...
The fact that you end up with tests that are useful for later regression testing and refactoring is a convenient side effect ;)
I've been babbling on about my experiences of TDD on my blog recently: http://www.lenholgate.com/archives/cat_testing.html
Len Holgate (www.lenholgate.com)
Monday, October 6, 2003
Len,
Seems I was a bit unclear. I was trying to refer to the usefulness in refactoring etc. (I gathered from the original post that most work with the testing would be within an existing framework). As such I didn't really refer to the value of TDD when working on a virginal project, with design still evolving :).
I definitely agree that TDD leads to *much* better design. Testing from the beginning forces one to think in terms of the actual inputs, outputs, side effects, assumptions that went into it, and so forth. The test becomes a special, slightly-higher-than-the-implementation-level description of what is required of a module / class / method / whatever. Since it is interactive, tests act as a sort of "initial user" for your system.
So. Good point, and I agree. Sorry about clarity issues.
BTW (unrelated) on your blog, something else I agree with: "When it's late in the day and you get a hard thing to do, go home. You'll do it faster and better tomorrow." Definitely. Learnt this the hard way.
- Gerrie :)
Gerrie Swart
Monday, October 6, 2003
Gerrie,
Understand and agree with you. The tests are less useful at the start when you're trying to build them around existing code so that you can then refactor. After all, all they should do is pass ;) They also tend to be a lot harder to write because the code wasn't designed with testing in mind...
Re the "go home" comment; I wish it was easy to convince people about this. Then again, some teams/companies seem to thrive on those late night, emergency, fire-fighting, bug thrashing sessions; and I don't really want to work with them... ;)
Len Holgate (www.lenholgate.com)
Monday, October 6, 2003
Josh:
Thanks. I'll check these out.
Dave
Dave
Monday, October 6, 2003
"I haven't yet. The point being, obviously, that the test case should be simple to verify as being correct. As soon as the test case is no longer simple to verify, then you need to refactor it into simpler tests."
If this actually works, then why not do this for all your code and dispense with the tests?
somebody
Monday, October 6, 2003
"I haven't yet. The point being, obviously, that the test case should be simple to verify as being correct. As soon as the test case is no longer simple to verify, then you need to refactor it into simpler tests."
"If this actually works then why not do this for all your code and dispense with the tests?"
IMO, often you should. Tests add value to the extent that the behavior is simple relative to the implementation.
If I write a function for the logistic iteration, it's simpler to verify that the implementation is correct by inspection than it is to verify that any test of its behavior is correct by inspection. Tests waste time there. Simple implementation, complex behavior.
On the other hand, testing pays off when you have functions which have complex implementations but exhibit simple behaviors. As a simple example, think of lossless storage, compression, retrieval, and decompression. This can be a very complex process, but the behavior is extremely simple: you want exactly back out what you put in. In this case, testing pays off extremely well.
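That "complex implementation, simple behavior" case is easy to demonstrate. Here is a minimal sketch using Python's stdlib zlib as the stand-in compressor: the DEFLATE machinery inside is intricate, but the contract under test is just the identity function.

```python
import zlib

def roundtrip(data: bytes) -> bytes:
    # Compress, then decompress. The implementation (DEFLATE) is intricate;
    # the contract is simply that this is the identity function.
    return zlib.decompress(zlib.compress(data))

samples = [
    b"",                     # empty input, a classic edge case
    b"hello, world",         # ordinary text
    b"\x00" * 10_000,        # highly compressible
    bytes(range(256)) * 40,  # less compressible binary data
]

# One line of test per input: whatever goes in must come back out unchanged.
for sample in samples:
    assert roundtrip(sample) == sample
```

The test says nothing about how compression works, which is exactly why it stays simple while the implementation is free to be as clever as it likes.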
Identifying such trade-offs is the "art" of testing.
Robert
Monday, October 6, 2003
I strongly recommend the NUnit AddIn. Right-click to test. A real timesaver.
Thomas Eyde
Monday, October 6, 2003
"If this actually works then why not do this for all your code and dispense with the tests?"
The tests are what give you the freedom to refactor the code easily into its simpler components. Obviously. :)
Brad Wilson (dotnetguy.techieswithcats.com)
Monday, October 6, 2003
"The tests are what give you the freedom to refactor the code easily into its simpler components. Obviously. :)"
Oh, come on, that's complete baloney and you know it, if you ever actually tried it. Rather I think you are blindly chanting a mantra after reading too many books by Beck and/or Fowler.
If you have a function X with tests, that could be split into functions Y and Z, you either have to write new tests for the intermediate inputs and outputs, or you only test the combination of functions Y and Z together, without testing the intermediate inputs and outputs of the functions.
And if you've only tested the functions in aggregate, then what's to stop someone from using function Y or Z alone, which is by itself untested?
Nothing. Which is exactly why only functions that are re-used should be factored. You should NOT factor for the sake of giving names to every little sub-operation in a function. That leaves you wide open with untested mini-functions that people will be tempted to use, or it creates an unnecessary and significant maintenance burden for writing tests for all of those intermediate states that you haven't found a need for yet anyway.
Either way, gratuitous factoring is a loser. Factor on repetition and not before. And don't give me any phony lines about how many lines a function should be. Complete bunk.
Robert
Tuesday, October 7, 2003
"Oh, come on, that's complete baloney and you know it, if you ever actually tried it."
Well, I appreciate you trying to step in and tell people what I have and haven't done in my life. Thanks. Because, you know, what with me here having all these experiences, they must've been planted by an evil memory ray or something.
Brad Wilson (dotnetguy.techieswithcats.com)
Tuesday, October 7, 2003
> Nothing. Which is exactly why only functions that are re-used should be factored. You should NOT factor for the sake of giving names to every little sub-operation in a function. That leaves you wide-open with untested mini-functions that people will be tempted to use
Disagree.
Pulling a complex conditional out into a one-line function with a name that conveys the why of the conditional is always well worth doing, IMHO. Doing it before you realise there's duplication helps to stop duplication occurring; people WILL be tempted to use the functions you've created - that's a good thing... Often duplication doesn't look like duplication at the time - the complex conditional may be coded slightly differently but achieve the same ends...
Yup, you end up with lots of small functions, each does one thing. Since each does one thing they're easy to write tests for... Then again you'd only bother to write tests for them if you needed to. You don't need 100% coverage, you need just enough coverage.
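The named-conditional idea above might look like this (a hypothetical order/discount example, invented for illustration; Python stands in for the thread's C#):

```python
# Hypothetical order/discount example, invented for illustration.

# Before: the intent of the condition is buried in the expression.
def apply_discount_inline(order):
    if order["total"] > 100 and order["years_as_customer"] >= 2 and not order["on_sale"]:
        return order["total"] * 0.9
    return order["total"]

# After: the conditional becomes a one-line function whose name conveys
# the *why* of the test. It is also now trivially testable on its own.
def qualifies_for_loyalty_discount(order):
    return order["total"] > 100 and order["years_as_customer"] >= 2 and not order["on_sale"]

def apply_discount(order):
    if qualifies_for_loyalty_discount(order):
        return order["total"] * 0.9
    return order["total"]

order = {"total": 150.0, "years_as_customer": 3, "on_sale": False}
assert qualifies_for_loyalty_discount(order)
assert abs(apply_discount(order) - 135.0) < 1e-9
```

If another code path later needs the same rule, it calls `qualifies_for_loyalty_discount` instead of re-coding the conditional slightly differently, which is how the duplication gets headed off.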
Len Holgate (www.lenholgate.com)
Tuesday, October 7, 2003
"How much of your testing is automated?"
Virtually all of it. It's another one of those things that sounds like a massive waste of money... until a bunch of hotfixes start showing up.
Jason McCullough
Wednesday, October 8, 2003
There are only two places where this kind of testing is *obviously* the best bet:
1) You’re refactoring and need to make sure that the same inputs create the same outputs;
2) You’re checking for careless code that doesn’t do what you expected. This is the major use of JUnit testing and the reason it got a reputation for only being useful for “trivial” items.
Then there are the other situations, where what you’ve done is syntactically correct but, like the blue monkey that beneficially ate the stupendous truck, doesn’t actually make any sense.
It’s nice to get the details correct – in fact it’s essential – but don’t confuse correct syntax with correct code.
Celia Redmore
Sunday, October 12, 2003