Fog Creek Software
Discussion Board




How many testers do you need?

How many testers (per programmer) do you need? Joel suggests one tester per two programmers. Do you think that is enough? Too many?

JaSikor
Tuesday, April 15, 2003


Depends on the project.

Software that is used in the health care industry may actually require more testers than developers. On the other hand, if you're writing an MP3 player, you can probably do with fewer testers.

Bruce
Tuesday, April 15, 2003

It depends on how good the programmers are. 

Some programmers try to thoroughly check their applications for bugs and do extra things like writing unit tests.  In that case, you probably won't need many testers, since the programmers are good enough to catch many of the bugs already.

On the other hand, other programmers just throw their code over the wall to users, testers, management, marketing, etc., and figure that if someone finds a problem, they'll just fix it then.  That's when you'll need several testers.  It may be better to reprimand that mentality and let such programmers go if they continue to write code that gets fixed only when someone breaks it.

HeyMacarana
Tuesday, April 15, 2003


I dunno. I tend to disagree, respectfully, with the previous opinion.

All programmers make errors. What's ironic is that with better programmers, you need better testers. Why? Because the errors that remain are usually a lot more difficult to find.

Also, unit tests are NOT a substitute for acceptance tests. Testers should be focusing on functional errors. That is, errors in implementing the requirements. They should NOT be focusing on execution errors. Unit tests are primarily about catching execution errors.
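
As a rough illustration of the difference (just a sketch; the discount function and the requirement are made up):

import unittest

def apply_discount(total, percent):
    # made-up function under test
    return round(total * (1 - percent / 100.0), 2)

class UnitTests(unittest.TestCase):
    # execution-level check: does the code hold up for an edge-case input?
    def test_zero_percent_leaves_total_unchanged(self):
        self.assertEqual(apply_discount(10.00, 0), 10.00)

class AcceptanceTests(unittest.TestCase):
    # requirement-level check: the (invented) rule "orders over $100 get 10% off"
    def test_large_order_gets_ten_percent_off(self):
        self.assertEqual(apply_discount(200.00, 10), 180.00)

if __name__ == "__main__":
    unittest.main()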

Bruce
Tuesday, April 15, 2003

The ideal would be for your customer to write your automated acceptance tests and your programmers to write automated unit tests.  In that case, you may need very few testers, or none at all.

Failing that, you'd still want programmers to be writing automated unit tests, which means you need testers only for acceptance tests, so you still would need comparatively few.

If you're stuck somewhere that tests everything manually, you'll need a heck of a lot.  Exactly how many depends on how thorough your test base is.

Brent P. Newhall
Tuesday, April 15, 2003

Testing is not a thing.  It is not something that can simply "be done", and therefore a number becomes meaningless fairly fast.  Two testers may be great when changes trickle in, but terribly inadequate when major projects are underway.

That being said, a testing team should be composed of two parts (or three, depending on how you count).  First is regression testing: making certain that things which worked before are not broken.  It also needs to be comprehensive.  Action A should produce result X in every case.  When a change means that Action A now produces "Y", all regression cases concerning both "A" and "Y" need to be updated.

This is a time-consuming task.  It takes an experienced Subject Matter Expert, and it requires a tool.  I see too many clients attempting to test with just testers; they spend weeks on hundreds or thousands of regression tests.  Regression testing should be automated.  The job of the "regression testers" is to review the output, log the issues found, and, when necessary, update the tests.
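
The skeleton of what I mean, as a sketch (the action runner and the cases here are invented; in practice the cases come from the SME and the tool):

# minimal regression-harness sketch: run each recorded action, compare the
# result against what it produced before, and log anything that differs
def run_action(name):
    # stand-in for driving the real system
    return "X" if name == "A" else "?"

regression_cases = [
    # (description, action name, expected result), maintained by the SME
    ("Action A produces X", "A", "X"),
]

def run_regression(cases, log_path="regression.log"):
    failures = 0
    with open(log_path, "w") as log:
        for description, action, expected in cases:
            actual = run_action(action)
            if actual != expected:
                failures += 1
                log.write("FAIL: %s (expected %r, got %r)\n"
                          % (description, expected, actual))
    return failures

print("%d regression failure(s)" % run_regression(regression_cases))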

Testers should be used to focus effort.  They should be testing new functionality, evaluating results, matching them to requirements, and identifying cases to add to the regression tests.  They are focused on "this" project.

This simplifies the effort, but it does require someone who owns the regression-testing job and understands the system, the client, and the tool.  A good one is not going to be cheap, because it usually requires hiring someone with industry experience and making them a tester.

Mike Gamerland
Tuesday, April 15, 2003

Mike, good post, but why should the testers have to review the output of automated tests?  As I see it, if a test succeeds, there should be no output (who needs to be alerted that something still works?).  If a test fails in any way (even a minor discrepancy), it should report a failure condition that sets off flashing red lights and loud sirens.

IMO, if a test's output needs to be reviewed by a human, the test should be better automated.
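
Something this bare-bones, in other words (a made-up example):

def add(a, b):
    # stand-in for the real code under test
    return a + b

def check(condition, message):
    # a failure raises immediately (the flashing lights and sirens);
    # a passing check prints nothing at all
    if not condition:
        raise AssertionError(message)

check(add(2, 2) == 4, "add(2, 2) should be 4")
check(add(-1, 1) == 0, "add(-1, 1) should be 0")
# no news is good news: reaching the end silently means everything passed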

Brent P. Newhall
Tuesday, April 15, 2003

If there is no output when a test succeeds, how do you know for sure that it really succeeded? How do you know it even ran? What if it bombed in some unexpected way halfway through without logging anything?

One of the main goals of testing is to remove uncertainty. A "successful" run of a test that doesn't leave any proof of its success only adds to uncertainty.

A tester
Tuesday, April 15, 2003

A tester asked, "If there is no output when a test succeeds, how do you know for sure that it really succeeded? How do you know it even ran? What if it bombed in some unexpected way halfway through without logging anything?"

Okay, then make the test script print out "Done" after it's run all the tests.  Same difference.

Personally, I know that if my automated test script *does* bomb out, I'll get some kind of message (such as "Core dumped").  But IMHO I'd rather do the simplest thing that could possibly work and have a simple test suite I can interpret instantly ("no message means success") rather than output I have to pore over.

On the other hand, my current automated testing suite is web-based, so my version of running the test script is loading a webpage, which displays a big list of colored function names.  Green ones have passed the tests, while red ones have failed.

On the gripping hand, when I've used text-based test scripts in the past, they had no log messages; they would just output an error if a function failed.  It worked well.

Brent P. Newhall
Wednesday, April 16, 2003

The reason an automated test needs to show output, even in success cases, is that there can be bugs in the test itself that don't cause it to "bomb out". I can't tell you the number of times I've seen automated tests that failed in some unexpected way yet claimed they had passed.

MarkF
Wednesday, April 16, 2003

In that case, in my opinion, the test writers *need* to become better at writing automated tests.

I'm in the same class; my automated tests don't cover as many cases as I'd like them to.  But if your test writers are literally writing innumerable test cases that don't work fully, that suggests they're not very good at writing test cases that fully cover the expected behavior of the application.

Hmmm.  Either that or the test framework needs to be improved; perhaps greater automation of "obvious" tests such as "numeric input fields must only accept numbers"?
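
Something data-driven, say (accepts() here is a made-up stand-in for however you drive the real form):

def accepts(field, value):
    # made-up stand-in for typing the value into the named field in the real UI
    return value.isdigit()

numeric_fields = ["quantity", "zip_code", "age"]

for field in numeric_fields:
    assert accepts(field, "123"), field + " should accept digits"
    assert not accepts(field, "12a"), field + " should reject non-digits"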

Brent P. Newhall
Thursday, April 17, 2003

No matter how accomplished a test writer is, automated tests are as susceptible to bugs as any other piece of software. If your scripts don't produce output every time they run, and you don't analyze it, you may never know about that subtle bug until your customer sees it.

MarkF
Thursday, April 17, 2003

I see your point, but I also see a lot of conditionals in your statements, Mark.

I'll put it this way:  Please provide some specific examples in which studying the output of automated tests found a bug that would not otherwise have been found.

Brent P. Newhall
Friday, April 18, 2003

Personally, I think that the best option is to have both output and no output at the same time!

To be more precise: your validation system runs tests, each of which produces a specific set of output; then, after each test is run, your system diffs (or otherwise compares) the output with the expected output.

If the output and the expected output match, don't print anything to the terminal (or report, or whatever). If they differ, print the differences, or just a notice, or whatever you want.

This way, every test has output that lets your testers (a) see exactly what is being tested and (b) more easily track down what specifically is wrong, while at the same time it's easy to see at a glance whether your program passes its validations.
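
In rough terms (a sketch; the file layout and the run_test() stand-in are invented):

import difflib, glob, os

def run_test(name):
    # stand-in: in reality this drives the program under test and captures its output
    return "expected output for " + name + "\n"

def run_suite(test_dir="tests"):
    failed = 0
    for expected_path in glob.glob(os.path.join(test_dir, "*.expected")):
        name = os.path.basename(expected_path)[:-len(".expected")]
        expected = open(expected_path).read().splitlines()
        actual = run_test(name).splitlines()
        if actual != expected:
            failed += 1
            print("FAILED: " + name)
            for line in difflib.unified_diff(expected, actual, lineterm=""):
                print(line)      # show only the differences
        # a matching test prints nothing at all
    return failed

run_suite()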

Steven C.
Friday, April 18, 2003

I just had one yesterday. A test was written for an API that renames items in a collection. The API is not supposed to allow duplicate names, and when the test ran, the API returned an error code saying that it couldn't rename the item in question because the new name would conflict with an existing name. However, the person who wrote the test neglected to actually look up the items afterwards and verify that what the API was reporting was, in fact, true. As it turned out, the API did rename the item even though it reported that it didn't.

The test, as written, claimed to have passed when it had actually failed, because of an oversight by the developer of the test. Admittedly, these were just BVT tests written by the devs, not complete test-every-possible-case tests, but I still think it illustrates my point.

No one is perfect, and no one makes perfect assumptions; thus, no one can write perfect code, even if it's test code.
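
Roughly, the missing step was something like this (the collection API is paraphrased, not the real one; the fake objects are just so the sketch stands on its own):

ERROR_DUPLICATE_NAME = -1   # placeholder for the real API's error code

def test_rename_rejects_duplicate(collection):
    # the API must refuse a rename that would duplicate an existing name
    result = collection.rename("item1", "item2")   # "item2" already exists
    assert result == ERROR_DUPLICATE_NAME

    # the step the original test skipped: confirm the error code tells the truth
    names = [item.name for item in collection.items()]
    assert "item1" in names, "item1 should still have its old name"
    assert names.count("item2") == 1, "there should still be only one item2"

class FakeItem:
    def __init__(self, name):
        self.name = name

class FakeCollection:
    # tiny in-memory stand-in so the sketch runs on its own
    def __init__(self, names):
        self._items = [FakeItem(n) for n in names]
    def items(self):
        return list(self._items)
    def rename(self, old, new):
        if any(i.name == new for i in self._items):
            return ERROR_DUPLICATE_NAME   # refuse and leave the items alone
        for i in self._items:
            if i.name == old:
                i.name = new
        return 0

test_rename_rejects_duplicate(FakeCollection(["item1", "item2"]))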

MarkF
Friday, April 18, 2003
