Fog Creek Software
Discussion Board




Test Team Chaos

We're in a real pickle here.

A whole wave of testers has walked out to better-paid jobs because we don't appear to be paying market rates. D'oh.

We're supposed to have four testers on board, but we can't hire any new ones to make up the numbers because we haven't got the budget to offer a competitive salary.

This is hitting everyone hard. Every project is now late because it's stuck in a bottleneck, waiting for the sole remaining tester to pass it through QA, and that one person just can't cope with the strain.

Does anybody have a good idea for getting out of this? I suppose developers could test each other's projects (ones they haven't worked on) as a temporary measure, but not all developers make good testers...

Better Than Being Unemployed...
Wednesday, October 22, 2003

Hire two testers instead of three and split the last tester's salary among the other three to make the salaries more competitive. At least you get three good testers instead of the existing one and hopefully accelerate the hiring process by offering a reasonable salary.

Gerald
Wednesday, October 22, 2003

Quit!  =-)

Alyosha`
Wednesday, October 22, 2003

>Hire two testers instead of three and split the last tester's salary among the other three to make the salaries more competitive. At least you get three good testers instead of the existing one and hopefully accelerate the hiring process by offering a reasonable salary. <

But, what if the original tester ends up being paid less by comparison and finds out?

Homer
Wednesday, October 22, 2003

>But, what if the original tester ends up being paid less by comparison and finds out? <

I did say to split the one salary among the three other testers and thus it would include the original tester.

Gerald
Wednesday, October 22, 2003

Incidentally, do that and you've got at least one QA guy for life. A 33% raise out of the blue? He'll be happier than a pig in ... well, you know what I mean.

Philo

Philo
Wednesday, October 22, 2003

Hire six guys offshore and keep the remaining tester as the lead tester/manager/liaison with the offshore guys.

pdq
Wednesday, October 22, 2003

Since you appear to have excess development capacity, you could either pick the developer who's the best at testing and turn him/her into a tester, or lay a developer off and put the money towards testing resources.

pragmatist
Wednesday, October 22, 2003

Take all the testing money and divide it up amongst the programmers. Tell them that since they're getting paid more, they should produce fewer bugs.  :)

m
Wednesday, October 22, 2003

Do what Netscape did: just upload the latest build to the web site and let the users do the testing? :-)

Frederic Faure
Wednesday, October 22, 2003

This sounds like a good time to introduce automated test-driven development:

http://c2.com/cgi/wiki?TestDrivenDevelopment

Briefly:  Before you write new project code, write a bit of code to test that code.  Keep all of these tests in a central repository, and make sure everyone runs them as new code is developed.  Running the tests can be part of the "make" process.  As a result, you'll be much less reliant on testers.
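
For example, here's a minimal test-first sketch using Python's standard unittest module (the parse_price function and its behaviour are invented purely for illustration, not taken from anyone's project):

import unittest

def parse_price(text):
    # Hypothetical function under test: turn "$1,250.00" into 1250.0.
    # In test-first style, the two tests below were written before this body.
    return float(text.replace("$", "").replace(",", ""))

class ParsePriceTests(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_currency_symbol_and_commas(self):
        self.assertEqual(parse_price("$1,250.00"), 1250.0)

if __name__ == "__main__":
    unittest.main()   # hook this into the build so every build runs the tests

Hooking that last line into the "make" process (or a nightly build) is what keeps the tests from rotting.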

The Pedant, Brent P. Newhall
Wednesday, October 22, 2003

Brent, automated tests (even very good automated tests) traditionally find about 15% of the (discovered) bugs in a piece of software.

Writing automated unit tests is a great practice and should be encouraged. But believing that doing so means you need fewer dedicated testers is almost certainly wrong.

Give software that has passed all of the automated tests to a good test team, and they'll still rip it to shreds.

Mike Treit
Wednesday, October 22, 2003

"...we haven't got the budget to offer them a competitive salary."

Look at it this way: Is there room in the budget for a competitive product?

Caveat
Wednesday, October 22, 2003

>>But, what if the original tester ends up being paid less by comparison and finds out? <

I did say to split the one salary among the three other testers and thus it would include the original tester. <

Oops, my bad. My brain must have been stuck in neutral.

Still, Philo brings up a good point. Now for another question: is that original tester really worth the 33% increase in pay?

Good points made about test-driven development. But is there time to refactor the code so you end up with (relatively) simple test methods?

Also, does anyone know of any open-source utilities that support automated testing?

Homer
Wednesday, October 22, 2003

Incidentally, it bothers me that "give the existing QA guy a raise" wasn't mentioned in the original post (mind you, hopefully they did and it just wasn't topical).

Has anyone ever seen a situation where a chunk of staff quit and the remaining staff are given a raise in consideration for picking up the slack? I have NEVER heard of it.

Philo

Philo
Wednesday, October 22, 2003

>>Has anyone ever seen a situation where a chunk of staff quit and the remaining staff are given a raise in consideration for picking up the slack?

Actually had it happen to me once - about a decade ago.

One of my team quit and my boss didn't plan on hiring a replacement (the amount of work coming in didn't require it). He convinced the home office to distribute half of the departing salary among the four of us remaining. It was a nice mid-year raise and kept us happy.

He was able to convince them to do this because the guy who quit left over the money (and made no effort to hide it).

RocketJeff
Wednesday, October 22, 2003

Thanks for the comments.

Er, to be frank, the remaining tester is the worst one of the original group. Which isn't really surprising, since all the good apples have worked out that they're worth more elsewhere. In my opinion, a good tester should know how to restore a PC to a clean state in order to regression-test from scratch, or manually apply patches to test hotfixes, etc., without having to be hand-held every single step of the way. Grrr.

I like the idea of hiring fewer people than we originally asked for, since I hope that just getting one good apple will spur the rest of the team on.

Anyway, I don't run the team, so I can only make suggestions, but I think if push comes to shove something will happen before the business tanks.

Better Than Being Unemployed...
Thursday, October 23, 2003

Mike:

I was about to post about TDD but Brent beat me to it.

Mike's comment has *NOT* been my experience.  Going back to "Writing Solid Code" (Steve Maguire), there are all kinds of ways to 'bake' better code so that there are fewer bugs for QA to find.

My advice is to find ways to bake quality into the development cycle that don't suck.  Turning your coders into traditional, manual testers, well ... sucks.

TDD, agile methods, pair programming -- remember, because QA is now the bottleneck, your programmers can spend time doing other things, like researching and experimenting with how to do quality themselves.

Of course, that reality will probably be ignored; it's not what your management wants.  What they want is a way to remove the bottleneck.

In that spirit, remember - salary is only part of the cost.  BENEFITS are where the real bucks are.  So going from four QA guys to three gives you the extra salary to pass around, PLUS the fourth guy's benefits converted to cash to pass around.

So, that's my advice:

-> Convert 4 QA guys to 3
-> Bake QA in at the programmer level

Good luck!

Matt H.
Thursday, October 23, 2003

Programmers do not usually enjoy the idea of doing QA.

In addition to everything that has been said, I'd advise getting to the root of the problem. When people start leaving in big bunches, it can mean problems with working conditions, salaries, or company attitude.

They probably tried to point something out before they left...

Just make sure the other "good apples" don't figure out the same thing for themselves.

Plumber
Thursday, October 23, 2003

Matt:

I don't disagree at all that good programming practices can reduce the number of bugs!  Of course that's the case, and those practices should absolutely be emphasized.

The point I was making is simply that automated testing is not a panacea that reduces your need for competent testers. The reality is that automated testing finds a minority of the bugs. 15% is a number that is often thrown around (various studies have shown it to be a reasonable average), but I have worked on projects with lots of good automation where our post-mortem found that only about 1% of the bugs uncovered in the project came from automation.

The reason isn't that the automation was bad, but that our test team was very good and knew how to hunt for bugs.

BTW, Chapter 5, "Automated Testing", from the superb book "Lessons Learned in Software Testing" talks about this and other related topics - highly recommended for anyone who is interested in testing best practices.

You can implement near-perfect test development practices, and your software will still have lots of bugs. That's just the reality of building complex software. A good test team will find those bugs - automated tests won't.

Developers don't like to hear that, but it's true.

Mike Treit
Thursday, October 23, 2003

Mike Treit: Can you please point to some of the studies you mention?  Your claims do not match my experience, and I'd like to see some others' real-life numbers.

The Pedant, Brent P. Newhall
Thursday, October 23, 2003

I've seen the 15% number cited in numerous places:

Lessons Learned in Software Testing - p. 101 (Kaner, Bach & Pettichord)

Software Test Automation  - p. 23 (Fewster & Graham)

It's also mentioned as coming from the non-online version of Bach's Test Automation Snake Oil - see the comment at http://www.testingcraft.com/regression-test-bugs.html

I seem to recall seeing similar numbers cited in other places as well, but I don't have the sources off the top of my head.

Based on my own experience, 15% is actually high - and I've worked on a number of large software projects.

Of course the number will vary widely between individual projects, so whether or not it really averages out to 15% is not so important. What's important is to realize that test automation is not a particularly good source for finding new bugs (exploratory testing, test development, code reviews, etc. are all much more effective).

The bigger value of good automation is to save you massive amounts of time in doing repetitive regression testing later in the product cycle.

Mike Treit
Thursday, October 23, 2003

Brent,

On our current project, automated unit tests are finding about 10% of the bugs that are found. And if our use of defect seeding is working right, they're finding about 5% of the bugs that actually exist. Where they do seem to excel is in finding regression bugs.

This isn't so surprising when you realise that developers and testers perform rather different testing tasks (a rough sketch follows the two lists below). For example:

Developers:
- From inside working out - focus on code
- Assertions - verify data flow and structures
- Debugger - verify code flow and data
- Unit testing - verify each function
- Integration testing - verify sub-systems
- System testing - verify functionality
- Regression tests - verify defects stay fixed

Testers:
- From outside working in - focus on features
- Scenarios - verify real-world situations
- Global tests - verify feasible inputs
- Regression tests - verify defects stay fixed
- Code coverage - testing untouched code
- Compatibility - with previous releases
- Looking for quirks and rough edges
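
To make that contrast concrete, here's a rough sketch in Python (the shopping-cart class and both tests are invented for illustration, not taken from any real project):

class Cart:
    # Toy example class, invented purely to illustrate the two testing styles.
    def __init__(self):
        self.items = []                 # list of (price, quantity) pairs

    def add(self, price, quantity=1):
        self.items.append((price, quantity))

    def remove_all(self):
        self.items = []

    def total(self):
        return sum(price * qty for price, qty in self.items)

# Developer-style unit test: inside out, one method at a time, known inputs.
def test_add_item_updates_total():
    cart = Cart()
    cart.add(10, quantity=3)
    assert cart.total() == 30

# Tester-style scenario: outside in, a real-world sequence that crosses features.
def test_customer_changes_mind_at_checkout():
    cart = Cart()
    cart.add(10, quantity=3)
    cart.remove_all()                   # customer empties the cart...
    cart.add(10)                        # ...then puts one item back
    assert cart.total() == 10           # quirks tend to hide in paths like this

if __name__ == "__main__":
    test_add_item_updates_total()
    test_customer_changes_mind_at_checkout()
    print("both tests passed")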

Mark
----
Author of "Comprehensive VB .NET Debugging"
http://www.apress.com/book/bookDisplay.html?bID=128

Mark Pearce
Thursday, October 23, 2003

Mark,

"On our current project, automated unit tests are finding about 10% of the bugs that are found. And if our use of defect seeding is working right, they're finding about 5% of the bugs that actually exist. Where they do seem to excel is in finding regression bugs."

I'd like to be nosey and find out more if you're agreeable ...

Are the unit tests for your codebase written mostly prior to component implementation - either as TFD or TDD (I differentiate between the two)?

Is more than one person involved in the combined process of interface development, unit test development and implementation / debugging for a component - either by pair-programming, code reviews, buddy-programming, handover or whatever?

How did you do defect seeding? I ask this as we are planning to do a form of testpoint seeding and I would like to find out more.

What was the sample size for the seeded defects?

Well, that's not much advice to ask for, is it? ;-)

Cheers,

Gerard

Gerard
Friday, October 24, 2003

Gerard,

Sorry about the delayed reply - life's rather busy at the moment.

>> Are the unit tests for your codebase written mostly prior to component implementation - either as TFD or TDD (I differentiate between the two)? <<

I don't mandate any particular method of writing unit tests - it's up to the individual developer - there are 4 developers on the team. I tend to use TDD where it's feasible, but I know that at least one of the other developers writes his unit tests after writing the code.

>> Is more than one person involved in the combined process of interface development, unit test development and implementation / debugging for a component - either by pair-programming, code reviews, buddy-programming, handover or whatever? <<

Non-critical code usually has only a single developer see it and test it before it goes to QA. All of the developers on our team are highly experienced and produce high-quality code, and so far we seem to get away with this.

Critical code is reviewed by several developers in a formal inspection process. We've found that these code inspections seem to find more bugs than any other technique, including QA.

>> How did you do defect seeding? <<

After the code has been unit-tested, and inspected where appropriate, a developer adds one deliberate bug for approximately every 50 lines of code.

This isn't as simple as it sounds, as one of the outputs of our product is automatically generated application code, unit tests, build scripts, and installation scripts. So some of our unit tests add deliberate defects to the generated app code and then run the generated unit tests. As you can imagine, this gets a little complicated sometimes!
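
In case the arithmetic behind that earlier "5% of the bugs that actually exist" figure isn't obvious: assuming the unit tests catch roughly the same fraction of seeded bugs as of real bugs, the seeded bugs let you estimate the real total. A rough sketch in Python, with invented numbers rather than our real ones:

seeded_total = 40     # deliberate bugs planted, roughly one per 50 LOC
seeded_found = 10     # seeded bugs the unit tests actually caught
real_found = 25       # genuine bugs the unit tests caught

# Assume the tests catch seeded and real bugs at about the same rate.
estimated_real_total = real_found * seeded_total / seeded_found      # 100 bugs
fraction_found = float(real_found) / estimated_real_total            # 0.25, i.e. 25%

print("Estimated real bugs: %d, of which the tests found %.0f%%"
      % (estimated_real_total, 100 * fraction_found))

Of course the estimate is only as good as the assumption that seeded bugs behave like real ones.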

Mark

Mark Pearce
Saturday, October 25, 2003

Mark,

Many thanks for your response.

I'm convinced it's feasible to raise the percentage of bugs found via unit testing by involving several pairs of eyes in unit test development, and by doing it up front with feedback on component interface design (which is what we do).

But having said that, I don't have hard evidence to back this up: only "I think this is a great idea and so it should work fine" - and I've been down that road with several other silver bullets in the past.

As an aside, I am increasingly struck by the importance of using acceptance tests as well as unit tests (which fortunately we are starting to bring on board). When we have developed our acceptance testing to the same level as our unit testing, I may be able to back this faith up with some concrete numbers...

... and should those numbers disappoint?

Well, roll over TDD and TFD - bring on TPW development.

:-)

Regards,

Gerard

Gerard
Sunday, October 26, 2003
