Fog Creek Software
Discussion Board




Test Outsourcing-2

Another question: what are the possible problems I will face when outsourcing *testing*, and how do I deal with them, *if I can*? How will I be able to determine that the vendor I GOT is the vendor I WANT? Is there a checklist?

SS
Thursday, February 05, 2004

Don't use that firm you were thinking of. I'll do it for half the price and give you just as good service.

Useless Lazy Thief
Thursday, February 05, 2004

I'll halve the price that ULT promised, and will double the quality and speed!

.
Friday, February 06, 2004

I’ll do U n even beter deal. U choose me, as i am very skilled in this sort of thing.  I have plnty of exp. Just ask i will give U refereneces, just ask.

. Chick
Friday, February 06, 2004

if you have time to test your software, you have time to add new features....

FullNameRequired
Friday, February 06, 2004

I have experience with this sort of thing, so I have some advice for you:


NEVER choose the least expensive firm, in anything. This is dangerous.

If possible, choose a firm with proven testing experience.

Ask them what kinds of reports they will provide.

Give them about 30% of the money upfront, and the rest when they complete the testing.


If your product is small enough, it is best to outsource to an individual, and not to a company.

You will pay less for the individual, because an individual doesn't have to pay some taxes like the profit tax, etc.


When doing module testing or early alpha / beta testing, it is best to use only one tester. This way, the easy-to-find bugs won't be reported multiple times.

When doing final testing (release candidate stage), use multiple testers, and award an important bonus to the company or individual who finds the most bugs.

It is best not to let testers know about each other.

They must know that you are using multiple testers, but not know who their competition is.

When you award the "most bugs found" bonus, tell the winning tester that they got the bonus, and tell everyone else only that another tester got it.

Don't show any testers the bug list found by other testers.

If you have a pool of testers, get rid of the one who consistently finds the least bugs, and hire another one.

MX
Friday, February 06, 2004

"When doing final testing (release candidate stage), use multiple testers, and award an important bonus to the company or individual who finds the most bugs."

What an excellent idea! To take the idea even further, combine individuals and firms and see how the results are.

Is it ridiculous to create a cost per defect (weighted on severity) to evaluate the relative cost between the vendors?

For example, suppose I hire an individual who finds three high, five medium and twelve low severity bugs for a cost of $8000, while a firm finds six high, fourteen medium and twenty low severity bugs for a cost of $30000. With weights of 3, 2 and 1 for the severities high, medium and low respectively, you could then calculate the cost per bug as:

Ind: $8000 / ( (3*3) + (5*2) + (12*1) ) = $258.06
Firm: $30000 / ( (6*3) + (14*2) + (20*1) ) = $454.55


...or perhaps this is like counting lines of code?
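The weighted cost-per-defect idea above can be sketched as a small helper. The figures are the ones from the example; the function name and the 3/2/1 weights are just the proposal from this thread, not any standard:

```python
# Weighted cost per defect, as proposed above: severity weights 3/2/1
# for high/medium/low bugs. All names and numbers are illustrative.

SEVERITY_WEIGHTS = {"high": 3, "medium": 2, "low": 1}

def cost_per_weighted_defect(cost, bug_counts):
    """cost: total vendor fee; bug_counts: {severity: count}."""
    weighted = sum(SEVERITY_WEIGHTS[sev] * n for sev, n in bug_counts.items())
    return cost / weighted

individual = cost_per_weighted_defect(8000, {"high": 3, "medium": 5, "low": 12})
firm = cost_per_weighted_defect(30000, {"high": 6, "medium": 14, "low": 20})
print(f"Individual: ${individual:.2f} per weighted defect")  # $258.06
print(f"Firm:       ${firm:.2f} per weighted defect")        # $454.55
```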

m
Friday, February 06, 2004

What you are describing is a complicated process.

Let's say you use 3 testers. You will get 3 reports.

It will usually be obvious who is the best tester, and who is the worst.

Why have a formal process, and perform calculations, for something where intuition works better and faster?

MX
Friday, February 06, 2004

True, but having come from a large corporation, I know that those who pay and those who evaluate performance are not one and the same. I think it is critical that people grasp how their money is spent. I know developers are too optimistic, so you need to somehow marry developers' common sense to the business.

m
Friday, February 06, 2004


"...or perhaps this is like counting lines of code? "

It's definitely like counting lines of code.  If I have one tester who finds only one bug, but that bug happens to be that loading a file whose size is a multiple of 37 bytes crashes the computer, you may give that a high priority.  I mean, assuming a uniform distribution of file sizes, you can assume that one in 37 files will crash the computer.

On the other hand, I have another tester who finds 15 GUI inconsistencies in remote sections of the program, which could be considered low severity.  Maybe one in 100 users will see these inconsistencies.

By your math, the second tester was 5 times more successful than the first [(15x1) compared to (1x3)].  Maybe this is the case, but maybe it's not.  In the above example, there isn't enough context to determine which bug is more important, and it often won't be determined until after the product ships.

The better thing to do in this case is to recognize a tester's strengths.  Only if one tester's findings are largely a subset of another tester's (or two) should you consider abandoning them.  If two testers are mostly orthogonal to each other, then it's comparing apples to oranges.  Who's more valuable: a successful GUI tester, or a successful memory tester?

Elephant
Friday, February 06, 2004

If you're just counting on someone finding bugs by playing around with the program, you are asking for trouble. You need test plans, especially if you are outsourcing. I would ask the testers to write test plans for a section of the program, have the developers review the test plans for completeness and make sure the testers understand how the program should operate, and then give the test plan to a tester other than the one who wrote it.

By the way, expect the person who writes the test plan to find all the bugs while writing it, not the person who runs it. A good tester will write the test plan with the program open in front of them, trying things as they go along.

Another value of having a test plan is that you can re-run the test for regression purposes.
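pdq's point about re-running test plans for regression is the idea behind automated regression suites: a test plan captured as code can be replayed after every change. A minimal sketch, where `load_file` is a toy function standing in for the real program under test (all names and expected behaviors here are invented for illustration):

```python
# A test plan captured as code can be re-run for regression testing.
# `load_file` and its documented behavior are hypothetical examples.

def load_file(data: bytes) -> int:
    """Toy function under test: returns the number of complete lines."""
    return data.count(b"\n")

def test_plan():
    # Step 1: empty input should yield zero lines.
    assert load_file(b"") == 0
    # Step 2: a single newline-terminated line counts as one.
    assert load_file(b"hello\n") == 1
    # Step 3: an unterminated trailing line is not counted (documented quirk).
    assert load_file(b"a\nb") == 1
    print("test plan passed")

test_plan()
```

Each step doubles as documentation of how the program should operate, which is exactly what the developer review described above would check.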

pdq
Friday, February 06, 2004