Fog Creek Software
Discussion Board

Testers...  Who needs 'em anyway?

Just trying to draw attention to myself.  Did it work?

But the real questions:

What do your testers do? 
. Do they do functional testing only? 
. Do they write test harnesses and automation?
. Do they understand the code base, or not?  If so, to what degree?
. If your product is based on standards, do they test against standards and RFCs?
. To what extent do they test outside "black box" testing?
. If it's client/server, and they are testing the client, do they reinvent a server strictly for test purposes, or do they start with the product?  How about vice versa when testing the server?

Any other thoughts on testers?  I look forward to the interesting comments that this place generates!

Nat Ersoz
Friday, June 14, 2002


>>> What do your testers do?
>>>. Do they do functional testing only?

No.  Mostly it is functional testing, but also stability (e.g., let it run over the weekend and see what happens) and performance tests.

>>>. Do they write test harnesses and automation?

Yes, they may.  For any system that interacts with an external system, there may be a simulator of that external system.  Testers may write these or developers may (or both).
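
(As a rough illustration of what such a simulator can look like: the sketch below is a tiny canned-response HTTP server standing in for the external system, so the client under test never needs the real back end. Every name in it - the paths, the port, the payloads - is hypothetical, not anyone's actual simulator.)

    # Hypothetical sketch: a fake "external system" that answers the client
    # under test with canned JSON responses.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED = {
        "/status":    {"state": "ok"},
        "/orders/42": {"id": 42, "qty": 3},
    }

    class FakeBackend(BaseHTTPRequestHandler):
        def do_GET(self):
            body = CANNED.get(self.path)
            self.send_response(200 if body is not None else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), FakeBackend).serve_forever()

Either testers or developers can grow a stub like this into a fuller simulator (error injection, slow responses, malformed replies) as the tests demand.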

>>>. Do they understand the code base, or not? If so, to what degree?

Not to any great extent.

mackinac
Friday, June 14, 2002

One place I worked that had testers didn't turn out too well (IMHO).  The majority of the programmers eventually embraced the belief that they weren't responsible for testing of any kind, including the unit and system levels.  A clean compile was soon good enough to hand to the testers so they could "do their job".  Needless to say, the "defects" went sky-high afterwards.

Joe AA.
Friday, June 14, 2002

I've mostly worked at small companies, so very rarely has it been someone's job to just "test" software.  Instead, we usually give our software to our marketing department and/or managers and see if they can use it successfully.  It's funny, because all of us would assume we had it perfect, and we'd do all kinds of outlandish tests, and the marketing guys and bosses (non-technical) would always seem to break it.  :-)

Vincent Marquez
Friday, June 14, 2002


Search for "The Black Team". One could learn a thing or two.

Leonardo Herrera
Friday, June 14, 2002

The most interesting set-up I've seen was a relatively small company that had a great big room permanently full of casual uni-student testers. There was a whole different dynamic in the testing room, with their own schedules and management, and students coming and going all round the clock.

They were paid and rewarded to invent ways to break the applications and boy were they good. We developers would just get good little summary reports, meticulously documented. It was a great system and the company delivered very stable software for financial trading, at quite high prices.

The testers had no contact with the development effort or code base, and had no need for this either.

Hugh Wells
Friday, June 14, 2002

My company makes software for the CAD industry. I'm a technical writer, usually writing product documentation, but sometimes I write the test scripts. Our latest software is supposed to be released some time within the next month or two (our official launch date was originally scheduled for February, so we're quite a bit late).

Anyhow, we don't have any dedicated testing staff, so the results of our testing efforts are somewhat haphazard. For example, we just finished a round of testing in which the tests were performed by anyone who didn't have anything better to do. So the testing crew consisted of secretaries, graphic designers, HTML guys, etc. None of these people are experienced users of CAD software, and their understanding of this particular software is limited to a general understanding of its purpose rather than a detailed understanding of all its functionality. Consequently, they tend to miss a lot of bugs.

In many cases, these testers report a successful test whenever they complete their tests without an error message or a crash. A lot of the time, it looks like the software is functioning perfectly. But if you actually examine the data, there are subtle bugs all over the place. For example, when drawing a red line in a CAD file, it might appear with a color of "FF1111" when it should have been "FF0000".

Since our software manages a tremendous amount of CAD information in a database, errors like this are the most common, and the most troublesome. But they tend to go unnoticed by our testers.

As the person documenting this software, I'm more familiar with it than anyone else outside of the development team. Even though I've never looked at the codebase, I'm the best tester we've got because I understand all of the nooks and crannies of the system. So, during the last two weeks, I've been performing the tests alongside all the other testers, and I've reported about three times as many bugs as any other tester. Plus, my bug reports include all of the details about my operating environment, the complete sequence of events leading up to the bug, and speculations about what might have caused the problem.

In addition, whenever I encounter a bug, I retrace my steps five or ten times, slightly changing my actions each time, trying to isolate the exact behavior that causes the bug. None of our other testers have gone to this type of trouble. They are mainly concerned with getting through the test as quickly as possible so that they look more productive. Their bug reports tend to sound like "I tried to draw a line and it didn't work." Bug reports like this are pretty useless for developers.

Needless to say, I avidly support the idea of having dedicated testers. Dedicated testers know the ins and outs of a system. They remember what broke the system last time and they can keep checking on those issues until they're fixed. When they find a certain type of bug in one area of the system, they can go looking for that same type of bug throughout the rest of the system. They know which types of data validation tend to be forgotten by the coders. They learn how to write good bug reports, and how to speculate about what's wrong with the code. Looking at the codebase is, in my opinion, completely unnecessary. Testers just need to be very, very smart about the system and about software in general. Having graphic designers and secretaries doing the testing is almost as bad as having no testing at all.

Benji Smith
Saturday, June 15, 2002

Despite my support of having a team of dedicated testers, I could never be one myself. I personally find software testing incredibly boring. Nothing makes me want to stick my head in the chipper more than reporting the same bugs over and over and over again in every conceivable testing environment.

("...oddly enough, I found the same bug in Windows 95, 98, NT, 2000, and XP using three different user accounts and four different browsers. maybe its a data issue...")

Benji Smith
Saturday, June 15, 2002

In addition to being a big fan of dedicated testers, I'm also a big fan of automated testing.

I believe that testers should primarily be testing the interaction of the user with the program's GUI. Most other testing can be automated using many different clever means.

For example, I just finished writing an application that lets us perform automated tests on our server. Under normal functionality, there is a client program that sends HTTP requests to a particular URL with variables in the querystring. The server replies with an XML document containing the information requested by the client.

The testing platform that I've written lets the test designer (me) write a test in XML specifying which variables will be submitted for each request. Each request also includes a reference to a schema that's used to validate the server's response. You can string together as many of these tests as you like, in a particular order, to make sure that the server is accurately recording and reporting the information in the database. The server has an API with 192 different types of requests, and each API request can take up to 12 different variables as function arguments.

My tests are very easy to write, and we can perform a complete test of the server in less than ten minutes. If any of the tests fail, the testing software spits out a detailed bug report that can be examined later. The software also keeps a log of every test ever performed, which API calls were used in that test, which ones passed and which ones failed, and a descriptive error message for each failure.

It took me a couple of weeks to write the software to perform the tests, and another week or so to write the XML test scripts, but now we can run these tests hundreds of times, which will be very useful when we develop the next version of the server (that is, if we EVER get this version out the door).
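
(For readers who want the flavor of that kind of driver, here is a stripped-down sketch in Python. The endpoint, parameter names, test-file layout, and schema files are all invented for illustration; this is not the actual test platform described above.)

    # Hypothetical sketch of an XML-driven test runner: each <test> element
    # lists querystring variables to send and an XSD to validate the reply.
    import urllib.parse, urllib.request
    import xml.etree.ElementTree as ET
    from xmlschema import XMLSchema   # third-party: pip install xmlschema

    SERVER = "http://testserver.example.com/api"   # invented endpoint

    def run_suite(path):
        failures = []
        for test in ET.parse(path).getroot().findall("test"):
            params = {v.get("name"): v.get("value") for v in test.findall("var")}
            url = SERVER + "?" + urllib.parse.urlencode(params)
            reply = urllib.request.urlopen(url).read().decode("utf-8")
            if not XMLSchema(test.get("schema")).is_valid(reply):
                failures.append((test.get("name"), url))
        return failures

    if __name__ == "__main__":
        for name, url in run_suite("tests.xml"):
            print("FAILED:", name, url)

A test script for a runner like this would just be a file of entries along the lines of <test name="getOrder" schema="order.xsd"><var name="id" value="42"/></test>, executed in whatever order the suite lists them.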

Benji Smith
Saturday, June 15, 2002

Let me mention a scenario that DIDN'T work.  This might shed some light on tester motivations.

We had a team of ~20 coders, and two or three testers.  They messed with code coverage tools & the like.  At the first sign of company trouble, one left.  His reason was that he wasn't learning anything about coding.

Combine this with the current nasty job market.  A lot of bright programmers can't find jobs to learn from.  They can easily write test harnesses once the APIs are known.

They should be on the developers' mailing lists, so they can learn about the tools the developers use.  For example, in Java they would normally be writing JUnit tests.  But suppose a programmer starts experimenting with Jython.  Jython is perfect for writing these JUnit tests in.  So they get to learn a new language (Python), and get higher productivity as well.  That's worth a lot.
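
(To make that concrete: a JUnit 3 test case written in Jython can be as small as the sketch below. The class and what it exercises are made up; it assumes Jython with junit.jar on the classpath and would be run with junit.textui.TestRunner.)

    # Hypothetical Jython sketch: a JUnit 3 TestCase written in Python syntax.
    # Assumes Jython with junit.jar (JUnit 3.x) on the classpath.
    from junit.framework import TestCase
    from java.util import LinkedList

    class QueueTest(TestCase):
        def testFifoOrder(self):
            q = LinkedList()          # any Java class is usable directly
            q.add("first")
            q.add("second")
            self.assertEquals("first", q.removeFirst())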

gringo
Saturday, June 15, 2002

We have a fantastic QA department here - I will say that for SURE. I work with them as a developer to iron out the defects, and I welcome their input into finding things that I have missed!

That being said - sometimes when testing is implemented, defects *do* go sky high, because the project managers take the time that was ordinarily used for development and unit testing and schlep it over into formal QA testing. So the timeline never accounts for that extra time, and thus... the situation you described.

Just my experience :-)

Bevin Valentine
Wednesday, June 19, 2002



I'm a tester and I do a lot.  Functional tests are only part of what a tester should do.  I do write test harnesses; unfortunately, there's little automation where I am.  I do understand the code base to a general degree.  I do test against standards.  I'm an advocate of white box testing.

Unfortunately, programmers sometimes think testers are "beneath them" or too stupid to understand anything technical, which holds us back.  The more I know about a program, system, or code, the better I can write test cases.  I encourage ALL of you programmers out there, YES THIS MEANS YOU, to provide your QA testers with pseudocode or flowcharts of your program logic.  This will help the tester build better, more valuable test cases and reduce low-risk testing.

And for the guy who wouldn't be a tester if his life depended on it... I can't tell you how strongly I feel otherwise.  I am a career tester and I enjoy finding the bugs so much more than being the one who created them.  > ; )

Regards,
Jennifer Keys

Jennifer Keys
Friday, June 21, 2002

I am a tester at a company. Sometimes what happens is this: four people are testing, and two of them get the same error while the other two don't. That's just how it goes; we can't say that because they got an error, we should also get the same error at the same time. So we should be open to everything in our product, in black box and white box testing alike.

And I'd like some tips on becoming a master at testing.

aditya sachdev
Monday, March 17, 2003

I enjoy free-form testing since I used to do a lot of functional testing. I don't enjoy writing a lot of documentation because it takes away from my testing time.

warm regards,
zaki

zaki
Tuesday, May 25, 2004
