Fog Creek Software
Discussion Board




Role of testers

It seems that the role of testers is often relegated to that of "earliest users".  This has been true, to some degree, everywhere I've worked.  Testers are often GUI or product users who exercise the product in a simulated deployment setting.

Often, specific tasks get overlooked in the test plan:
1. Scalability (load testing per the previous thread).
2. Standards conformance and interoperability.
3. Code and error condition coverage.

I bet there are more items to add to this "missed list".

So, the question is:  what role do your testers play?  Do they merely exercise the product as early users, or do they make a serious attempt at simulating error conditions and hostile environments which the product may face?

Another question: do you bring your testers into the design/review stages of development?  And if so, when?  Earliest possible?

Thanks,

Nat Ersoz
Monday, November 11, 2002

I prefer to think of "testing" as a skill that spans the continuum of software development.  Just as players of an interactive team sport (soccer, basketball, hockey, etc.) are responsible for both offense and defense, regardless of which position they play, "developers" have a responsibility to test and "testers" should know how to write code (or at least read it).

That said, external constraints on time and resources often result in the reduction and/or elimination of certain types of "specialized" testing such as performance (load) testing, stress testing, usability testing, etc.  According to my company's quality system, the "testers" are supposed to be included in all phases of development, from concept to deployment, but in practice this doesn't happen; the testers are always "behind the curve", testing software that the developers are ready to release to customers.  As a result, the testers frequently only do "happy path" testing, verifying that a typical user's workflow can proceed without error.

p. cuff
Monday, November 11, 2002

In the current economic circumstances testers have become a luxury few can afford.

Any good developer can test, that's a fact. It's too easy to make excuses for developers not testing their own software because they wrote it and are somehow blind to any deficiencies in their own code. This is nonsense.

If developers in our team deliver unreasonably buggy code they get body slammed by the rest of the team, and then ostracized for as long as it takes for them to 'understand'.

Alberto
Monday, November 11, 2002

"Any good developer can test, that's a fact."

This is a poor strategy on all but the smallest projects.

We know that the biggest problem is managing complexity, and that the fewer people working on a project, the more efficiently it proceeds. Thus, keeping the number of developers to the minimum possible for the required timeframe is an important project-management strategy. Having developers do testing means they do less development, so you need more developers. But since it's a nonlinear thing, you'll lose more than you save: you'll need more developers than you would save on testers.

Likewise, developers should not be taking out the trash, cleaning their offices, making coffee, or helping rid the CFO's computer of viruses after he downloaded some .exe attachments he found in an email that said 'Check this out - kewl!'

Sarain H.
Monday, November 11, 2002

Developer testing is a poor strategy? Do you really believe that you as a developer can abdicate the responsibility of delivering working code? And if you can't, how do you find out it works unless you test?

"We know that the biggest problem is managing complexity and that the fewer people that are working on a project, the more efficiently the project proceeds."

Ah, I get it. Testers are either non-persons, or they are not working on the same project.

Having a tester do work is not less complex than having a programmer do work. It might be more cost-efficient, especially for exploratory testing - but it does not change your project complexity.

Yes, any second your developers spend testing is a second they don't crank out code - but I prefer working with developers who write code that is actually working /before/ it reaches QA.

"Likewise, developers should not be taking out the trash, cleaning their offices, making coffee, or helping the rid his computer of viruses".

They should probably not take out the trash. Making coffee does not hurt, though - you might get some semblance of social interaction (which is helpful if you want to keep your team sane). It's not like operating a coffee machine is taking out huge chunks of development time.

As for the virus - by all means, the developer /should/ rid his computer of it, if he can easily do so. I'd rather have him click the 'quarantine' button and spend 5 minutes waiting for the AV program than have him wait for IT to show up after they've fixed 300 other machines.

Out of curiosity: What kind of projects led you to your opinions? Are we maybe defining /large/ in different ways?

Robert Blum
Monday, November 11, 2002

Take your blinkers off.

Developers can make coffee. All humans need a break, no matter what the 'hourly rate' is.

We are not Gods.

Get real.

Alberto
Monday, November 11, 2002

Hey if you guys want to cripple yourselves, it's your own business.

Sure, making coffee doesn't take much time. Neither does taking out the trash. Or cleaning out the coffee mugs and washing the plates from lunch. Neither does debugging the CFO's computer virus troubles. A few minutes here, a few minutes there. Hey, if a few minutes here and there adds up to hours, then the developer will just have to work longer into the night and more hours on the weekends, righty-ho? I guess the peon developers at your firms have lots of free time and your secretaries and janitors are completely swamped.

Testers are indeed people and good testers make at least as much as developers.

And of course developers will be doing some testing of their own code.

But there is a level of testing that is separable from development. And this separable element is foolish to assign to the developer.

Why? Let's say you have seven developers and seven testers working on a project that must be delivered in 12 months. (You may need more than 1 tester for each developer if you want to have fewer bugs than Microsoft where the ratio is 1:1.)

So you decide that you will instead use fourteen developers and fire the seven testers.

What have you gained? Well, now because of the larger number of developers, more interdeveloper communication is needed (pairwise communication paths grow as n(n-1)/2: 7 developers have 21, while 14 have 91), and so now your delivery date is either 18 months, or you can try to deliver in 12 months by adding another ten developers.

Don't you know this is how it works? Or do you have some miracle linear systems methodology that enables you to arbitrarily add developers and get a constant productivity jump?

Nah, you guys are just clueless.

Sarain H.
Monday, November 11, 2002

What I really had in mind was something better than "what Joel knew and when he knew it".

But this is worse.  Sorry, let's drop it.

Nat Ersoz
Monday, November 11, 2002

At most companies I've worked for, 'testers' mainly do system testing (scalability, performance, etc., and benign user testing) and verify that all of the product requirements have been met. Usually this doesn't start until the second half of the project, since any feedback they give before things start settling down is quickly obsolete.

The developers do the rest of the testing. Regression testing is almost always viewed as an unaffordable luxury. Sometime after reaching full functionality all development (and bug-fixing) is halted for one to two weeks, with all of the developers required to spend that time testing code developed by somebody else. It gets lots of management attention.

Usually the big issue is who the testers report to, since this determines how much independence they have. If they (or their boss) report to a project manager, they're typically a lapdog. Otherwise they typically have a strong voice in determining when the product is good enough.

On the other hand at one company I worked for the testers just did load testing, and there wasn't even a bug reporting system.

Eric Moore
Monday, November 11, 2002

Sarain H,
"This is a poor strategy on all but the smallest projects."

Rubbish. I don't want to go into it; suffice to say I haven't worked on a 'small' project (<$0.5M) for over 10 years now.

and

"Sure, making coffee doesn't take much time. Neither does taking out the trash. Or cleaning out the coffee mugs and washing the plates from lunch. Neither does debugging the CFO's computer virus troubles. A few minutes here, a few minutes there. Hey, if a few minutes here and there adds up to hours, then the developer will just have to work longer into the night and more hours on the weekends, righty-ho? I guess the peon developers at your firms have lots of free time and your secretaries and janitors are completely swamped."

Are you on drugs?



Alberto
Tuesday, November 12, 2002

Sorry, you're probably not on drugs, but it really is OK for developers to make a cup of coffee, really, it is, I assure you.

Of course, if they are doing all the exaggerated tasks you mention then that's silly.

Why not add to the list: fixing the plumbing, painting the walls, vacuuming the carpet, going to the shop for lunches, etc.

We get the point.

It is most reasonable to expect professional developers to test, to expect that testing to be thorough, and to allow developers a range of other tasks that aren't strictly 'coding'. To be truthful, I think developers spend only about 50% of their time 'coding' and the other 50% on other development-related tasks: talking at the water cooler, drinking coffee, going for a walk around the block to think, staring out the window, thinking about the deployment script, writing unit tests, investigating possible new solutions, going to meetings, filling out the timesheet, staring into space, etc.

If you can afford testers then you can reasonably expect the product to be better. How much better?

Depends on the quality of your developers, not your testers.

Alberto
Tuesday, November 12, 2002

I think Joel said it best:

http://www.joelonsoftware.com/articles/fog0000000067.html

There is no earthly reason not to have testers.

Mr Jack
Tuesday, November 12, 2002

Sarain was making two good points: that testing is an arduous process that detracts from development, and that development can easily become sidetracked if developers start doing secondary tasks.

Must be a manager
Tuesday, November 12, 2002

I have been a consultant for 10 years (ouch... that long?) and I do not use outside testers. "There is no earthly reason not to have testers." For a while I wrote MIDI device drivers. What exactly would my testers test? If it worked? If it didn't work, I wouldn't have been paid. If you are writing desktop apps, where there are an arbitrarily large number of functions that can break, inputs that can break, and system configurations that need to be accounted for, then testers are a godsend. If you are doing something else, having testers might not be such a big help, and most often would not pass the "Joel Load-Testing Cost Analysis Square" test.

no testes
Tuesday, November 12, 2002

Your testers would test whatever you tested, but their tests would not be obscured by your tunnel vision. On a simple project, would you really develop tunnel vision bad enough to hurt? Maybe not. Could it get bad enough that you would need a fresh pair of eyes to compensate? Maybe. Would it be economically justifiable? Maybe not.
However, a tester would certainly provide you with a fresh and unbiased perspective. Whether that would be justified, you would have to evaluate in each particular situation.

Erik
Tuesday, November 12, 2002

"if it didn't work, i wouldn't have been paid."

And they took your word for it? Knowing your stake in it (getting paid)?

Erik
Tuesday, November 12, 2002

"And they took your word for it? Knowing your stake in it "

well, yes.  that is how being a consultant works.

no testes
Tuesday, November 12, 2002

But this wasn't about consulting. You said you developed something for money, and then they paid you because you said that it worked. Obviously you would say that, because you developed it and you wanted to get paid for it. Although in a perfect world one could take someone's word for it - and I am certainly not doubting yours - I doubt anyone really does whenever it is clear that there might be some bias.

Erik
Tuesday, November 12, 2002

Here's a recent experience (from today, actually) of how a separate testing organization can help:

We've had some IIS CGI code (C-based) in production for some years now, but recently used it for a new web application.  Our testing group performed a load test on this new code and noticed a performance problem on the web server.  The original CGI coders claimed it wasn't their code's fault: "It's been in production for years and no one's complained".  As it turns out, the CGI code was calling malloc() and not checking whether it succeeded.  With the limited unit tests they ran, this oversight wasn't discovered, since the code worked as intended.  It wasn't until the testing organization ran it under load, and memory became scarce, that the problem surfaced.
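
To make the failure mode concrete, here is a minimal sketch; the function name and buffer handling are invented for illustration, not taken from our actual CGI code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-in for a CGI routine that builds a response buffer. */
    static char *build_response(const char *body)
    {
        char *buf = malloc(strlen(body) + 1);
        if (buf == NULL) {
            /* Under light unit-test loads malloc() virtually never fails, so
               an unchecked call appears to "work as intended".  Only when
               memory becomes scarce - e.g. under a load test - does the
               NULL return surface; without this check, the strcpy() below
               would write through a null pointer and crash the process. */
            fprintf(stderr, "out of memory building response\n");
            return NULL;
        }
        strcpy(buf, body);
        return buf;
    }

    int main(void)
    {
        char *r = build_response("Hello, world");
        if (r != NULL) {
            puts(r);
            free(r);
        }
        return 0;
    }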

By the way, our management is questioning whether we need a separate test organization, since the perception is that they add no value to the project.  Go figure...

Frustrated Tester
Tuesday, November 12, 2002

You may be able to save money on testing.

You can't save money on testers. If somebody other than a tester is doing the testing, then he's going to get paid.

Stephen Jones
Wednesday, November 13, 2002

"There is no earthly reason not to have testers." -- Mr Jack

"testing is an arduous process that detracts from development"
"it is easy for development to become sidetracked if developers start doing secondary tasks" -- Must be a manager

"If somebody other than a tester is doing the testing. then he's going to get paid." -- Stephen Jones

At last a ray of logic in a world gone mad! Thanks guys.

I think there might be a little confusion about what testing consists of. Although a small amount of testing can be done at the end of the project by nontechnical employees emulating users, a tester is fully competent at development and writes code. The code generates files and simulates messages representing program input, and captures the output to make sure it's OK. They set up these tests to run automatically. They have read books and taken classes about things like 'critical path coverage', and they examine the code minutely to make sure their tests hit boundary conditions. They work at testing during and throughout development, and they spend as much time at it as the developers spend developing. Competent, experienced testers are compensated the same as developers because they are just as capable as developers, with the additional background in testing.
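
As a rough illustration of the kind of automated boundary-condition test meant here (clamp_percent is a made-up stand-in for whatever the product actually computes, and the cases are chosen to sit at the edges of its valid range):

    #include <stdio.h>
    #include <limits.h>

    /* Hypothetical function under test. */
    static int clamp_percent(int v)
    {
        if (v < 0)   return 0;
        if (v > 100) return 100;
        return v;
    }

    int main(void)
    {
        /* Each case pairs a boundary input with its expected output. */
        struct { int in, want; } cases[] = {
            { -1, 0 }, { 0, 0 }, { 1, 1 },          /* lower boundary */
            { 99, 99 }, { 100, 100 }, { 101, 100 }, /* upper boundary */
            { INT_MIN, 0 }, { INT_MAX, 100 },       /* extremes */
        };
        int n = (int)(sizeof cases / sizeof cases[0]);
        int failures = 0;

        for (int i = 0; i < n; i++) {
            int got = clamp_percent(cases[i].in);
            if (got != cases[i].want) {
                printf("FAIL: clamp_percent(%d) = %d, want %d\n",
                       cases[i].in, got, cases[i].want);
                failures++;
            }
        }
        if (failures)
            printf("%d failure(s)\n", failures);
        else
            printf("all boundary tests passed\n");
        return failures != 0;
    }

Run automatically on every build, a suite of such tests catches regressions without consuming any developer time.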

If you have anything more than one or two people working on a project, a tester is a rational investment. You will get a better product built faster with one developer and one tester than with two developers and no testers.

Sarain H.
Wednesday, November 13, 2002

Sarain H. -

Good thoughts, Sarain. To extend a bit: persons doing quality assurance and quality control should probably be analysts foremost, while also having at least many of the skills you cite. I'm not disagreeing with the level of coding skill you describe, but I would say that in practice the distribution of effort is not typically on automated test suite coding.

It's important to understand that I do not mean 'analyst' in the over-used generic sense applied by HR folks who don't know what the hell else to call some position. The term 'engineer' has suffered the same abuse -- the classic I recall is referring to somebody on the office janitorial team as a 'sanitation engineer'.

No, when I say analyst, I'm talking about the kind of person who's been trained specifically to design and set up experiments, for example, though it's much more than just that. A background in Operations Research / Systems Analysis (or something similar) seems especially well suited to this because the discipline includes many skills and 'ways of thinking' that are key to ultimately demonstrating that a system:

    - performs its assigned functions in all respects, and
    - exhibits its required attributes in all respects.
    
In practice, this involves a significant portion of effort in problem definition, requirements analysis, and decomposition before one even gets to actually designing/creating a test suite.

The importance of such analysis preceding test design/creation is also suggested by testing texts (one reference I particularly like is "Testing Object-Oriented Systems: Models, Patterns, and Tools", Robert V. Binder, c. 2000, Addison-Wesley). There are several instances where Binder describes ways of designing test suites given rule sets, but points out that when the rule sets are not defined (and this has very often been the case in my personal experience), the tester must develop them, or work to see that they are developed. (This is **not** a quote -- I'm paraphrasing from several areas in the book.)

So, while I personally would love to spend the proportion of time on automated test suite coding your description implies, Sarain, it has been my experience that I spend a larger portion of my effort working to get the behavior rules and static system attributes defined than I would truly prefer.

After all, if the system behaviors and attributes are truly undefined, it's not possible to devise conclusive and meaningful tests. Under such conditions, about the only conclusive test one can make is whether the system crashes or not (once the developers tell you how it **does** behave, irrespective of whether that's the 'right' behavior). Admittedly, my statements here could be skewed somewhat because, just by the 'fortunes of war' perhaps, I have never been lucky enough to work on a project that had adequate analytical skill and bandwidth available, so the information that comes to me about the system is normally inadequate to begin writing tests from. Others with differing experiences may well disagree with me.

For one working in quality assurance, however, I'd say the analytical background becomes even more important than for the QC roles. A team doing software development (or doing anything else, for that matter) is, after all, just a type of system. You have to be able to watch the system in operation and see where it's going wrong, so you can work through the (unfortunately) often political and contentious effort of getting the development system itself to change so as to remove the systemic causes of inefficiencies and/or anomalies in the final product. In such situations, analytical ability is key, though actual development ability and experience are still important -- for different reasons, however, than in QC, where **doing** development is important to writing automated tests and understanding code:

1) development experience/ability provides domain knowledge that permits more effective evaluation of the development system itself (very important to detecting systemic problems), and

2) it provides some degree of credibility with the members comprising the system whose performance you're seeking to change (very important in successfully implementing the interventions required to fix the sources of systemic problems).

In practice, I would like to be able to dig more into the code of the systems we develop, as you describe, Sarain, and to do more automated test writing; it just hasn't seemed to work out that way for me. Whether I'm working to create or flesh out requirements, or evaluating our team's processes, I seem to spend more of my bandwidth being an analyst than a developer.

cheers,

anonQAguy
Wednesday, November 13, 2002

If you look at the development process as a sequence of operations, two of which are development followed by testing/QA, then the logical thing is to figure out which step is the bottleneck and take actions to alleviate it.  If work accumulates behind the developers, then you need to do whatever you can to improve their situation, including taking QA tasks away.  If the testing team is the bottleneck, then you should help them by spending some of your development resources to do pre-screening.  You can afford to have some poor work passing through the non-bottlenecks - that time is recoverable.  Time spent on substandard work that passes through the bottleneck is lost forever.
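
As a toy illustration of that argument (the rates are invented: suppose developers finish 5 work items per week while testers can verify only 3), shipped output tracks the slower stage, and untested work piles up behind it:

    #include <stdio.h>

    int main(void)
    {
        const int dev_rate = 5;  /* items developed per week (assumed) */
        const int qa_rate  = 3;  /* items tested per week (assumed) */
        int backlog = 0, shipped = 0;

        for (int week = 1; week <= 10; week++) {
            backlog += dev_rate;               /* work arrives at QA */
            int tested = backlog < qa_rate ? backlog : qa_rate;
            backlog -= tested;                 /* QA clears what it can */
            shipped += tested;
            printf("week %2d: shipped %2d, untested backlog %2d\n",
                   week, shipped, backlog);
        }
        /* Throughput converges on qa_rate: speeding up the non-bottleneck
           (development) only grows the backlog; only relieving QA ships
           more per week. */
        return 0;
    }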

Anonymous Jonah
Friday, November 15, 2002
