Fog Creek Software
Discussion Board

How to do testing?

I was just reading this article here:
"Top Five (Wrong) Reasons You Don't Have Testers"

http://www.joelonsoftware.com/articles/fog0000000067.html

In it, Joel takes pains to describe why it is necessary to have a testing department.

I was wondering whether Fog Creek in fact has a testing department and, if so, how it works. In general, I'm interested in how testing is done at other organizations.

At my company, the software my team develops is used internally and does not have a GUI (we're a Unix shop). Basically, it runs an engineering simulation on data provided in a text input file and writes the results to other text files.

To know whether the program is working as expected, you have to interpret the simulation results. Of course, we have a set of scripts that help to automate this process. Unfortunately, we do not have dedicated testers.

At some point I would like to add a GUI to the simulator so that the results can be visualized. However, that would be a huge step, and I'm not sure how one would even start to test it. If the results were displayed on screen in some graphical format, how would you know they're correct?

Sorry if this seems a little oversimplified; I'm not really allowed to talk about what I do. Anyway, I have no idea how one would go about testing a Windows/Mac/Unix program with a GUI.

I would be indebted to anybody who could post a link to an article describing the state of the art of the testing process, or simply describe how your company does it and what tools you use. There must be some "best practices" of software testing. Perhaps somebody has a book recommendation?

Thanks for your time!

P.S. I've also read this thread: http://discuss.fogcreek.com/joelonsoftware/default.asp?cmd=show&ixPost=75543

Stimulate that Simulator
Thursday, July 29, 2004

I sent Joel an email asking what they have, and he said one person is dedicated to testing.

Patrick
Thursday, July 29, 2004

Two of the books I would recommend are:

http://www.amazon.com/exec/obidos/tg/detail/-/0471358460/

and

http://www.amazon.com/exec/obidos/tg/detail/-/047135418X/

Peter
Thursday, July 29, 2004

One thing that's useful in cases like this is to do some simple baseline tests.  Manufacture a set of inputs, run your engine on them, and capture the outputs.  Inspect the outputs by hand to see that they're right (or close enough).  Write a test that runs with the same inputs and compares the output to the originally-captured output.  You're not really testing correctness at this point, but this will flag any changes to your code that would change the output.  Depending on the complexity of the transformation, you'll have to deal with false failures (where you'll have to go through the sanity check and re-dump some new "expected" outputs), but it's something.
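
Something like this, as a minimal sketch in Python. Everything named here is hypothetical: "./simulate" stands in for your engine, tests/cases holds the manufactured inputs, and tests/golden holds the hand-inspected expected outputs.

import subprocess
import sys
from pathlib import Path

CASES_DIR = Path("tests/cases")    # one .in file per manufactured input
GOLDEN_DIR = Path("tests/golden")  # hand-inspected "expected" outputs

def run_case(case: Path) -> str:
    # Run the (hypothetical) simulator on one input file; its
    # results are assumed to come out on stdout.
    result = subprocess.run(["./simulate", str(case)],
                            capture_output=True, text=True, check=True)
    return result.stdout

def main() -> int:
    failures = 0
    for case in sorted(CASES_DIR.glob("*.in")):
        golden = GOLDEN_DIR / (case.stem + ".out")
        if run_case(case) != golden.read_text():
            print(f"FAIL: {case.name} no longer matches its golden output")
            failures += 1
        else:
            print(f"ok:   {case.name}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())

When a failure turns out to be an intentional change, you re-inspect the new output by hand and copy it over the golden file; when it's a real bug, you've caught a regression.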

As bugs crop up, you can either add the conditions necessary to reproduce them to your inputs or add new sets of inputs that reproduce the bugs.

I did this on a past system.  It was a pain, but the transformation process was very complex, and this was better than having no automated testing at all.

(You can apply a similar approach for the visualization side of things, when you get there.)
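
A sketch of that visualization side, same idea: render to an image file and compare against a hand-approved golden image. render_chart() and its module are stand-ins for whatever you end up using; byte-for-byte comparison is the crudest possible check, and you may need a pixel-level tolerance if fonts or anti-aliasing differ between machines.

import hashlib
from pathlib import Path

from myviz import render_chart  # hypothetical rendering function

def digest(path: Path) -> str:
    # Hash the file's bytes so two images can be compared cheaply.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def chart_matches_golden() -> bool:
    actual = Path("tests/out/chart.png")
    actual.parent.mkdir(parents=True, exist_ok=True)
    render_chart(input_file="tests/cases/basic.in", output_file=str(actual))
    return digest(actual) == digest(Path("tests/golden/chart.png"))

if __name__ == "__main__":
    print("ok" if chart_matches_golden()
          else "FAIL: chart differs from golden image")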

schmoe
Thursday, July 29, 2004

If the math is reversible, writing a program that runs it backwards can help you catch round-off errors and the like.
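
A toy sketch of that in Python, with forward() and inverse() standing in for the real math: push a batch of random inputs through the transform and back, and check you land within a tolerance of where you started.

import math
import random

def forward(x: float) -> float:
    return math.exp(x)   # placeholder for the real transform

def inverse(y: float) -> float:
    return math.log(y)   # placeholder for its inverse

TOLERANCE = 1e-9

random.seed(42)  # reproducible failures
worst = 0.0
for _ in range(10_000):
    x = random.uniform(-10.0, 10.0)
    error = abs(inverse(forward(x)) - x)
    worst = max(worst, error)
    assert error < TOLERANCE, f"round-trip error {error} at x={x}"
print(f"worst round-trip error: {worst:.3e}")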

Eric Debois
Thursday, July 29, 2004

> To know if the program is working as expected, you have to interpret the simulation results

A _person_ has to do this?


Friday, July 30, 2004

Sorry, I should have added something. It sounds remarkably like where I work (not regarding tests, but generally), where the engineers do lots of things manually because... well, I don't actually know why. I have come up with numerous ideas as to why, but none of them are complimentary.


Friday, July 30, 2004
