Fog Creek Software
Discussion Board




Any experience with post-test analysis?

My team has just wrapped up a complex web-app (Servlets/JSP/XML/HTML) for processing new insurance policies online with a specific carrier. I've got the task of rounding up the root causes for our 400+ closed bugs. The main goal is to devise a mitigation strategy based on the source of most of the "bugs". Mgt. is behind this exercise because a cursory glance at most "bugs" reveals a change in spec or in client desires/goals. Basically, the client is wondering: "Did we really collectively change our minds 400 times? Or did these guys botch a lot of code?"

My first swing is to get the team to answer the question "Why does this bug exist?" (change in spec, immature code, new requirement) and hopefully some buckets will emerge. Also, we've already identified where the bugs occurred to show some hot spots.
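
If the team can fill in a one-line answer per bug, the bucketing itself can start as a simple tally. Here's a rough sketch of what I have in mind; the file name and the "root_cause" column are just assumptions about what we can export from our bug tracker:

    import csv
    from collections import Counter

    # Hypothetical export from the bug tracker: one row per closed bug, with a
    # free-text "root_cause" column filled in by the team ("spec change",
    # "immature code", "new requirement", ...).
    counts = Counter()
    with open("closed_bugs.csv", newline="") as f:
        for row in csv.DictReader(f):
            counts[row["root_cause"].strip().lower()] += 1

    # Print the buckets, biggest first.
    for cause, n in counts.most_common():
        print(f"{n:4d}  {cause}")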

Any experience with this type of exercise, comments, questions or suggestions would be appreciated and promptly answered.

Post-Mortem Pete
Friday, March 14, 2003


The date the bug was reported is important, and so is the check-in time of the code that caused the bug: when that code was first checked in, and when it was fixed (after the bug was identified).

I know, usually you check in at the end of the day or the end of the week, so a lot of bugs get resolved in one batch; that's a revision-control usage issue. But assuming you have concurrent-access prevention and you check things back in pretty frequently, you should have a good idea of when new features requested by the client that day were the cause of a bug.

When you correlate the two (bug count and bug ID against the check-in date of the program code), you can quickly tell whether client requests were introducing bugs on an ongoing basis or "all at once," as you put it.
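
The correlation itself can be a very small script. A rough sketch, assuming the bug tracker and the revision-control log can both be exported to CSV; the file names and column names below are made up for illustration:

    import csv
    from collections import Counter
    from datetime import date

    # Hypothetical exports: checkins.csv has "revision" and "date" (YYYY-MM-DD);
    # bugs.csv has "bug_id" and "caused_by_revision" -- the revision the team
    # blames for the bug once its cause is identified.
    checkin_date = {}
    with open("checkins.csv", newline="") as f:
        for row in csv.DictReader(f):
            checkin_date[row["revision"]] = date.fromisoformat(row["date"])

    bugs_per_day = Counter()
    with open("bugs.csv", newline="") as f:
        for row in csv.DictReader(f):
            rev = row["caused_by_revision"]
            if rev in checkin_date:
                bugs_per_day[checkin_date[rev]] += 1

    # A spike on one or two dates suggests the damage came in "all at once";
    # a steady trickle suggests ongoing, client-driven churn.
    for day in sorted(bugs_per_day):
        print(day, "#" * bugs_per_day[day])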

-- David

Li-fan Chen
Friday, March 14, 2003

I'm not at all surprised if most "bugs" are really feature requests because the client changed their mind; this is natural. Software is exploration; you don't know where you're going until you get there, and you find a million reasons to adjust the path a little bit on the way. See The Iceberg Secret: The Client Doesn't Know What They Want. Stop Expecting The Client to Know What They Want.

Anyway, post mortems are fine if you use them to learn how to make better estimates in the future. Next time you'll know that in a project of size X, the number of small course corrections you need to make is Y ...

Joel Spolsky
Friday, March 14, 2003

The hot spots you have identified are great starting points for your investigation.  One of the rules from "The Art of Software Testing" states that "The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section."  As you suspect, there is a good probability of correlation between low-level errors and higher-level requirements "errors".

To quote from "The Art of Software Testing" again (which is quoting from I.M. Copi's "Introduction to Logic"), "A problem may be characterized as a fact or group of facts for which we have no acceptable explanation, which seem unusual, or which fail to fit in with our expectations or preconceptions.  It should be obvious that -some- prior beliefs are required if anything is to appear problematic.  If there are no expectations, there can be no surprises."  So bugs can be "created" in some cases just by changing expectations, as you noted.

It might be helpful to take a top-down approach in addition to the bottom-up approach of "Why does this bug exist?" and see if you can meet in the middle.  In other words, start with a list of requirements, and try to identify higher-level course corrections, and then think through the low-level impact of these.  If you don't have this kind of higher-level track record, you can most likely recreate it using the "Why does this bug exist?" approach, but if you have a higher-level audit trail, it should help you focus your efforts more.
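
If you can get the change requests and the closed bugs into two flat files, the "meet in the middle" step can start as a simple join. This is only a sketch; the file names and columns are assumptions about what your tracker can export:

    import csv
    from collections import defaultdict

    # Hypothetical inputs: change_requests.csv has "req_id" and "description";
    # closed_bugs.csv has "bug_id" plus an optional "req_id" the team fills in
    # while answering "Why does this bug exist?".
    bugs_by_req = defaultdict(list)
    with open("closed_bugs.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row.get("req_id"):
                bugs_by_req[row["req_id"]].append(row["bug_id"])

    # For each course correction, how many low-level bugs trace back to it?
    with open("change_requests.csv", newline="") as f:
        for row in csv.DictReader(f):
            linked = bugs_by_req.get(row["req_id"], [])
            print(f'{row["req_id"]}: {len(linked):3d} bugs  {row["description"]}')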

In a sense, you are trying to recreate the results of regression testing the user acceptance tests, if such a thing were possible.  :-)

ODN
Friday, March 14, 2003

Thanks for your replies, so far so good.

ODN: It sounds like requirements traceability and change-control tracking are your way to start at a higher level. Correct? Also, my brain almost deadlocked on this one:

"In a sense, you are trying to recreate the results of regression testing the user acceptance tests, if such a thing were possible."

..but I see your point.

Joel: It sounds like your mitigation strategy is to accept that revision is natural and will happen, but to accurately estimate to what degree based on the size of the project. Makes perfect sense to me. But let's say mgt. is hoping for some process tweaks on our side that would help us reduce the late-in-the-game 'discoveries'. Are there any off the top of your head? Should I just read a book on XProgramming and get it over with? ;)


Thanks,
Pete

Post-Mortem Pete
Friday, March 14, 2003

No experience as far as the original question goes, but in response to how to avoid discoveries late in the development process...

I usually write a specification, and then, when I think I'm done, I write a schedule consisting of a list of things I need to do to meet the spec. As I'm writing the schedule, I always realize that there are things in the spec that won't work.

Extrapolating from that experience, anything that causes you to think differently about the problem will help: specs, schedules, mock-up screens, user stories, drawings, presentations...

Big B
Friday, March 14, 2003

Wouldn't a better use of the iceberg metaphor be "The client does know what he wants, but nine-tenths of it is below the surface"?

Stephen Jones
Sunday, March 16, 2003
