Fog Creek Software
Discussion Board

QA Metrics

Last week, the QA lead in our division asked us to consider ways of determining QA metrics.  I've given it some thought, and here's my $0.02:

  Right now, our bug tracking software can measure the number of defects per thousand lines of code (KLOC) and how many hours it took to fix each defect.  With a little work, we could also track when each defect was injected into the software (Requirements, Spec, Design, Code, or Maintenance) and who found it (Development, Alpha Testing, Beta Customers, Post-Release Testing, or Post-Release Customers).

  Since we rank defects in priority from 1 to 5, we could use these metrics to set expectations for the next major project.  (As in "We expect to find 10 class 1 defects in the next release, and we've only found 3 so far ...")  This would provide visibility into the state of the software.  It could also show where development could improve for the most benefit.  (E.g., "We spent 200 more man-hours fixing defects injected in the specification than anywhere else in the process; we need to figure out how to write better specs.")  Finally, improvements in the metrics (like a greater % of defects found by QA) show progress, and they can be tracked objectively.
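
  To make this concrete, here's a rough sketch (in Python) of the kind of report I have in mind.  The record layout and the numbers are hypothetical; our tracker would first need the injection-phase and found-by fields added.

    # Sketch: summarize defect records by injection phase and by finder.
    # Each record: (priority 1-5, phase injected, who found it, hours to fix)
    defects = [
        (1, "Spec",         "QA",       12.0),
        (2, "Code",         "QA",        3.5),
        (1, "Requirements", "Customer", 40.0),
        (3, "Code",         "Customer",  2.0),
        # ... one row per defect pulled from the tracker
    ]

    hours_by_phase = {}
    found_by_qa = 0
    for priority, phase, finder, hours in defects:
        hours_by_phase[phase] = hours_by_phase.get(phase, 0.0) + hours
        if finder == "QA":
            found_by_qa += 1

    for phase, hours in sorted(hours_by_phase.items()):
        print(f"{phase}: {hours:.1f} man-hours of rework")
    print(f"QA found {100 * found_by_qa / len(defects):.0f}% of defects")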

  Of course, defects don't scale linearly with code size.  (In other words, version 2.0 of our software was 2x the size, 3x the staff, and 4x the defects, so defects per KLOC doubled.)  We would have to accumulate a great deal of "history" before our numbers became accurate; but, in the meantime, "A number is better than no number."

  In addition, in a Visual C++ project, KLOCs aren't always the best way to measure effort.  We could use man-months or function points instead, but man-months probably aren't any better, and FPs take a bit of investment to set up.

  Finally, it's possible that our QA lead could use the Personal Software Process (PSP - http://www.csis.gvsu.edu/~heusserm/GuestLectures/GuestLectures.htm) for his metrics, but that would also incur a bit of overhead.

  Your thoughts?  I appreciate your opinions ...

Matthew Heusser
Monday, February 11, 2002

One interesting way to measure the effectiveness of the QA process is to use defect injection. Have the development team introduce, say, ten known bugs into the next build of the software. Then pass that build to the QA team and wait for them to report on it. If they find 82 bugs, including 5 of the known bugs, then you can estimate that there are 77 unfound bugs still lurking and waiting to bite you. (77 of the 82 bugs found were not deliberately injected, and the QA team found 50% of the known bugs, so the estimated total of real bugs is 77 / 0.5 = 154, of which 154 - 77 = 77 remain unfound.)
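
In code, the arithmetic is just this (a minimal sketch in Python; the numbers are the ones from my example):

    # Defect seeding estimate: if QA caught a fraction f of the seeded
    # bugs, assume they also caught roughly a fraction f of the real ones.
    def estimate_unfound(seeded, seeded_found, total_found):
        real_found = total_found - seeded_found       # 82 - 5 = 77
        catch_rate = seeded_found / seeded            # 5 / 10 = 0.5
        est_real_total = real_found / catch_rate      # 77 / 0.5 = 154
        return est_real_total - real_found            # 154 - 77 = 77

    print(estimate_unfound(seeded=10, seeded_found=5, total_found=82))
    # -> 77.0 real bugs estimated still unfound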

The main problem with this technique is how confident you are that the distribution of injected bugs matches the distribution of unknown bugs.

Mike Gunderloy
Monday, February 11, 2002

"A number is better than no number."

That seems to me to be a harmful, fetishistic belief.  Frequently, a number without any context is considerably worse than no number.  Because the number tends to lend the appearance of some objectivity, people (especially non-techie types) may be inclined to believe the number, even in the face of countervailing, non-numeric evidence.  It would likely be useful to begin tracking these numbers, yes.  But you cannot use them as any sort of guide -- not even a rule-of-thumb one -- until you have enough data to see a genuine distribution.

Acowymous Nonerd
Monday, February 11, 2002


Perhaps I should rephrase:

  "A series of numbers with an accuracy of +-30% tends to normalize around rough accuracy, and makes a better null hypothesis than the guru method."

  Better?
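
  To put it another way, here's a toy simulation (the true value and the error model are made up) of why a series of rough numbers beats a single guess:

    # Toy illustration: 20 estimates, each off by as much as +/-30%,
    # still average out close to the true value.
    import random
    random.seed(1)
    true_value = 100.0
    estimates = [true_value * random.uniform(0.7, 1.3) for _ in range(20)]
    print(sum(estimates) / len(estimates))   # prints something near 100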

Matthew Heusser
Monday, February 11, 2002

From an SQA point of view, the important thing to capture here is phase containment: in which phase of the project was the defect injected, and in which phase was it detected?

SQA is about preventing errors in the system.  SQA metrics are about finding out how well your process is actually working.  By measuring your defects against your errors (a defect is found in the phase that injected it; an error escapes that phase), you can start to determine how effective your process is at catching things, and you can evolve your process to compensate as necessary.  Why spend huge amounts of time and effort in the development phase if you know your problems are mainly injected in the requirements cycle?
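
As a rough sketch of the measurement (the phase names and counts below are made up), phase containment effectiveness is just in-phase catches divided by total injections per phase:

    # Sketch: phase containment effectiveness from hypothetical counts.
    # injected[phase]  = defects injected in that phase
    # contained[phase] = of those, how many were caught in the same phase
    injected  = {"Requirements": 40, "Design": 25, "Code": 60}
    contained = {"Requirements": 10, "Design": 15, "Code": 45}

    for phase in injected:
        pce = contained[phase] / injected[phase]
        escaped = injected[phase] - contained[phase]
        print(f"{phase}: {pce:.0%} contained, {escaped} escaped downstream")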

SQC (Software Quality Control) is about ensuring the implemented quality of the system.  This is normally the side where you deal with fault injection and other such methodologies to figure out the state of your product.

While both deal with quality, SQA and SQC have very different approaches and goals.  The first question you need to ask is: what are you trying to figure out?

!
Monday, February 11, 2002

PSP is your least expensive route for defect tracking, detection, and quality improvement.  Unlike Function Points, you can implement as little or as much of the overall PSP methodology as you like.  There is also a lot of flexibility in how you implement PSP.  If you already have a defect tracking software package in place, implementing PSP electronically could be quite painless.

The most important factor in implementing any solution is not cost or infrastructure but buy-in.  With PSP or XP, you must have buy-in from all levels.  At least with FP, you can get away with only management buy-in and make it work.

Bottom line: if money (or the lack thereof) is the major factor in choosing a methodology, go with PSP and implement it a piece at a time.  Use the PSP as your foundation, and later, when you have the money, you can use FP or XP with the PSP.

Patrick Dunlap
Tuesday, February 12, 2002

"Use the PSP as your foundation and later, when you have the money, you can use FP or XP with the PSP."

Somebody really likes TLAs.

-
Tuesday, February 12, 2002

ROTFLMFAO!

The Man
Tuesday, February 12, 2002
