Fog Creek Software
Discussion Board

Bug Triage

Hi folks,

I'm trying to come up with a metric that we can use to accurately determine which bugs we should fix.

In the past, we've used things like 'status' - a single field that captures severity alone. The trouble with this was that what's high priority 3 months before we ship isn't what the dev team considers high priority 2 weeks out - soon every bug ends up getting raised as a critical fix.

I figure if we can get a simple weighted equation that covers all the things we need to know, we can discuss the bugs raised each day and give them a score. Shouldn't take too long, and would allow the QA guys to start "setting the quality bar" based on a realistic metric, rather than trying to fix 100% of the bugs and being disappointed every time we release...

So far, I've got:

  Severity - How bad is the bug?

  Embarrassment - Would a defect of this type be perceived as symbolic of a more general quality problem?

  Extent - What percentage of our customers are likely to encounter this bug?

  Potential for destabilization - Is fixing this bug going to break a whole bunch of other things?

  Testing Impact - Is there enough time to adequately perform the regression tests to verify this bug fix?

  Origin - Has this bug been raised by a customer?
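The weighted equation might look something like this - a minimal sketch; the weights and the 0.0-1.0 rating scale are illustrative assumptions, not a recommendation:

```python
# Illustrative weights for each triage factor (assumed values; tune to taste).
WEIGHTS = {
    "severity": 30,
    "embarrassment": 15,
    "extent": 20,
    "destabilization": 15,  # higher rating = riskier fix
    "testing_impact": 10,
    "origin": 10,           # e.g. 1.0 if raised by a customer, 0.0 otherwise
}

def criticality(ratings):
    """Combine per-factor ratings (each 0.0-1.0) into a score out of 100."""
    return sum(WEIGHTS[factor] * ratings.get(factor, 0.0)
               for factor in WEIGHTS)

bug = {"severity": 0.8, "extent": 0.5, "origin": 1.0}
print(criticality(bug))  # 0.8*30 + 0.5*20 + 1.0*10 = 44.0
```

The weights sum to 100, so a bug rated 1.0 on every factor scores a full 100.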

Have you learned folk got any suggestions as to other factors that affect which bugs to fix? Is this a crazy idea?

Gordon Taylor
Monday, November 25, 2002

You also need to consider how long it will take to fix each bug, so you can allocate your time in the most productive manner.

Though you may consider several factors, you need to prioritize the order in which you'll fix the bugs. Then you can work down the list, dealing with the most important and accessible problems first.

Monday, November 25, 2002

I guess my question would be if the bug was considered high priority 3 months ago, then why wasn't it fixed 3 months ago?  I can see how low priority bug fixes can be delayed but not the high priority ones.

At any rate, other things that can be taken into consideration would be:

* is there a workaround available?  ones without workarounds should have a higher priority (how much higher would be up to you)

* how often would the customer encounter this problem?  are they going to see this problem every time they run the software or will they see it once or twice a year?

* can the bug be documented as a limitation instead?

hope that's helpful.

Ron E.
Monday, November 25, 2002


The high priority bug from 3 months ago would be fixed - it's just that if the same bug came up under different circumstances (closer to the ship date) it would probably be swamped by a bunch of other 'High Priority' bugs, and our QA guys would then upgrade it to 'Critical'. The ratings system ends up devalued. I figure if we discuss all the bugs, we might be able to eliminate some of this...

Thanks for the workaround one - good point.

Gordon Taylor
Tuesday, November 26, 2002

I recommend two ratings, but I like to keep them simple and concrete:

severity and priority, expressed as follows.

severity (i.e. significance of the symptoms)
    inconvenience-minor error
    malfunction-no workaround (default)
    crash/data missing/corruption

obviously, these go from lesser to greater severity.

For priority (i.e. sequence of correction), I recommend:
    Low
    Medium (default)
    High
    Critical

No big deal here except that critical is intended to be preemptive -- you pretty much drop what you're doing and get those fixed right now.
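Modeled in code, the two independent axes with their defaults might look like this - a sketch where the label spellings come from the lists above and the enum names are mine:

```python
from enum import Enum

class Severity(Enum):
    """Significance of the symptoms, lesser to greater."""
    INCONVENIENCE = 1   # inconvenience / minor error
    MALFUNCTION = 2     # malfunction, no workaround (default)
    CRASH = 3           # crash / data missing / corruption

class Priority(Enum):
    """Sequence of correction."""
    LOW = 1
    MEDIUM = 2          # default
    HIGH = 3
    CRITICAL = 4        # preemptive: drop what you're doing and fix it now

# Defaults applied when a new defect is logged.
DEFAULT_SEVERITY = Severity.MALFUNCTION
DEFAULT_PRIORITY = Priority.MEDIUM
```

Keeping the two fields separate avoids the combined-system problem mentioned below: a crash can still be low priority, and an inconvenience can still be critical.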

Personally, the systems I've seen not work too well are those that are:
1) based on numbers/letters for their labels alone
2) have a lot of gradations to them (I think > 5 is probably too many)
3) combined systems that tried to bundle severity and priority together.

I've used the ones I listed above in a couple of places and they've worked pretty well. Most defect tracking systems I've used let me specify the labels for priority and severity, and where the decision has been mine, I've chosen not to use a defect tracking system that didn't support these two metrics and the ability to customize the labels used.

This system seems to strike a reasonable balance between flexibility and simplicity, at least in my experience.

HTH, Cheers,

Tuesday, November 26, 2002

Oh - forgot to mention...if the defect tracking system lets me designate default values for severity and priority, I set them to the ones indicated in my previous post.

Tuesday, November 26, 2002

So anonQAGuy, how do you decide which bugs don't get fixed?

This might seem too complicated, but I was thinking of a meeting each day during the stabilizing phase, where the Development Lead, the PM and the QA Lead triage all the new bugs. They do this by assigning a rating for each significant factor discussed above, weighting the ratings accordingly, and adding them up to come up with a number - say out of 100. We then have a 'criticality' factor, and we can decide which bugs get fixed first.

We can also decide where to set the quality bar - we could say that we can tolerate shipping with bugs that score less than 35...
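The quality-bar step could be sketched like this - the bug names and scores are invented, and the bar of 35 comes from the example above:

```python
# Hypothetical triaged bugs: (description, criticality score out of 100).
triaged = [
    ("crash on save", 82),
    ("typo in About box", 12),
    ("slow startup", 41),
    ("tooltip misaligned", 34),
]

QUALITY_BAR = 35  # tolerate shipping with bugs that score below this

# Fix the highest-scoring bugs first; defer everything under the bar.
to_fix = sorted((b for b in triaged if b[1] >= QUALITY_BAR),
                key=lambda b: b[1], reverse=True)
deferred = [b for b in triaged if b[1] < QUALITY_BAR]

print([name for name, _ in to_fix])    # ['crash on save', 'slow startup']
print([name for name, _ in deferred])  # ['typo in About box', 'tooltip misaligned']
```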

This is something that I've just dreamed up in my head - I really can't decide if it's a good idea or a completely retarded one...

Gordon Taylor
Tuesday, November 26, 2002

What exactly are all your developers doing for the other 2.5 months when they're not fixing these high priority bugs?  Hopefully not writing new code, right?

It sounds like you are trying to apply a technical fix (in the form of a complex bug rating scheme) to a social problem (getting everyone to agree on which bugs to fix).

Foolieh Jordan
Tuesday, November 26, 2002

This sounds to me suspiciously like too many people are arguing over what's really "important."  This may not be the case here, but if it is, you might want to consider setting up only one person (or a set of people) who can determine the priority of a bug.  Then they can set that one priority field.

It's not up to developers to decide a bug's priority.  They can provide input, which may change the decision maker's mind about the bug's priority, but the responsibility should still lie with the decision maker.

Brent P. Newhall
Tuesday, November 26, 2002

Foolieh said:

"It sounds like you are trying to apply a technical fix (in the form of a complex bug rating scheme) to a social problem (getting everyone to agree on which bugs to fix). "

You know - I think you're right. This problem is definitely social, and is more a symptom of overall team dysfunction, so it doesn't make a lot of sense to address it with a technical solution... I guess I've been looking for the silver technology bullet, when what I really need to do is focus on improving the communication between the two teams...

I think rather than trying to baffle the QA team with bugfixing complexity, I'm just going to talk to them more.
How low tech...


Gordon Taylor
Tuesday, November 26, 2002

What we started doing is prioritizing bugs not by some generic "severity" but by milestone.

A low pri bug might be "fix by release", where a high pri bug would be "fix by <whatever the next milestone is>". This of course only works if you have fairly short increments. These don't have to be full scale releases, of course; internal milestones work fine.

We started out with the usual "High, medium, low" ratings, but ended up spending many hours recategorizing the severity of bugs as time went on. The fix-by dates were a lot more stable.
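Milestone-based priorities also sort naturally, since each bug just carries a fix-by milestone - a sketch with invented milestone names and dates:

```python
from datetime import date

# Assumed internal milestones, earliest first; only "release" is a real ship.
MILESTONES = {
    "M1": date(2002, 12, 15),
    "M2": date(2003, 1, 31),
    "release": date(2003, 3, 1),
}

# Hypothetical bugs tagged with the milestone they must be fixed by.
bugs = [
    ("installer crash", "M1"),
    ("help typo", "release"),
    ("import fails on large files", "M2"),
]

# A bug's priority is simply how soon its fix-by milestone falls.
work_order = sorted(bugs, key=lambda b: MILESTONES[b[1]])
print([name for name, _ in work_order])
# ['installer crash', 'import fails on large files', 'help typo']
```

Because the labels are dates rather than abstract severities, they rarely need recategorizing as the ship date approaches.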

Chris Tavares
Tuesday, November 26, 2002
