Fog Creek Software
Discussion Board

Too much churn to fix bugs

Recently I cleaned out my inbox and am making an effort to sort my work e-mails better instead of just letting them collect into one big glob.
I noticed that, on average, it takes us 6-8 e-mails with our developers (myself included) to fix a bug.
This is due to what I call "Bug Denial Mode": each bug has to go through several steps with the developer.

1. It's by design, and this is why.
2. I can't reproduce it.
3. It's the other developer's problem.
4. It is a minor bug and we don't need to fix it.
5. We don't have time to fix it.
6. I can't fix it without major restructuring.
7. Fixed (estimated time 1-2 minutes)

I think this relates to programming being somewhat of an art form, like painting. After a painting is done, no one likes it when someone else comes along and complains about how the hand is drawn.

So my question is: how can I address this issue? I sent one e-mail some time ago pointing out (politely, I hope) that we seem to spend more time arguing over bugs than fixing them, and that if the developer just fixed the damn thing the first time, we would save ourselves some major headaches.

Jon Kenoyer
Wednesday, July 30, 2003

Although I can sympathize with you, you're operating on the assumption that you are always right and the programmer is always wrong. It's not always that cut and dried.

Maytag Repairman
Wednesday, July 30, 2003

I can relate to being in "Denial Mode" too. I don't think it's an unwillingness to fix bugs; rather, it's a delay that gives the programmer time to think about the most efficient way to fix the problem at hand.

Obvious off-by-one bugs, or "when I press 'B' I get 'A'" type bugs, usually get fixed pretty quickly.
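
(For instance, a classic off-by-one might look like this hypothetical Python snippet, purely for illustration: the loop stops one short, so the last item is never processed.)

    items = ["a", "b", "c"]
    for i in range(len(items) - 1):   # BUG: should be range(len(items))
        print(items[i])               # prints "a" and "b"; "c" is skipped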

In my experience, the most common bugs that end up in the deny-deny-deny pile are the result of changed business rules or business needs. That is to say, what is now a bug was not always a bug.

Business people say, "Hey, I get an error message here saying 'Error handling unexpected stuff', and we need to be able to handle this. Fix this bug no later than tomorrow, since I have already told my boss we would have no problems handling it. After all, it can't be hard; just change the database table to handle four new fields."

The above is a denial-pile bug, because oftentimes the business people rush a solution whose every aspect the programmer needs to verify. Otherwise, if the fix is rushed, then three days later, when another bug shows its face, it's the programmer's fault.

So, I think it's just a matter of minimizing unnecessary workload on the part of the programmer.

I think business people with in-house development crews get used to just throwing the ball over to the IT staff and calling it a bug, or minor development, or whatever.

This is a communication problem for the most part. Even when you do custom work (non-shrinkwrap), you need well-defined release cycles that your business people understand. That will result in:

a) better acceptance testing on the part of the client
b) less rushing of bug fixes and enhancement requests
c) more careful planning of features

The above will lead to happier programmers and fewer denial-mode answers.

just my 2 cents,

Patrik
Wednesday, July 30, 2003

Right now someone can go into denial mode because anyone can close a bug (i.e., ignore the e-mail or give some justification). What if you implemented a bug tracker where only the person who opened the bug could close it, and it remained assigned to a programmer until then?

Once people are held to account, they will change how they operate and start fixing what needs to be fixed.
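
To make that concrete, the rule might look something like this rough sketch (hypothetical Python, not any real tracker's API):

    class Bug:
        def __init__(self, bug_id, reporter, assignee):
            self.bug_id = bug_id
            self.reporter = reporter    # the person who opened the bug
            self.assignee = assignee    # stays assigned here until closed
            self.status = "open"

        def close(self, requested_by):
            # Only the reporter may close the bug; anyone else is refused.
            if requested_by != self.reporter:
                raise PermissionError("only the reporter can close this bug")
            self.status = "closed"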

Lou
Wednesday, July 30, 2003

We do use a web-based bug tracking system, and it does help. But I still see the same cycle with the bugs that are logged (which they all should be!).

Jon Kenoyer
Wednesday, July 30, 2003

You need to stop managing by committee.

It sounds like no one is responsible for prioritizing bugs and deciding which get fixed and when.

Decide who has the most understanding of the customers and the product (typically the product manager), and give that person the final authority to determine what gets fixed and when.

There should be no discussion about whether or not to fix a bug. The person in charge makes a decision: either it gets fixed, or it's decided that it can stay in the final product because the cost of fixing it would outweigh the cost of a user stumbling across it. (See Joel's "Hard-assed Bug Fixin'" article: http://www.joelonsoftware.com/articles/fog0000000014.html )

As far as addressing your scenarios, here's how we address them, which seems to work very well:
1. It's by design, and this is why.
- This should be pretty straightforward to determine--it's either behaving as the requirements specify or it's not. Before entering a bug, the requirements should be checked to see if the product is behaving as it should. If the requirements are unclear, a bug should be entered for the analyst, not for the programmer.

2. I can't reproduce it.
- Whoever entered the bug should walk over to the coder's workstation and walk them through how to reproduce the bug. If the bug still cannot be reproduced, it should be assigned back to whoever entered it. That person should try reproducing it with the next daily build, and either close it as unreproducible if it no longer occurs, or assign it back to the programmer with more detailed instructions.

3. It's the other developer's problem.
- Assign the bug to the other developer with a description of where the bug is in their code.

4. It is a minor bug and we don't need to fix it.
- The product manager decides this and the programmer never sees it if it's not worth fixing. The programmer is not qualified to make this decision; only someone with detailed customer knowledge can decide how bad it would be if a customer stumbled across a particular bug.

5. We don't have time to fix it.
- Product manager's responsibility. You can fix it now, or you can fix it later at 10 to 1,000 times the cost.

6. I can't fix it without major restructuring.
- The product manager needs to get involved, along with the lead architect. This sounds like a fundamental design flaw that needs to be addressed immediately, or you need to cut some functionality.

7. Fixed (estimated time 1-2 minutes)
- See how easy that was?

Dave
Wednesday, July 30, 2003

By the way, you also need to IMMEDIATELY purchase FogBugz and get better control over your process. Managing bug fixing by e-mail leaves you no audit trail and no way to capture knowledge about each bug.

Visit this page and whip out your credit card: http://www.fogcreek.com/FogBUGZ/

Note: I am in no way affiliated with Fog Creek Software.

Dave
Wednesday, July 30, 2003

"2. I can't reproduce it.
- Whoever entered the bug should walk over to the coder's workstation and walk them through how to reproduce the bug. If the bug cannot be reproduced then, it should be assigned back to whoever entered it, that person should try reproducing it with the next daily build, and close it unreproducible if they can't reproduce it then, or assign it back to the programmer with more detailed instructions."

WRONG!!! The ease of reproducing a bug - or even the ability to reproduce it at all - should never be the deciding factor in closing a bug. While I agree that reproducing a bug makes it easier to fix, failing to reproduce it doesn't mean it can be ignored.

This process will result in a massive number of bugs following implementation. A billing system was implemented in this way, and was unable to produce a customer bill for a three-week period - totally unacceptable.

Joe AA
Wednesday, July 30, 2003

Sorry, Joe, I have to disagree. While we all know they do exist, the approach in an IT center has to be "Non-reproducible problems do not exist" (a co-worker's quote).

Here is why:
Developer: they did not find the bug the first time, and detection through inspection is a very slow, nearly impossible process.
Business: a developer tracking down bugs that may not really exist is a terrible waste of resources. Other activities are not getting done, and sometimes it really is the person doing the testing that is broken, not the code.

Now, if a bug is experienced by many people at seemingly random intervals, someone needs to put the reports together and find the commonality. Until it can be recreated, or is reported by some "X" number of people, you are going to be spending resources with a bad ROI.

BigRoy
Wednesday, July 30, 2003

Non-reproducible problems do sometimes exist: for example, errors might happen on a live (customer) site that don't happen in your lab, because the customer is doing things (strange event sequence, unusual load, etc.) which you're not.

An acceptable answer might be "I've looked at the log files you sent me and I accept that there appears to be a problem with the software's behaviour. Unfortunately I haven't yet been able to determine the cause of the problem ... I've changed our software so that it puts more information into the log file ... please run this new software: this won't fix the problem, but the extra information in the log file should be enough to let me find and fix the problem if and when it happens the next time."
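
In code terms, that "more information into the log file" change might look something like this minimal sketch (hypothetical Python; process_order, submit, and the order fields are stand-ins for whatever the real software does):

    import logging

    logging.basicConfig(filename="app.log", level=logging.DEBUG,
                        format="%(asctime)s %(levelname)s %(message)s")

    def submit(order):
        # Stand-in for the real operation that fails intermittently on site.
        raise RuntimeError("unexpected stuff")

    def process_order(order):
        # Record enough context that the next failure in the field can be
        # diagnosed from the log file alone.
        logging.debug("process_order: id=%s items=%s", order["id"], order["items"])
        try:
            return submit(order)
        except Exception:
            # logging.exception writes the full stack trace to the log file.
            logging.exception("process_order failed, order id=%s", order["id"])
            raise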

Christopher Wells
Wednesday, July 30, 2003

"I can't reproduce it.
- Whoever entered the bug should walk over to the coder's workstation and walk them through how to reproduce the bug"

Wrong.

First of all, QA shouldn't be pre-emptively bothering coders. That accomplishes nothing but creating enmity between the two. At Camel, the QA people assumed they were the #1 priority - one guy would even call a dev in another part of the office and say, "I've got the bug on my screen now - come look." I was *dreading* entering the QA phase of our project if I couldn't get management to deal with that.

Secondly, every bug should be submitted with details of what the user was doing at the time, with screenshots if possible. I've found that a LOT of "cannot reproduce" bugs are the result of either the programmer misunderstanding the bug report ("Oh, that was in the reports form? No wonder...") or the QA person misunderstanding the application ("The subtotal doesn't add up right." "It's a subtotal - it's not supposed to include tax & shipping."), etc.

Philo
Wednesday, July 30, 2003

Typically, when I've had this problem with a bug (back and forth with the developer), it's because the bug report was poorly written. It either didn't describe the bug correctly, didn't contain specific repro steps, didn't detail why the tester thinks the bug should be fixed, etc. If you're going back and forth with the developer on every bug, perhaps you should look for a trend in the types of questions the developer is asking, and make sure you answer those questions in the bug report before he asks them.

And if the bug report really is complete and you're still having this problem, you've got a dev that is either really lazy, really overworked, or both, and he's using the back and forth as a delaying tactic.

MarkF
Wednesday, July 30, 2003

Let me clarify the point about reproducing bugs. I wrongly assumed that everyone in the world is using our standard bug entry template. (A slightly large assumption, I know. *grin*) Here it is with some sample data:

Build
3.0.123.13404

Title
Total cost isn't updated when quantity is modified on an existing line item

Steps to reproduce:
1. Go to the order entry details screen
2. Go to an existing order that has at least one item with a nonzero quantity and a nonzero unit cost.
3. Modify the quantity to a different value.
4. Force the focus off the quantity text box.

Result: Total cost remains unchanged

Expected Result: Total cost is updated to reflect Quantity * Unit Cost

Requirement ID:  UIR1.32.312

Comments: I went back to build 3.0.123.11230 and it works in that build.

~~~~~~~~~~~
The assumption is that the QA person already DID provide very detailed instructions to reproduce the bug.

Also, the QA person isn't the one going over to the programmer's desk unannounced.  The programmer calls the QA person over when they cannot reproduce it.

So, here are the caveats to my original post:
1. The bug must have step-by-step instructions on how to reproduce the bug, as well as the actual and expected results.
2. The programmer must attempt to reproduce the bug with the steps outlined.
3. Instead of just saying, "Ha!  This must have been fixed already," and closing the bug, the programmer must ask the QA person to come attempt to reproduce it on the developer's machine. This prevents a lot of churn caused simply by the QA person perhaps not explaining the steps QUITE clearly enough.

Sorry about that.
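
P.S. For what it's worth, the defect in the sample report above usually boils down to something like this (a hypothetical Python sketch; our actual code is different):

    class LineItem:
        def __init__(self, quantity, unit_cost):
            self.quantity = quantity
            self.unit_cost = unit_cost
            self.total_cost = quantity * unit_cost

        def on_quantity_changed(self, new_quantity):
            # Handler for the quantity box losing focus.
            self.quantity = new_quantity
            # BUG: the total is never recomputed, so the screen keeps
            # showing the stale value. The fix is one line:
            # self.total_cost = self.quantity * self.unit_cost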

Dave
Wednesday, July 30, 2003

Joe AA:

"A billing system was implemented in this way, and was unable to produce a customer bill for a three week period - totally unacceptable. "

Um, if a system is unable to fulfill a core, fundamental piece of required functionality, it seems difficult to believe the problem wasn't reproducible.  How was this possible?

Dave
Wednesday, July 30, 2003

I just re-read your post, Philo. A couple of other points:

1. For any bugs that are visual (e.g., painting, alignment of controls, formatting of displayed information), we do include a screenshot. (I LOVE FogBugz's ability to display these inline in the bug details. AWESOME feature.)

2. Every bug must have a requirement ID. This forces the QA person to actually look at the requirements to see, for example, that subtotals are not supposed to include tax and shipping. This prevents a lot of churn due to someone thinking that there's a bug, but it's really behaving exactly as the requirements specify.

If the QA person can't find a requirement ID, but the product obviously appears to be functioning incorrectly, the bug is assigned to the analyst, who then updates the requirements as necessary.

This is because if it's not in the requirements, it's not the programmer's fault; it's the analyst's fault. Until the requirements say what the product SHOULD do, and it's verified that the product IS doing something different, it doesn't get to the programmers.

Dave
Wednesday, July 30, 2003

Given all the other contributions, each of which contains truth, there is another truth that can lie behind resistance to accepting bugs, and that's ego.

If bug reports are felt to be personal insults, or slights on the professionalism or quality of the code produced, then it's not surprising that it takes a while to get bugs accepted.

Personally I always accept that the reporter of a bug has seen or experienced the behaviour and I'm rarely surprised at the ingenuity of the bugs I can create.

But then I'm perfect in all respects.

Simon Lucy
Wednesday, July 30, 2003

Requiring testers to point back to some formal, documented requirement in their bugs is not a good idea. Let's face it: specs are never that good or up to date, and in my experience most bugs do not have an unambiguous answer in the spec (even when our specs have been very, very good).

I think the right approach to streamlining your bug fixing is:

1. Have a regular (once or twice a week) bug triage meeting where key players in the project go over the new bugs and decide which ones to fix, which ones to punt, which need more investigation, and so on.

2. As others have said, use a good bug tracking system. You are toast without this on anything other than a trivial project.

Developers should focus on fixing bugs that have been accepted, and should never try to punt bugs that have not gone through the triage process. This is WAY more efficient than trying to do it in email, and it also ensures that everyone with a say gets a chance to make their case one way or the other.

It helps to have someone who owns the decision at the end of the day and whose ass is on the line down the road if the wrong call gets made.

Finally, keep the triage meeting as informal as possible and don't overdo the process: lots of red tape will make things less efficient, not more so.

Mike Treit
Wednesday, July 30, 2003

Mike:

I respectfully disagree.

Although there certainly are many bugs that are obvious and need no reference to the requirements (launch program, click button xyz, application crashes), omitting a specific requirement identifier simply adds more time and effort to the process.

The programmer is going to need to know what the CORRECT behavior is supposed to be. Now, the QA person has two choices:

1. Refer to a requirement.
2. Completely describe the required behavior in the bug.

Which is more efficient?

(Let's assume for a moment that the requirement is more involved than something like "The total cost shall display the product of the quantity and the unit cost.")

Dave
Wednesday, July 30, 2003

"...specs are never that good or up to date."

Question: If your specs aren't very good or up to date, how does your QA team test your software?

Dave
Wednesday, July 30, 2003

I think it is much more efficient to simply put the desired behavior in the bug. We do this as part of our triage process. Now, the developer can see all of the information in one place (the bug report) rather than jumping between the bug and one or more external documents.

If the bug requires a major change, modification of the feature set, etc., then the spec may need to be updated. But most bugs do not fall into that category.

How do we test without perfect specs? That's easy: specs just define the baseline expected behavior, but do not cover every possible behavior in every possible condition. Good testers don't test to the spec except as a somewhat small portion of their testing - instead, they focus on the actual quality of the product and find problems that the writer of the spec most likely never conceived of. After all, isn't it possible to write a piece of software that follows the spec to perfection but still doesn't make customers happy?

Truth be told, I wouldn't WANT a spec that rigidly defines every possible behavior - it would not be flexible enough to allow evolving design, and it would also be huge.

Mike Treit
Wednesday, July 30, 2003

"Um, if a system is unable to fulfill a core, fundamental piece of required functionality, it seems difficult to believe the problem wasn't reproducible.  How was this possible?"

Easy... lots of bugs; developers who couldn't debug unless the bug could be displayed in a debugger; pressure to close bugs that couldn't be reproduced, since the developers couldn't fix them otherwise; running out of time, so testing was declared complete; and then implementation.

Piece of cake.

Joe AA
Wednesday, July 30, 2003

The team is arguing amongst itself.  So, this looks like a culture problem to me.

The best way to solve culture problems is to improve the culture, by getting people together and working cohesively.

I agree that testers *shouldn't* have to bug (heh) developers and walk them through each bug.  However, this is a broken team.  This isn't an ideal situation.

The e-mails bouncing back and forth must consume a rather large amount of time.  How long does a six to eight e-mail conversation take?  A day or two?  Multiply that by every bug out there, and that's going to add up.

So, keeping all this in mind, here's what I suggest:  Get the developers and the testers together.  Get them to talk about the bugs.  Get the testers to explain the bugs.  This doesn't necessarily require a walkthrough; just a brief summary of the bug.  You can pull everyone together for a meeting, or have each tester meet with the developer responsible for the bug, or what-have-you.

But this won't solve the underlying problem.  How can you improve the team's cohesiveness?

The Pedant, Brent P. Newhall
Wednesday, July 30, 2003

It is not clear from the post how many originally reported bugs end up being a 6-8 e-mail exchange. It might be that the process helps to clarify the problem and weed out false alarms.

However, if pretty much all of the bug e-mails end up with 6-8-message tails, then it probably makes sense to demo the problem clearly so that the denial is avoided. After all, seeing is believing...

Mr Curiousity
Wednesday, July 30, 2003

Has anybody considered that part of the issue might be communication, rather than process or management?

I think every developer has been in a situation where they fixed a bug, thinking it was the bug requested, and actually fixed some other bug... or misunderstood the bug report... or many other variations.

Another part of the reason some of the bugs may not be getting resolved in the desired fashion is that the developers don't understand what is important in the same way management does. This is not necessarily deliberate; perhaps nobody explained it to them, or they didn't get the point being made, or didn't see why something was important to the users or the company.

S. Tanna
Wednesday, July 30, 2003

My feelings go along with Brent's on this one. It looks to me as though your team has developed a mentality that stops it from getting to grips with bugs.

Lots of developers hate debugging, for lots of different reasons, even though it is a fact of developer life. Some things exacerbate this. The main one I've found is that if the team is under pressure to deliver new code on a timescale they think is too short, debugging will suffer. Also, if developers feel they are being judged (or are judging themselves) by the amount of new code they write, then debugging is going to be second priority at best. Eventually it gets to a stage where a developer would rather go through all the stages of argument above than fix a bug, because they think that if they argue enough they won't be assigned bugs to fix.

If this is true, your main task is to get your developers to see fixing bugs as a priority (it's the manager's job to decide how high a priority that should be, based on the required quality of the code, the current quality, etc.). If I might make some suggestions...

1) Assign your developers some bugs to fix, and a certain amount of time to do it. Make it clear that they are going to be spending a certain number of days on bug fixing. Get them to come to you when the bugs are all dealt with. For any that they class as 'unreproducible' or 'feature', get them to give reasons, and challenge them gently.

2) Start making it clear by your words and actions that fixing bugs is a priority. Give praise in team meetings to people who have fixed particularly nasty bugs. Talk about the bugs that are still to be fixed.

3) Have some team targets for bugs remaining, e.g., "We need to have fewer than ten class-three bugs by the end of the month."

4) Work on bugs yourself.

You need the team to realise that all the delaying tactics they use aren't going to stop them from having to fix the bugs. Of course, that will take time.

David Clayworth
Thursday, July 31, 2003


Of course... working on bugs is only part of the issue. 

One of the things I think any automated method of tracking bugs encourages is the idea that each bug is an entity unto itself... a thing that needs to be fixed... a unit of work that can be assigned. In this way, bug tracking evolves into a work management system.

When this evolution is complete, everyone forgets that bugs are symptoms of a problem... not the problem in itself. This can lead to a band-aid, patchy approach that creates more bugs.

Joe AA
Thursday, July 31, 2003
