Fog Creek Software
Discussion Board

Buffer Time Estimation

What percentage or amount of time needs to be kept as buffer while doing estimation, and on what basis/criteria?

Monday, March 22, 2004

If your estimates are based on very fine-grained tasks (each task is about 1 day in length) and you have enough experience to include EVERYTHING in the estimates (including vacation, holidays, sick days, integration time, "new features" time for the new features that are invented during development, time to go to stupid management meetings, time to interview people, etc.) then you're only going to need 10% buffer.

Rather than having buffer time which is just a way to wave your hands, you should have types of buffer and allocate time to each one based on priorities:

* Buffer for unexpected features we thought up during development
* Buffer for unexpected competitive responses needed because our competitor did something
* Buffer to allow code written by different developers to be integrated so it works together (depending on the experience of your team this can be 25% - 100%)
* Buffer to find and fix bugs during testing
* Buffer for non-development tasks that employees must perform, e.g. "1 day mandatory diversity training," "emergency company meeting," fire drills, birthday cake for the boss, etc. etc.
* Buffer because things took longer than estimated
* Buffer because things needed to be done for which no estimates had been provided

Break it down like this and you can track it carefully. If you're 80% done and you've only used 20% of the boss-birthday-cake budget, you can remove hours from that line and put them on something else more urgent.
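A minimal sketch of that kind of per-category buffer ledger (category names from the list above; the hours and helper functions are made up for illustration, not an actual Fog Creek tool):

```python
# Hypothetical per-category buffer ledger. Hours allocated per line item
# are illustrative only.
buffers = {
    "unexpected features": 40,
    "competitive responses": 16,
    "integration": 60,
    "bug fixing": 80,
    "non-development tasks": 24,
    "underestimates": 40,
    "unestimated work": 24,
}
used = {category: 0 for category in buffers}

def spend(category, hours):
    """Record hours consumed against a buffer category."""
    used[category] += hours

def transfer(src, dst, hours):
    """Move unused hours from one buffer line to a more urgent one."""
    remaining = buffers[src] - used[src]
    assert hours <= remaining, "can't move more than what's left"
    buffers[src] -= hours
    buffers[dst] += hours

# If the boss-birthday-cake line is barely touched, reassign its hours:
spend("integration", 20)
transfer("non-development tasks", "bug fixing", 10)
```

Tracking consumption per line is what lets you spot, mid-project, which buffers are underused and reallocate them.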

Joel Spolsky
Fog Creek Software
Monday, March 22, 2004

Ever thought about making an app based on this system?

I know that it is fundamentally very simple, but then isn't a usable implementation of a simple idea the best kind of software there is?
A webapp even.

Eric Debois
Monday, March 22, 2004

I was just reading the second edition of the Death March book, in which he claims there are "about 50" applications designed for doing this which will give you estimates within 10%.

The trouble is that those systems are completely garbage-in, garbage-out, and the actual effort of making a list and summing up the times of all the items in the list just doesn't justify software. Like I always said: Excel spreadsheet, 7 columns, maybe another column for the name of the assigned developer, and get on with your life. It takes real intelligence to break down the problem into small chunks and estimate those, and no software is going to do that for you.
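The spreadsheet Joel describes really is just a list of fine-grained tasks summed up. A sketch with made-up task names and hours:

```python
# Sketch of the estimate spreadsheet: one row per fine-grained task
# (about a day or less each), with an assigned developer column.
# Task names and hours are hypothetical.
tasks = [
    ("parse config file", 4, "alice"),
    ("write importer",    8, "bob"),
    ("integration pass",  6, "alice"),
]

total_hours = sum(hours for _, hours, _ in tasks)
print(total_hours)  # 18
```

The hard part, as Joel says, is producing the rows, not summing them.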

Joel Spolsky
Fog Creek Software
Tuesday, March 23, 2004

I am glad you mentioned that Death March book. I was going to buy it last month at the book fair, but there wasn't an Indian reprint. It was available in the US edition for the equivalent of Rs. 1073 or something.

Sathyaish Chakravarthy
Tuesday, March 23, 2004

Construx Software (Steve McConnell's consulting company) has a free project estimator. It's pretty easy to use.

Mr. Analogy
Tuesday, March 23, 2004

The only disagreement I would have with the above is that you want to be working on getting rid of the buffer for "because things take longer than you estimated". You should be working on getting your developers to be factoring this into their own estimates. That way they are taking responsibility for their estimates and not relying on your buffer time to get them out of a hole.

For developers who haven't been through your estimation procedure much you may want to actually keep some time back for this, but possibly not under that name.

David Clayworth
Tuesday, March 23, 2004

Buffering is to protect deadlines.  No deadline, no buffer.

My experience with buffer time runs exactly opposite to Joel's recommendation here: do *not* account for all that crap.  Not as aggregates, not in the estimates, and definitely not as separate line items.  I'd be pulling my hair out in minutes!

Second, in my experience developers will never factor in non-effort items no matter how hard you try to get them to, so it's silly to try.  (Hell, when I'm wearing the developer hat I tend to forget to include things like documentation or pre-distribution mastering in my estimates.)

So, instead of forcing them to perform unnatural acts for your convenience, take effort estimates in Ideal Engineering Days (or Hours), and then multiply by a constant factor, determined by measurement.  In XP they call this "yesterday's weather".  If last month you finished tasks equal to X Ideal Days, assume that this month you'll probably get about the same.
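The "yesterday's weather" rule above can be sketched in a few lines. The measured numbers here are hypothetical; the point is that the conversion factor comes from measurement, not from asking developers to pad:

```python
# "Yesterday's weather": measure last period's throughput in Ideal
# Engineering Days, derive a load factor, and use it to project forward.
# All figures below are illustrative.
last_month_ideal_days_done = 12.0   # measured output
calendar_days_last_month = 20.0     # working days available
load_factor = last_month_ideal_days_done / calendar_days_last_month  # 0.6

def calendar_days_needed(ideal_days_estimate):
    """Convert a raw effort estimate into expected calendar time."""
    return ideal_days_estimate / load_factor

print(calendar_days_needed(9.0))  # 15.0
```

Developers keep estimating pure effort, and the multiplier silently absorbs meetings, interruptions, and everything else they would never remember to include.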

If you have a deadline to meet, add *one* buffer, before that deadline.  If you have intermediate deadlines or milestones that other people are using to measure your performance, try to get rid of them.  If you can't, then you'll need to add a buffer before *each* deadline.  The sucky bit here is that you can end up needing *more* buffer in total than if you had just the one deadline.

(By the way, I don't mean to imply milestones or intermediate deliverables are *bad*, just that they're for information and should not be made into metrics, or else you get the same kind of dysfunctional behavior that comes from tracking developer productivity by bug counts!)

Phillip J. Eby
Tuesday, March 23, 2004

Can anyone tell me how Theory of Constraints a la Goldratt fits in all of this. I've read some parts of Goldratt's ideas but I'm not into it enough to be able to compare it to say, XP or Joel's approach. The way I understand it, Goldratt/XP advocate one buffer and ideal estimations. But they rely on trusting developers/workers to give their honest estimates and promising never to hang them for incorrect ones.

I'm more into the XP/Goldratt ideas than Joel's, but I'm just a believer, not a practitioner at the moment.

Anyone care to offer his views on that one?

Wednesday, March 24, 2004

The buffering that results from TOC-based solutions typically puts a single buffer in front of all "important" events.  In its plainest form, the important events are anything that could cause the project to miss delivery, so the project's end date is buffered and there are buffers placed on each branch of work as it merges into the critical chain.

The value of using a buffering strategy that doesn't distinguish between this or that reason for deviation from plan is that you wind up with a smaller buffer, overall, to provide the same level of protection.  (This follows from how independent variations pool: they partially cancel, so the variability of the aggregate grows more slowly than the sum of the individual variabilities.)  This also makes for a very easy system for monitoring and responding to so-called "special causes" of deviations from the project plan.

The downside to this is that when something does go wrong you have to dig around a bit to find out what happened.  If you allocate buffer over several categories, you pay for more buffer to get a given level of protection, but you make it a bit easier to determine where your problems are happening.  You also complicate your ability to control the project.

As an aside, Toyota chooses to spread its buffers out to make finding problems easier, even though they know it increases their inventory requirements, so there is a precedent for this kind of approach.

The TOC buffering strategies are useful, but they aren't meant to be used passively or in a vacuum.  You only get the complete benefit if you subordinate your behavior to the choice to use this buffering strategy.  This means cutting out multi-tasking, not fixing "due-dates" for each task item in the plan, fostering "relay-race" behavior in your developers, and so on.

So, for example, you wouldn't nail someone to the wall for missing the "due-date" for his task.  There wouldn't be a "due-date" even attached to the task.  What you would do is monitor buffer penetration.  If penetration reached the point where the buffer might be exhausted (thus jeopardizing the project), you'd go find out who was holding things up and figure out what to do to get things moving again.  There is still accountability, but it's linked to successfully delivering the product as a whole, rather than delivering an individual task on an arbitrary date.
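The buffer-penetration check described above can be sketched as a comparison of buffer consumed against work completed. The thresholds here are illustrative, not taken from Goldratt's books:

```python
# Sketch of TOC-style buffer penetration monitoring: compare the fraction
# of project buffer consumed against the fraction of work done.
# The 0.2 "act" margin is a hypothetical threshold for illustration.
def buffer_status(buffer_days, buffer_consumed, work_fraction_done):
    penetration = buffer_consumed / buffer_days
    if penetration > work_fraction_done + 0.2:
        return "act"    # buffer burning much faster than work completes
    if penetration > work_fraction_done:
        return "watch"  # penetration slightly ahead of progress
    return "ok"

# 60% of buffer gone with only 30% of work done -> time to intervene.
print(buffer_status(30, 18, 0.3))  # "act"
```

Nobody gets chased over an individual task date; intervention happens only when penetration outpaces progress enough to threaten delivery.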

CCPM user
Wednesday, March 24, 2004


Larry Leach offers a chapter of his CCPM book at his homepage, dealing mainly with 'contingency' (read: buffer) in PM. See

- Roland
Thursday, March 25, 2004

Ask developers about the time they will take to implement the features (T).  Multiply it by 2.5. If it is a very complex algorithmic type project, multiply it by 3.0.

In my experience that's a pretty good approximation for total project time.  Usually coding time is only 40% of total project time.
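That rule of thumb is a one-liner (note that 1/2.5 = 40%, which is where the coding-is-40%-of-total figure comes from):

```python
# Nitin's rule of thumb: total project time = coding estimate times a
# project-type multiplier. The function name is just for illustration.
def project_estimate(coding_days, complex_algorithmic=False):
    return coding_days * (3.0 if complex_algorithmic else 2.5)

print(project_estimate(40))        # 100.0
print(project_estimate(40, True))  # 120.0
```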

Nitin Bhide
Thursday, March 25, 2004
