Fog Creek Software
Discussion Board

Critical Chain, ToC & Project Management.

Hey Guys,

I am assuming that a lot of you have read The Goal and Critical Chain.

Have you incorporated any of this into managing software projects? How has it worked out? Let me know your observations and successes.


Prakash S
Friday, March 21, 2003

I've tried Critical Chain on one project so far, but I can't draw any conclusions yet. Unfortunately, because of an impossible deadline, the task durations got squeezed down to less than the 50% guideline. Not only was there no padding in each task, the durations were basically impossible. So, we were late on almost every task.  Also, I was not allowed to put in a big enough project buffer.

I still believe it can work because we were much less late than we've been on other projects.

I also found myself concentrating on the critical chain. I had 2 resources that were overscheduled in a way that I would have missed if I had just looked at the critical path. I optimized those 2 resources as much as possible. So, I think being aware of the critical chain improved my project management to some extent.

Overall, I think it works but I need more projects and more control over the schedule.

Friday, March 21, 2003

Yeah, that is true; you need complete control to assign buffers, etc.

Prakash S
Friday, March 21, 2003

One thing the buffers buy you is a mechanism for evaluating the schedule risk a certain "due date" imposes on you.  A simple way to do this is to use the aggressive and comfortable estimates to generate a statistical distribution for the length of the chain.  This distribution will show you the confidence associated with a specific chain duration.

The way to use this is to use the risk level as a bargaining chip with the folks that like to assign arbitrary due dates.  Give them a list showing risk percentiles in column one and projected finish dates in the second column and tell them to circle the one that achieves their objectives.
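A back-of-the-envelope sketch of that two-column table (all numbers here are hypothetical, and each task is simply sampled uniformly between its aggressive and comfortable estimate, which is cruder than a real distribution fit):

```python
import random

random.seed(42)

# Hypothetical chain: (aggressive, comfortable) estimates in days per task.
chain = [(3, 7), (5, 12), (2, 6), (4, 9)]

def simulate_chain_length(chain, trials=10_000):
    """Sample each task between its aggressive and comfortable estimate
    (uniform here for simplicity) and sum along the chain."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.uniform(lo, hi) for lo, hi in chain))
    return sorted(totals)

totals = simulate_chain_length(chain)

# Percentile table: risk level vs. the chain duration you'd have to commit to.
for pct in (50, 70, 85, 95):
    idx = int(len(totals) * pct / 100) - 1
    print(f"{pct}% confidence: {totals[idx]:.1f} days")
```

Hand them exactly that table (converted to calendar dates) and let them circle a row.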

They tell you what date they want, you tell them what risk they are assuming.  After the decision is made make sure to publish both numbers to all stakeholders as a part of your periodic summary information.  Publicity of this sort is very effective at curbing schedule exuberance in managers since none of them wants to be known as being a reckless cowboy.

As an aside, learning how to properly identify and analyze risk is probably your single best tool for fighting against arbitrary management decisions.  When they adjust project parameters like budget, scope, schedule, quality, etc, they are implicitly adjusting project risk.  The problem is that this side of things is rarely made explicit.  When you make it explicit, you turn a one-sided argument "Who doesn't want to make the schedule shorter???" into a tradeoff "I'd like a shorter schedule, but I can't see assuming that much risk..."

To answer the original question, I've used the approach successfully for about four years, and have nothing but good things to say about it.  The biggest problem is the lack of tool support.  My team has had to build our own toolset to make this stuff tractable.

CCPM user
Friday, March 21, 2003

(sorry for the public post)

CCPM, may I quote you?

Reginald Braithwaite-Lee
Friday, March 21, 2003

Reginald:  certainly.

By the way, another great place to ask questions like this is on the Critical Chain group at yahoo:

There are several folks on there who have been using this approach for a while and are typically happy to share their experiences and observations.

CCPM user
Friday, March 21, 2003

CCPM user:

"As an aside, learning how to properly identify and analyze risk is probably your single best tool for fighting against arbitrary management decisions.  "

Do you have any advice/ tips/ links on how to go about this?

You make some very interesting points.


Prakash S
Friday, March 21, 2003

If you are interested in identifying and analyzing risk, the best place to begin is to look at the simple and the obvious.  If you examine the way scheduling is typically done (using Gantt charts, etc) you'll see that the key analysis element is the task, and that the task obeys the following equation:

t = (s * e) / r


t = schedule time or calendar time (hours)
s = scope, number of features (units)
e = effort per feature (resource-hours/unit)
r = resources

Let's assume that we're going to analyze a single task, first using traditional methods and then again keeping an eye out for schedule risk.

The traditional approach would just evaluate the equation above like this:

s = 3 units
e = 3 resource-hours/unit
r = 0.5 resource (a case where you are on the project half time - very common).

If we plug these numbers into the equation, we get (3 * 3)/0.5 = 18 hours of schedule time to get the job done.

Now let's look at it with an eye to risk.  Risk is really the cost of uncertainty in a lot of ways, so let's add some uncertainty.  Let's now assume:

s = 2-4 units
e = 2-4 resource-hours/unit
r = 0.25 - 0.75 resource

This keeps the average case the same as the traditional approach.  We can use just this much information to compute a back of the envelope best case and worst case as:

best t = (2 * 2) / 0.75 = 5.3 hours
worst t = (4 * 4) / 0.25 = 64 hours

So, just for a rough approximation, we've decided that this single task can take from 5 hours to 64 hours, with just a small amount of uncertainty.  Computing the average case yields the same 18 hours as before.  You can see how a little uncertainty can really change the picture when you acknowledge it.  You can apply similar logic to chains of tasks and then to small project plans.
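The arithmetic above is trivial to script; here is the same calculation, with the same numbers as in the text:

```python
def schedule_time(s, e, r):
    """t = (s * e) / r : calendar hours for a task."""
    return (s * e) / r

# Traditional point estimate.
nominal = schedule_time(3, 3, 0.5)     # 18.0 hours

# Same task with the ranges acknowledged.
best = schedule_time(2, 2, 0.75)       # ~5.3 hours
worst = schedule_time(4, 4, 0.25)      # 64.0 hours

print(f"nominal {nominal:.1f}h, best {best:.1f}h, worst {worst:.1f}h")
```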

This kind of analysis is useful for convincing yourself that it's important not to ignore these small uncertainties, but the results are not realistic.  The next step is to refine the analysis so that the results are useful for decision making.  The best way to do this is to either develop or purchase a good Monte Carlo simulation package. My favorite is Crystal Ball by Decisioneering (no affiliation). Another good one is @Risk by Palisade Software.  You can easily develop a basic one using Excel, though, and doing so is useful for understanding how these things work.
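As a stand-in for those commercial packages, here is what the core of a homegrown Monte Carlo version might look like (triangular distributions are my assumption here; the real tools let you pick from many):

```python
import random

random.seed(1)

def sample_task():
    """One Monte Carlo draw of t = (s * e) / r with uncertain inputs."""
    s = random.triangular(2, 4, 3)           # scope: units
    e = random.triangular(2, 4, 3)           # effort: resource-hours/unit
    r = random.triangular(0.25, 0.75, 0.5)   # resource fraction
    return (s * e) / r

samples = sorted(sample_task() for _ in range(20_000))
mean = sum(samples) / len(samples)
p90 = samples[int(0.9 * len(samples))]
print(f"mean {mean:.1f}h, 90th percentile {p90:.1f}h")
```

Note that the simulated mean comes out a bit above the naive 18 hours: dividing by an uncertain r skews the distribution to the right.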

These packages allow you to easily set up models like the one above, but provide a much more sophisticated (and realistic) analysis capability.  When you get good with these tools (it doesn't take very long), you will have a potent weapon for turning one-sided arguments to your advantage.  A nice side effect of this kind of approach is that these are the same kinds of analyses that the finance guys do, so it's easy to get them on your side.  Another nice thing that sometimes happens is that the project managers get enthralled with your ability to produce seemingly complicated charts that show things like the project's "cone of uncertainty," which makes them look good when they move it up the management chain.  When this happens, they turn into numbers junkies and you become their favorite pusher.  In other words, you da man.

Aside from using tools like this, it's useful to get a basic understanding of practical statistics, especially some of the common statistical fallacies like the so-called "Flaw of Averages" (see ), and some of the basic ideas like the law of large numbers and the central limit theorem.  You don't have to understand the theory as deeply as the professors would like; it's much more important to just learn to apply and interpret the stuff - I don't care about the equation that defines a Weibull distribution as much as I do about what its shape tells me.
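To see the "Flaw of Averages" concretely: plugging the average input into a nonlinear formula is not the same as averaging the formula over the uncertainty. A toy demonstration, reusing the numbers from the task example above:

```python
import random

random.seed(0)

# Uncertain resource availability: uniform between 0.25 and 0.75,
# averaging 0.5 as in the point estimate.
trials = 100_000
r_samples = [random.uniform(0.25, 0.75) for _ in range(trials)]

t_of_avg = (3 * 3) / 0.5                              # formula at the average r
avg_of_t = sum((3 * 3) / r for r in r_samples) / trials

print(f"t at average r: {t_of_avg:.1f}h")
print(f"average of t:   {avg_of_t:.1f}h")             # noticeably larger
```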

You can supplement this kind of work with other tools like the free estimation tool available from Steve McConnell's company.  He also offers a very good estimation class that covers some alternate approaches to risk identification, analysis, and reduction.  I think he has a book called "The Black Art of Estimation" or something slated for publication that covers basically the same material, but the class is nice because you can grill McConnell.  Some of it is also covered in "Rapid Development."

The main thing to walk away with, though, is that just like production lines and project schedules, communication with management is typically compromised by a small number of limiting factors.  Being able to bring even a small amount of this stuff to the table goes an amazingly long way toward strengthening your position; getting management to put some skin in the game is one of those limiting factors.  Once you deal with these, do just as Goldratt says: find the next problem and tackle it the same way.

Anyhow, I've included my email address on this post, so if you'd like to find out more or to tell me what a putz I am or whatever, feel free to shoot me an email :)  Or post here again, I read regularly.  Hope this helps.

CCPM user
Saturday, March 22, 2003

May I add another, I trust pertinent, point to CCPM's excellent commentary.

One unspoken assumption in the 'Flaw of Averages' is that estimates of the duration of tasks are independent of one another, i.e. that the distributions of values of s, e, and r (in CCPM's description) for each task are not correlated. Therefore, if r (say) is 0.4 on one task, putting us late, it is just as likely to be 0.6 on some other task, putting us back on schedule.

This is a fallacy. In reality these figures can be highly correlated; if you underestimate the complexity of one task (i.e. s is too low) or your manager doesn't actually give you the promised 50% of a programmer's time (i.e. your assumed r is too big), it is highly likely that the same errors in s and r will run throughout the entire chain of tasks.
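You can see the effect of this correlation with a quick simulation (hypothetical numbers: ten tasks of 8 hours each, with the estimation error either independent per task or fully shared across the chain):

```python
import random
import statistics

random.seed(7)

N_TASKS, TRIALS = 10, 5000

def chain_total(correlated):
    if correlated:
        # One shared multiplier: a systematic bias hits every task alike.
        m = random.uniform(0.5, 1.5)
        return sum(8 * m for _ in range(N_TASKS))
    # Independent errors: overruns and underruns tend to cancel.
    return sum(8 * random.uniform(0.5, 1.5) for _ in range(N_TASKS))

indep = [chain_total(False) for _ in range(TRIALS)]
corr = [chain_total(True) for _ in range(TRIALS)]

print(f"independent: mean {statistics.mean(indep):.0f}h, "
      f"stdev {statistics.stdev(indep):.1f}h")
print(f"correlated:  mean {statistics.mean(corr):.0f}h, "
      f"stdev {statistics.stdev(corr):.1f}h")
```

The means match, but the correlated chain's spread is several times wider, which is exactly the risk the independence assumption hides.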

What this indicates is that it's vital to monitor actual values of s, e and r in the early stages of a project. By getting a handle on these values you can refine not only the expected duration of the project, but also the range (variance) of likely completion times.

This underlies the technique used in the XP 'planning game'. Tasks are estimated in abstract units, often 'jelly beans', that are then converted to real durations by a 'project velocity' of jelly beans per hour. The achieved velocity incorporates the uncertainty in scope, estimation, and resource availability.
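For illustration, the jelly-bean conversion is just this (task names and numbers made up):

```python
# Planning-game style conversion: tasks estimated in abstract units
# ("jelly beans"), converted to hours by the measured project velocity.
estimates_beans = {"login form": 3, "report export": 5, "search": 8}

# Measured over the last iteration: beans completed vs. hours spent.
beans_done, hours_spent = 12, 60
velocity = beans_done / hours_spent      # beans per hour

for task, beans in estimates_beans.items():
    print(f"{task}: {beans} beans -> {beans / velocity:.0f} hours")
```

Because the velocity is measured, not estimated, it automatically folds in whatever systematic bias is running through the chain.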

David Roper
Saturday, March 22, 2003

David Roper makes a couple of very important points that need to factor into any comprehensive attack on these problems.  The first point is that models based on historical data are almost universally better than models made from scratch.  Having this data available allows you to tune and calibrate your models early, saving a lot of time.  This means that it is in your best interest to track your project data closely.  Not only will this help you improve the accuracy and precision of your models, but it can also help you eliminate some of the subjectivity from the planning process. 

In the ideal case you would eventually have enough good data available so that you could change from an estimation problem to a classification problem.  For instance if you look at all your projects and find that most of the tasks can be grouped into ten buckets so that items in each bucket are similar, you can then track actual values for the items in these buckets.  This practice will allow you to construct a performance profile for each bucket.

Now, when it's time to start putting data on tasks, instead of trying to estimate the correct values for a task, you can first try to fit the task into one of the buckets.  If you can convince yourself that the task fits into the bucket, you can use the bucket profile as an estimator for the task instead of making one up.  When the boss wants you to alter the estimate you can say "I didn't make the estimate, it's based on historical data."  The only responses she has are to accept the estimate, to convince you your classification was wrong, or to ask you to alter the past :)  The first one is fine and the last one is impossible.  As for the second one, it's much harder to fight about a classification problem than an estimation problem.  "Boss, why don't you think that this task qualifies as a 'simple database access?' You think it's really a 'form based gui?'"
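A sketch of the bucket idea (the bucket names and historical hours are invented for illustration):

```python
import statistics

# Hypothetical historical buckets: actual hours recorded for past tasks
# that were classified into each category.
history = {
    "simple database access": [4, 6, 5, 7, 5],
    "form based gui": [12, 15, 11, 18, 14],
}

def bucket_profile(name):
    """Estimate a new task from its bucket's history, not from scratch."""
    actuals = history[name]
    return statistics.mean(actuals), statistics.stdev(actuals)

mean, sd = bucket_profile("simple database access")
print(f"estimate: {mean:.1f}h +/- {sd:.1f}h "
      f"(n={len(history['simple database access'])})")
```

The estimate now comes with its own spread, and arguing about it means arguing with the historical record.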

The second point is that your development process may have some systematic bias associated with it - this is quite common.  Manufacturing has developed an entire science, called statistical process control, to deal with the problem of identifying, characterizing, and eliminating this kind of variation from their processes.  Some of these tools would be hard to use with software because they require large populations to sample in order to work well.  Even so, understanding the ideas and principles that  these things are based on can give you a lot of insight into how uncertainty is affecting your processes.  This could be helpful in making more effective use of approaches like the one David describes for the XP planning game.
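Some of the simplest SPC tools are small enough to compute in a few lines. Here is a sketch of an XmR (individuals and moving range) chart, with invented data, flagging a task whose estimation error is outside the natural process limits:

```python
# Hypothetical data: estimation error per task, in percent over estimate.
errors = [12, 8, 15, 10, 9, 14, 11, 40, 13, 10]

mean_x = sum(errors) / len(errors)
moving_ranges = [abs(b - a) for a, b in zip(errors, errors[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# XmR natural process limits: mean +/- 2.66 * average moving range.
ucl = mean_x + 2.66 * mr_bar
lcl = mean_x - 2.66 * mr_bar

# Points outside the limits signal a special cause worth investigating.
signals = [x for x in errors if x > ucl or x < lcl]
print(f"limits: [{lcl:.1f}, {ucl:.1f}], out-of-control points: {signals}")
```

Points inside the limits are just routine variation; chasing them individually is tampering.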

The best book I've seen on this material is "Understanding Variation" by Donald Wheeler.

CCPM user
Saturday, March 22, 2003
