Fog Creek Software
Discussion Board

Measuring Performance

In another thread, Tapiwa says: "To base one's investment solely on the past is almost like driving by looking in the rear view mirror."

This is something that I think has broad applicability, and it bothers me at work.  Most of the "measurements" that the managers I know use to judge team "performance" are backward looking.  They rely on extrapolating the past into the future, often with dire consequences, in my experience.

For instance, one of the popular measurements is "performance against schedule."  There are several ways to compute this number, but all of them rely on counting up the number of tasks done to date and comparing that number with the number of tasks that were planned to be done to date.  If the first is bigger than the second, throw a party.  If the second is bigger than the first, light fires under arses.  But this number is basically meaningless.  You can't rely on it to tell you if you will meet your deadline or even what date to move the deadline to so that you can meet it.  So why judge performance based on something so obviously contrived?

Anyone out there aggravated with this kind of thing?  Got any better systems to use instead?

Mr. Metric
Monday, September 15, 2003

<shrug> It's just something that's hard to judge.  Pretty much all the methods of judging progress and rewarding productivity have holes you can drive a truck through.
Unfortunately it _is_ something we need to do, so we use the bad methods to do it because there are no others.
Overall, if a system is used by intelligent people who understand its weaknesses and its strengths, and everyone involved acts in good faith, most systems will do a half-decent job of measuring stuff.
<g> Explaining that to some people can be difficult, though.

FullNameRequired
Tuesday, September 16, 2003

Big companies like to use methods and procedures in an attempt to ensure consistency, but in reality most decent managers just use 'gut instinct' or ask their most trusted developers for an informal opinion.

e.g. measuring by deadline - are you measuring the developer or the estimator?

Most developers are better at some things than others, was the developer given appropriate tasks?

If a developer was slower than others is it because the code quality is higher? Conversely, is a fast developer fast because inadequate testing is being done?


Tuesday, September 16, 2003

"Unfortunately it _is_ something we need to do, so we use the bad methods to do it because there are no others."

There are plenty of others.  A really simple one is to update the plan with all current information and see where the end date falls.  If it falls after your deadline, then based on the best available information you have a real risk of being late.  If it falls before the deadline, then the best available information says you will make it.  This is a much better measurement of "performance against schedule" than the usual one.  There are even more powerful variations on the same theme that let you assess the risk associated with meeting your deadline.
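The re-planning approach above can be sketched in a few lines of Python.  The task names, estimates, and dates here are invented for illustration, and the plan is deliberately naive (one developer, tasks run back to back):

```python
from datetime import date, timedelta

# Revised remaining estimates (in working days), updated with
# everything known today -- these numbers are assumptions.
remaining_days = {"parser": 4, "ui": 7, "integration": 5}

today = date(2003, 9, 16)
deadline = date(2003, 10, 10)

# Naive single-track plan: remaining tasks run one after another.
projected_end = today + timedelta(days=sum(remaining_days.values()))

if projected_end > deadline:
    print(f"At risk: projected {projected_end} is past {deadline}")
else:
    print(f"On track: projected {projected_end} is on or before {deadline}")
```

A real plan would account for dependencies and parallel work, but even this crude projection answers the forward-looking question (when will we finish?) rather than the backward-looking one (how many tasks have we checked off?).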

The big problem is that the people put in charge of creating and managing schedules are taught the most naive approaches to the task, assuming they are taught anything at all.  Very few have any experience with, or knowledge of, sound scheduling practices.  The measurement mentioned above, though foolish and misleading, is all too common.

anon
Tuesday, September 16, 2003

Today I had the revelation that even the most sophisticated of schedules is still a wild-ass guess.  However, the fact that a schedule even exists means three things: (a) the goalposts are set, (b) the goalposts aren't infinitely far away, and (c) progress is made daily towards the goalposts.

Schedules are a useful fiction.

Alyosha`
Tuesday, September 16, 2003

Nothing wrong with guesses as long as the uncertainty is acknowledged and managed.

anon
Tuesday, September 16, 2003

The numbers are not meaningless, they simply cannot give you a very accurate answer.

1. Some form of feedback is vastly better than no feedback.
2. The scope of many things cannot be well predicted and so the danger of these systems lies in wasting excessive amounts of time trying to predict what cannot reasonably be predicted.

Richard Kuo
Tuesday, September 16, 2003

Accuracy is possible, as long as you do not require precision.  There are ways to absorb uncertainty in your schedule.  See Frank Patrick's website at:

http://www.focusedperformance.com

for excellent discussions of how to do this properly.  The site also has a lot to say about measuring performance correctly.

anon
Tuesday, September 16, 2003

"Today I had the revelation that even the most sophisticated of schedules is still a wild-ass guess.  However, the fact that a schedule even exists means three things: (a) the goalposts are set, (b) the goalposts aren't infinitely far away, and (c) progress is made daily towards the goalposts.

Schedules are a useful fiction."

Careful, Philo got hung out to dry for comments like that one.  <g>

Kevin
Tuesday, September 16, 2003

Note that performance measurements can be misapplied, and can be applied well.  I've seen it applied well.

A healthy organization uses past experience to provide a reasonable approximation of the probable future.  Using it as a precise oracle of the future is foolish, and healthy organizations won't do that.  But it's better to use real past experience than random guesses.

Where I work, we've used the number of documents completed per week as an indication of when we'll be done.  It's inaccurate, because many documents are dependent on other things that won't be completed until late in development, so realistically we'll complete more documents at the end than at the beginning.  But it's a useful reality check.  Seeing that the documentation progress has flatlined for the past three weeks is a powerful motivator; it shows just how far away we are from our goal.

So, measurement of future performance based on past performance is neither a "wild-ass guess" nor perfect.

(Which is basically what others have written here.)

The Pedant, Brent P. Newhall
Tuesday, September 16, 2003

The point is, future performance against what goal?  If you think about it, the measurement described early in this thread is really trying to measure the effectiveness of the project at tracking the schedule on a task by task basis (or alternatively, the effectiveness of the schedule at predicting the project - toss that one at 'em next time they start ranting).  But the goal of most projects is not to track a schedule, it is to deliver a product. 

This measurement fails miserably in having anything to say about that.  For instance, think about the case where every task not on the critical path is behind, and every one on the critical path is on track.  Clearly the end product is not (yet) in jeopardy, despite the fact that this measurement indicates that the project is performing horribly. 

Alternatively, have just one task on the critical path behind and every other task on track.  The end product is at risk, but the measurement would indicate that all is well.
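Both scenarios can be checked with a toy critical-path computation.  The task graph and durations below are invented; each task maps to its duration in days plus its prerequisites:

```python
def finish_day(tasks):
    """Earliest finish of the whole project = longest path through the DAG."""
    memo = {}
    def finish(name):
        if name not in memo:
            dur, preds = tasks[name]
            memo[name] = dur + max((finish(p) for p in preds), default=0)
        return memo[name]
    return max(finish(t) for t in tasks)

plan = {
    "design":  (5, []),
    "backend": (10, ["design"]),   # on the critical path
    "docs":    (3, ["design"]),    # off the critical path
    "ship":    (2, ["backend", "docs"]),
}
print(finish_day(plan))  # 17: design -> backend -> ship

# "docs" slips 4 days: end date unchanged, yet task-counting says we're behind.
plan["docs"] = (7, ["design"])
print(finish_day(plan))  # still 17

# "backend" slips 1 day: every other task on track, yet we ship late.
plan["backend"] = (11, ["design"])
print(finish_day(plan))  # 18
```

The task-counting measurement treats all three runs identically (one task slipped each time), while the actual ship date only moves in the last one.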

Now, think about a manager receiving this kind of information.  What kinds of behaviors will this measurement induce in him?  In the first case, he'll scramble trying to save a project that really isn't in jeopardy.  In the second, he'll probably toot his own horn at review meetings all the way up to the last integration task at which point the project goes over schedule, over budget, and basically flops.  And he never saw it coming.

The measurement described above assumes that product completion is a linear function of task completion (i.e., weeks = tasks / (tasks/week)), when a project of any complexity is a nonlinear system that is also subject to large amounts of variation and uncertainty.
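As a sketch, here is that linear extrapolation next to an assumed nonlinear reality in which the easy tasks were finished first (all numbers invented):

```python
tasks_total, tasks_done, weeks_elapsed = 100, 60, 6
rate = tasks_done / weeks_elapsed                      # 10 tasks/week so far
linear_weeks_left = (tasks_total - tasks_done) / rate
print(linear_weeks_left)   # 4.0 -- looks comfortably on schedule

# Assumed reality: the 40 remaining tasks are integration and debugging
# work averaging 0.25 weeks each, not the 0.1 weeks/task seen so far.
actual_weeks_left = (tasks_total - tasks_done) * 0.25
print(actual_weeks_left)   # 10.0 -- 2.5x the linear projection
```

The linear number isn't merely imprecise; it is biased in the optimistic direction whenever the hard tasks cluster at the end, which is exactly where integration work tends to live.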

My feeling is that numbers like this appear on the scene because (1) a number is needed, (2) deriving or measuring a good one is often hard, and (3) numbers like this pass a cursory first inspection because they seem logical (if you hold naive assumptions).  Worse still, when you do introduce a high-quality measurement into the mix, people get glassy-eyed when you try to explain what is measured and how to respond to it.  If you disagree, try adding an indication of uncertainty to your measurement at the next review and watch the rending of garments and the gnashing of teeth.

anon
Tuesday, September 16, 2003

One common pathology arises because managers without experience in development are often accustomed to pushing work along to meet deadlines, and this includes cutting corners.

With most business tasks, this is OK.  A report that's 95 percent done can seem just as good as one that's 100 percent finished.  A sales program that got 1020 leads is nearly as good as one that got 1100.

But software doesn't work like that. If it's not finished, it doesn't work. It can't be pushed in the same way.

The second factor is that schedule management in development typically forces people with poor communication skills and little practice at asserting requirements to negotiate with sophisticated manipulators.  The result is that poor managers' flawed assessments gain traction, and parties who are often innocent shoulder the blame, frequently in scenarios that an outside observer would consider valid.


Tuesday, September 16, 2003

Note that both of the previous anonymous posters assume that managers will have immature reactions to schedules.

Which is my point.

The Pedant, Brent P. Newhall
Wednesday, September 17, 2003

Actually, I'm not assuming that managers will have an immature reaction to the schedule.  What I am assuming is that, given a measurement that supposedly indicates "project health," a manager will try to act rationally on what the measurement tells him.  Unfortunately, the measurement defined by the original poster gives that manager low-quality information.  That will lead the manager to take actions that are wasteful and possibly destructive to the ends of the project.

It is the measurement that is the problem, not the manager's reaction to it.  See what I mean?

anon
Wednesday, September 17, 2003
