Hard Data
I want to point out that doing real experiments on productivity is not as impossible as Joel makes it out to be.
You don't need clones of programmers, operating in tightly controlled environments, building the exact same software.
But you do need many teams, and a quantitative measure of productivity.
Setting up a scientific experiment is straightforward. One half of the teams you randomly designate as the control group and leave alone. The other half is the experiment group, for which you change one variable (e.g. putting everyone in private offices with doors that close). Then you sit back and watch the teams in both groups.
To form your conclusions, you don't base your results on individual teams, but on aggregate properties of the control group and the experiment group. That way variables unrelated to your experiment get "averaged out."
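Here's a rough Python sketch of the shape of that analysis; the team names and productivity numbers are made up for illustration, and in a real experiment the scores would come from whatever quantitative measure you settled on:

    import random
    import statistics

    # Hypothetical per-team productivity scores (made-up numbers standing in
    # for whatever quantitative measure of productivity you actually use).
    teams = {"team%02d" % i: random.gauss(100, 15) for i in range(1, 21)}

    # Randomly split the teams in half: control group vs. experiment group.
    names = list(teams)
    random.shuffle(names)
    half = len(names) // 2
    control, experiment = names[:half], names[half:]

    # Base conclusions on aggregate properties of each group, not on
    # individual teams, so unrelated team-to-team differences average out.
    control_mean = statistics.mean(teams[n] for n in control)
    experiment_mean = statistics.mean(teams[n] for n in experiment)
    print("control mean: %.1f  experiment mean: %.1f"
          % (control_mean, experiment_mean))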
JD
Saturday, November 15, 2003
I'm not convinced. More and more I'm coming to think that "scientific" experiments -- namely, anything involving a study group and a control group -- are bound to produce misleading results if (a) they involve people and (b) the thing that separates the two groups is not subject to absolute control (e.g., a drug that can be given or withheld). Even if you tell people not to do something -- perhaps especially then -- they'll figure out ways to sneak the forbidden technique into their work.
Besides, this approach has some ethical problems depending on how it's applied. For relatively inconsequential things like productivity studies, at best you're still asking 50% of your subjects to do the Wrong Thing, and probably wasting money and goodwill. But in more vital areas like, say, studying the effectiveness of a particular collaboration technique for social service workers, you're taking half of a population who are eligible for services (which they probably desperately need) and then denying them those very services. It's like rescuing every other passenger on the Titanic so you can see how long it takes the rest of them to freeze to death. (=
Sam Livingston-Gray
Saturday, November 15, 2003
Dear JD,
Your proposed experiment will have little or no value. Your sample group is way too small. It is possible that one of the groups is more productive than the other anyway. You need to get rid of the variables, and there are many more than you think.
Stephen Jones
Saturday, November 15, 2003
Whose was the experiment that showed that if you increase the amount of light in a factory, the workers become more productive?
OTOH they showed that if you _reduce_ the amount of light in a factory, workers became more productive.
In the final analysis they decided that simply being studied made the workers more productive.
<g> makes it _really_ hard to separate the effects of the particular thing being studied from the effects of the study itself when people are involved.
FullNameRequired
Saturday, November 15, 2003
I believe that's called the Heisenberg (?) principle?
Geoff Bennett
Saturday, November 15, 2003
FullNameRequired, I guess if that's the case, maybe it wouldn't be such a bad idea to conduct useless experiments, eh? :-D
Vince
Saturday, November 15, 2003
"FullNameRequired, I guess if thats the case, maybe it wouldn't be such a bad idea to conduct useless experiments eh? :-D"
<g> I've often wondered about that. I mean, it sounds to me as if all the workers want is attention....
FullNameRequired
Saturday, November 15, 2003
--whose was the experiment that showed that if you increase the ....
I *think* it was Frederick Taylor. Search for "Taylorism". The subject is "scientific management".
Justin
Saturday, November 15, 2003
The "father of management" is always considered to be a French guy.
Stephen Jones
Saturday, November 15, 2003
> I believe that's called the Heisenberg (?) principle?
No, it's called the Hawthorne Effect:
"Initial improvement in a process of production caused by the obtrusive observation of that process. The effect was first noticed in the Hawthorne plant of Western Electric. Production increased not as a consequence of actual changes in working conditions introduced by the plant's management but because management demonstrated interest in such improvements"
Portabella
Saturday, November 15, 2003
The Hawthorne effect basically states that people are charmed, or motivated short-term, by change. It perks them up.
More relevant, perhaps, to measuring productivity is Gilb's Law, which states:
"Anything you need to quantify can be measured in some way that is superior to not measuring it at all."
Basically, accepting that you can't measure software dev productivity is unwise; it just perpetuates the problem. You can measure it. It's not perfect by any means, but it will almost certainly tell you something useful.
Andrew Cherry
Saturday, November 15, 2003
I am now beginning a study of JOS readers to see what is needed to make their posts more informative and insightful. :)
sgf
Saturday, November 15, 2003
Sam,
You make some interesting points. (a) I don't see why involving people invalidates the results. Could you elaborate? (b) It is OK if the experimental variable is not subject to absolute control, though it is important that you can measure it. For example, if you give everyone private offices and they move most of their work out into common areas anyway, that certainly indicates something interesting.
I should be clearer that for the control group you do *nothing* except monitor them. You don't take away their private offices; you let them do whatever they normally do. This rules out some experiments. For example, if the status quo is to give programmers private offices, then running an experiment where you give everyone private offices will not change anything, so you won't get any interesting results. But this isn't really a problem: you could still run the experiment where you take away people's private offices, and base your conclusions on that.
As for the Hawthorne Effect, remember that you are monitoring both the experiment group *and* the control group. So the effect should be the same on both, and its influence on your results will be minimal.
JD
Saturday, November 15, 2003
JD, it's just so hard to quantify "productivity" when it comes to software. You'd really have to test the two groups against themselves: give both groups an assignment and see how long it takes in normal working conditions, then take group B, put them in offices, and measure their "increase" in productivity against themselves. The problem with this is, what if on the new feature or application Bob from group B recognizes that he did this back at company ACME, and so he can do it in a couple of hours? Maybe Steve just got a new girlfriend, and he's leaving at 6:00 instead of staying till 10:00 every night. I think there are too many variables to be able to control.
Vince
Saturday, November 15, 2003
I'd think any manager that has to ask "what can we do to make workers more productive" has a long list of options available before they start conducting tests. The problem is that those options are, from a management perspective, generally distasteful.
For example, there are reams of scientific evidence that short naps after lunch increase afternoon productivity and morale. It also simply makes sense. Think any of these "how do we increase productivity" type managers would even THINK of going that route? No way.
Business casual, flex working hours, real leadership - all things that can make people happier at their jobs and therefore more productive. But since they require more care to monitor, it ain't gonna happen.
"How can I increase productivity" is pretty much like "how can I lose weight" - they don't want to hear "eat less and exercise" - in both cases the asker is looking for a "quick fix" that won't require actual work.
Philo
Phi1o
Saturday, November 15, 2003
Vince,
I think you've hit upon the crux of the problem. It seems very hard to quantify productivity, except in very artificial situations. I don't have any answers here. But that doesn't mean there aren't any. I don't have any familiarity with the software engineering research that has been going on in industry and academia for decades.
I should clarify another point too. It does not matter if Bob from group B recognizes that he's done something before and does it in a couple of hours. The point of using a large group is that not everyone in group B will have seen the problem before. And even if some have, the members of the groups are selected randomly from the subjects of the experiment, so group A will most likely have a few such people too. Your results will still be meaningful. That's what I meant by "averaged out."
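If you want to convince yourself of that, here's a toy Python simulation; the 5-out-of-50 split between experienced and new programmers is invented purely for illustration. Over many random assignments the experienced people end up divided about evenly between the two groups:

    import random

    # Toy check of the "averaged out" claim: mark a few programmers as having
    # seen the problem before (like Bob), assign everyone to two groups at
    # random, and track the average difference between the groups.
    trials = 10000
    people = ["experienced"] * 5 + ["new"] * 45  # invented proportions
    total_diff = 0
    for _ in range(trials):
        random.shuffle(people)
        group_a, group_b = people[:25], people[25:]
        total_diff += group_a.count("experienced") - group_b.count("experienced")
    print("average difference in experienced members: %.3f"
          % (total_diff / trials))

The printed difference comes out close to zero, which is all "averaged out" means here.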
Phi1o,
You are right: it is not the place of managers to be running experiments on their employees, just as it is not the place of practicing doctors to test out new drugs on their patients. Scientific studies of programmer productivity are a separate activity from software development.
JD
Saturday, November 15, 2003
OK, I see what you mean by averaging out. That can certainly account for some discrepancies. What I meant by non-quantifiable, though, is: how do you measure whether group B (even if it's the average of 5 group Bs) is more productive? Did they finish faster? Is their code cleaner? Is it more optimized? By the same token, it's difficult to get a couple of groups with the same skill level. People have all kinds of strengths and weaknesses. I really think the only place this could be done is perhaps in a university setting, where there are tons of students going through the same exercises. There may be enough of them that the occasional experienced guy who can do it 100x faster than everyone else gets averaged out.
Vince
Saturday, November 15, 2003
Wow JD - you *completely* missed the point.
Philo
Philo.
Saturday, November 15, 2003
It just doesn't make sense at all.
yoyomama
Sunday, November 16, 2003
Another thing that hasn't been brought up: most software teams aren't large enough for the "average" developer to matter. It could be that my team works really well in one big room, even though 60% of software teams perform slightly better in offices. I know guys who don't listen to music, others who blast techno, others who blast classical.
My point is, unless you're managing hundreds of programmers, it's probably better to just give each individual programmer what he wants, within reason. If a team of 5 guys wants to share an office, let them. If Joe the loner wants a private office so he can blast AC/DC, let him. I imagine you'd end up pissing more people off trying to implement some general "improving productivity" order.
Vince
Sunday, November 16, 2003