Fog Creek Software
Discussion Board

Code Reviews (Methodologies, sort of)

If I could see a show of hands, how many of you have participated in a formal code review, either of someone else's code or your own?

I don't mean an interface review, doc review, or functional review - that would be more like a specification review.  I mean having yours, or someone else's, code taken apart by private method and state and formally reviewed?

Personally, I never have.  I'm reading Alistair Cockburn's "Agile Development", p. 144, "embellishment".  Very nice, a topic noted as 'uncovering "should"'.  He even makes the statement: "Personally, I tend to embellish around tests and design reviews".

XP/pair programming prides itself on the fact that every line written is peer-reviewed code - so that would count as code review.

So...  are code reviews effective?  Necessary?  Since I'm camping out on Joel's site, I'll note that code reviews do not show up on "The Joel Test: 12 Steps to Better Code".  No item for code review there...

So hey - for the sake of argument, try this on for size: "there is no need for a code review unless some functional unit is chronically broken.  If a function is chronically broken, then there is a significant likelihood that you might also have a 'broken developer' in your midst."

As for interface design reviews, those are a constant "conversation" between client and object.  I like what Martin Fowler has to say best regarding this:  "Publish only if you have to, and as late as possible".

Nat Ersoz
Monday, May 20, 2002

I've done code reviews a lot. There are various stages where code review is done. You are right when you said "code is broken, do a review" - that is what happens most of the time.
My strategy is that when people are new, every piece of code they write is reviewed by me as the leader. I look at every aspect of the coding, like
- function length, naming conventions, assertion checks, and some other things. Once people get the idea, the code reviews become less and less frequent. And like you said, when something gets broken, a code review gets done while fixing it. But by then it's too late and nothing worthwhile is accomplished in terms of code quality. We have our coding standards in place, which people ignore when they want to get things done fast.
The best way is to start at the beginning. My team is small and I can do it. For big teams this strategy may not work.

Monday, May 20, 2002

Code reviews are very common in Open Source projects, because the quality of supplied code can vary so much. Nothing gets into the Linux kernel without going through Linus, while more sanely run projects (apache, the BSDs) have a relatively small number of "committers" who have CVS access, and won't add any code they don't know well enough to trust.

The Mozilla project takes this to wildly bureaucratic extremes and requires all patches to be reviewed twice - once by the module owners of whatever parts of the codebase the patch affects, plus an additional super-review by one of a core of "strong hackers" who perform another check, and also look at things like coding style, possible cross-platform issues, the use of interfaces, and so on.

I can imagine that a similar process would be necessary on other projects the size of Mozilla, Open Source or not. Having never worked on a project that large, I wouldn't know. It's a value proposition - at what point does the effort of performing code reviews become less than the effort required to fix the bugs they don't catch? Obviously, Mozilla passed that point.

When you have millions of lines of code, you _really_ need to spot where someone's not checking whether an argument is null EARLY, rather than waiting six months for another totally unrelated checkin to trigger the bug.

Charles Miller
Monday, May 20, 2002

Er, I _should_ have said: "At what point does the effort of performing code reviews become less than that needed to fix the bugs a code review would catch?"

Charles Miller
Monday, May 20, 2002

Code reviews are good at catching bugs where the developer thought none existed.

Johnny Simmson
Monday, May 20, 2002

As someone said above, when we get a "new guy", the person is typically assigned a maintenance project.

After check-in, a Senior Developer typically looks through the code for exceptions, potential looping problems, naming conventions (lack or misuse of Hungarian notation), dead variables, etc.  This continues until the new coder "gets" our style.

When I've seen this done, the learning curve is shortened.  When it isn't, we ship bugs.

But that's just my experience.  As for multiple-people formal code reviews, we've done one or two, but they didn't seem to really shake out the bugs.


Matt H.
Monday, May 20, 2002

I agree with Matt... multiple-person formal code reviews don't seem to shake out the bugs.  A one-on-one review with someone who actually knows how to code is usually best for that purpose.

I think it's due to the "groupthink" aspect when more people are involved: it's just easier to convince more people that something is correct, and the subtle aspects get overlooked.  Groups seem to focus on "style" and conformance to established standards rather than on what the code is doing.

One minor code problem was of the type with a condition like "if a and b or c".  The developer easily convinced the group it would evaluate as "a and (b or c)", which of course it didn't, but which would have been correct in terms of functionality.  When it executed as "(a and b) or c", and c was true while a was null, it happily wrote over code memory, eventually causing a problem somewhere else, far away from this statement.

IMO, code reviews with a group are more ceremony than practical value.

Joe AA.
Monday, May 20, 2002

I was at a client creating a web application, where our team of about 30 developers consisted of 8 smaller teams, one of which I led. After our code for the app was finished, we froze the code and it was deployed to the QA environment. I then had to attend a code review with the client's top architect and one of their lead developers. I was asked to print out all of our modules' code onto printed pages. We then sat for hours going over each piece with a fine-tooth comb.

This did absolutely no good in my eyes; the best contribution came in the form of code optimization/refactoring and error checking that we may have missed. My objection was that I had to print out about 250 pages of code for each person at the review, and also that it came at the very end of the release. It did not offer enough value because we were all in "look blankly at paper" mode, and we still had just as many bugs as the other releases where we had no review.

The method for app development there consisted of big-ass design up front; each team would code a section of the app with no unit testing, then integrate in QA, then come back and fix all the bugs. It worked, but I don't think it was as efficient as having ongoing code reviews by senior members during production, plus constant integration.

I had a better experience on another project where the code review was done by pairing developers. When the code was checked into CVS, the lead developer/architect would do his own review, address anything he saw that was incorrect, and then come and explain to us why it could have been done better and how to address it in the future. Sometimes that conversation would be: "you can do x better if you use y"... sometimes it was: "(scribbling on whiteboard) I don't ever want to see this $^#%! again!". It was a stricter method, but we released with very few bugs, and the proper techniques were propagated to the less experienced developers immediately instead of in hindsight.

Ian Stallings
Monday, May 20, 2002

I have never done "formal" code reviews.  Instead we do "informal" code reviews.  The format is generally 1 to 2 hours, and we try to cover roughly 200 lines of code from 3 or 4 developers.  We try to do this every two weeks.

This seems to work pretty well.  We cover significantly more code than we would in a formal code review.  We still don't keep up with the amount of code we generate, but we cover a reasonable fraction of it.

Everyone's code gets covered on a reasonable cycle (about every 6 weeks at my current company--every 2 or 4 weeks at my previous company).

We cover a variety of topics in code reviews.  For example, we found memory problems (leaks and double deletions), style problems (using more than 80 columns makes code hard to print), and generally discuss code and design.

We don't read through every line of code during the actual review time.  Instead, we go around the room and have each developer bring up any issues they found when reading the code.

Scott MacHaffie
Monday, May 20, 2002

"Instead, we go around the room and have each developer bring up any issues they found when reading the code"

I like this... 

Nat Ersoz
Monday, May 20, 2002

I have participated in informal code reviews for a long time (about 20 years), and formal code reviews for a short period during that time. In my experience, the formality didn't add much, except for causing otherwise-reluctant people to get started.

In general, I love code reviews. In addition to finding bugs, they also frequently find better ways to do things. They're a great means of education:  either the reviewer learns something new from what the author wrote, or the author learns new techniques from the reviewer.  Frequently both.

But the most important benefit of code reviews is also the most overlooked: almost everyone writes code differently when they know that someone else is going to read it. So the first 75% of the benefit of a code review is simply the knowledge that there will be one. All else is gravy.

Given this, the answer to when you stop doing code reviews is easy:  never.

Jim Lyon
Monday, May 20, 2002

I've been involved with a couple of different kinds of code reviews, which enjoyed varying levels of success:

1. The first code review I ever participated in was when I was leaving my first "real" job. I had at that point been designing, implementing, and  enhancing a fairly extensive API for an embedded software project for about two years, essentially all by myself.

The purpose of the review was to transfer as much information as possible to the person who'd be taking over the API (the programmer who was responsible for the client applications, as it turned out).

We spent the better part of a week going over internal implementation details, areas where I knew the software was lacking robustness, critical performance bottlenecks, etc.

From what I heard afterwards, that code review was the only thing that kept that guy sane in the coming months. By the end of the process, he knew almost as much about the API as I did. He took copious notes, which served him well later.

2. At my next company, they generally didn't do code reviews, except during code freeze periods prior to release.

The way the process was supposed to work was: When you fixed a bug, you got your fix reviewed by somebody that understood that part of the code well, then you had to defend the change in front of a Change Control Committee (CCC).

The members of the CCC (including representatives of QA, Engineering, and Tech Support) would evaluate the "risk" of the proposed change, and either allow or deny the change (denied changes usually went into the next version).

The effectiveness of this process was wildly variable. Oftentimes the most appropriate person to review a change wasn't available to review it, or they were under too much pressure to fix their own bugs to devote much effort to reviewing somebody else's code. The CCC was often more of a rubber-stamp affair than an honest attempt to evaluate the risk of each change.

I think one of the big problems with this method is that it added extra work to the development process right when the developers were feeling the most pressed for time.

3. After that, I went to a small startup that was developing a new embedded microprocessor, as well as their own set of software development tools (compiler, debugger, and IDE).

I was a member of the QA team for the debugger. As part of the usual development process, we had weekly code review meetings.

The way these meetings worked was that each week, someone would bring in some code they'd written (usually to implement a new feature), and three or four other people would read through the code and ask questions and make comments.

Yes, this was powerfully boring. And it got really quiet in the room sometimes. But when a question was asked about one part of the code, often that would lead to discussions about other aspects of the code under review.

One thing that always amazed me was the number of really stupid little things that we caught in these reviews. A typical exchange might go something like this:

Me: What happens if this call on line 217 fails?
Developer #1: No problem, it returns zero in that case, so we just skip over this loop without doing anything...
Developer #2: Actually, that function returns -1 in the case of failure.
Developer #1: Hmmm. I guess I should check for that.
And just like that, we've eliminated an infinite loop in the code. Don't even get me started on the excessive use of "unsigned" types in C...

One of the nice things about having several people in the room at once is that knowledge about the "right" way to do things diffused through the group naturally.

Actually, one of the unexpected (to me, anyway) benefits of doing this kind of code review is that all the comments in the code were accurate, which saved a lot of time for people trying to figure out things later.

Well, that was longer than I intended, but hopefully there's some useful info in there.


Mark Bessey
Monday, May 20, 2002

Code reviews are essential, in my experience.  They provide accountability, make the project more transparent, encourage uniform coding style, disseminate good programming practices, and last but not least, they find more bugs per hour than black box testing could ever hope to find.

Timing is an important element of having code reviews, however.  Just having code reviews may not be enough.  Some over-zealous groups will want to review early code which is still in a state of flux; other groups will do "code reviews" at the very end of the project, when there is absolutely no chance to actually act on any of the insights the review process brings.  Both are useless ceremonies and will do more harm (to employee morale) than good.

Our process went somewhat along the lines of this:

1. The developer completes a major feature add, is confident with her work, and gives the OK for the review.

2. Three or four of her peers take her diffs, print them out on paper, and spend about an hour or two of "prep time" per 400 semicolon-lines of code.

3. In the logging meetings, everyone gets together and goes line-by-line through the code, itemizing issues as they come up.  Each issue mentioned is written down or stored in a database.  It is important that debate be minimized during this meeting, as it is rarely fruitful and just bogs the meeting down.  Each meeting should last no longer than two hours and cover about 400 SLOCs.

4. The developer goes back and investigates/addresses each issue raised in the meeting.

5. A follow-up meeting is optional (to ensure coverage of the issues raised in the meeting, or to verify certain issues were handled correctly).

6. The developer's branch is merged into the main trunk.

Bugfixes and other maintenance changes need not go through this process, but each change would still be reviewed and approved by at least one other developer prior to merging.  The upshot of this process is that every line of code has been seen and understood by at least two people prior to committing to the tree.

Monday, May 20, 2002

I found code reviews very helpful in establishing a common coding style in a team of developers (and by style I do not so much mean naming conventions, indentation and other rather cosmetic stuff, but decisions like "What do we make const?", "How do we handle errors?" and the like).

I am really pro code reviews, and I try to make them part of our development routine by convincing my colleagues of their usefulness, but there is great reluctance in some of them. Their major argument is normally about time: "This has to be finished yesterday, so don't bother me with something like a code review."  But I think this is not really the point.

Actually, I think it is very difficult to handle a code review in a way that does not offend the person who has written the code. After the first reviews of my code, even though they were done by a colleague who acted as my tutor at that time and tried to tread lightly, I was close to tears. It just does not feel too good to be told what you do wrong; I tend to be very emotional about my work. I know that I learn a lot from code reviews, still I find it hard to accept the criticism involved.

I did pair programming, too, on a smaller project, and that was even better from the learning side and also felt better emotionally, especially when the driver changed often, because it becomes more of a give and take.

Have fun,

Jutta Jordans
Tuesday, May 21, 2002

One idea that my wife told me has helped a lot in code reviews: don't bother with more than two or three things.

You should always point out all of the bugs so that they can be fixed.  But for stylistic things, most people can only handle about two or three things.  Just pick the top couple and stop there.

Scott MacHaffie
Tuesday, May 21, 2002

jutta - i suppose to an extent you are right, it is important that code reviews do not get personal. to be honest though, if by doing that people go easy on their colleagues, that a) misses some of the point of reviewing, and b) misses out on some of the fun.

reading code is a boring job, and unless there is some pay-off it is likely to be something people don't want to spend time on. as long as there are no managers present, reviews can be good fun if done in a spirit of (light-hearted) competition between developers. as long as the people involved are playing the same game this can actually be quite a lot of fun :-) introduce someone, like a manager, who might want to use code review results for some other purpose, and they become stressful, and things to avoid.

Tuesday, May 21, 2002

At my last job, I really liked the system that we had in place:

A developer would be assigned a trouble ticket to fix or a new feature to implement.  The developer would come up with a fix.

Before checking in the changes, the developer would put together a Cutover Sheet, which contained a brief summary of the change made or the feature implemented, with a printout of the affected code attached.  If only a few lines were changed, a printout of just the surrounding few lines of code was sufficient, and diffs were acceptable as well.  This was meant to be a lean and mean form.

The developer had to show the Cutover Sheet to a colleague, who would review the code him/herself.  That colleague would have to sign off on the Cutover Sheet.

The Cutover Sheet then went to the developer's manager for a final review before the code was checked-in.  Completed Cutover Sheets were kept in a drawer for a few months so that we could track down culprits. ;-)

This sounds time-consuming, but in practice, it only took about 15 minutes to fill out a Cutover Sheet and print the code, and the colleague and manager would only need to spend 5-10 minutes each looking over the code.  The code would be reviewed by two people -- one with little "bias" and one that was very familiar with the problem that had to be solved, capturing the advantages of each.

We caught a lot of bugs and inefficient code this way.  We also each learned a lot more about the system, because we were often being introduced to alien pieces of the system through other developers' Cutover Sheets.

Both at that job and in my current job, we did a few formal code reviews, in which a group of us all studied a printout of code, one page at a time.  I found that, while we often didn't catch many bugs in the meeting, the knowledge of a pending code review would push us to clean up our code more than we would have if our code had stayed unseen.

Brent P. Newhall
Tuesday, May 21, 2002

Personally, all of my experiences with code reviews have been bad.

Pair programming, much better. But code reviews always seem to turn into some sort of geek grandstanding competition. Whether the code works or not is insignificant compared to whether the lines are indented properly and whether the naming conventions agree with what the least literate person on the team deems "suitable".

Add to that the lack of social skills of most software developers and it's all completely annoying. Having people hand you huge wads of paper with wiggly lines next to things and "this sucks"-type comments is not at all good for my morale and doesn't endear me to the writers.

At no point were the contributions at all constructive. It was like being back at school: "this is wrong, this is wrong, oh and this is just hopeless. Redo it." No one ever wanted to know the reasoning behind algorithm choices; they just wanted to demonstrate their alpha-maleness.

Code reviews end up confrontational and are disempowering to those developers who are completely competent but just aren't up to partaking in yelling matches. It depends on whether you value competent developers or just the ones whose skills lie in "winning" confrontations.

Katie Lucas
Tuesday, May 21, 2002

I live in a different space-time continuum from Katie.  My experience with code reviews, though limited, has generally been good.

The reviews I have participated in involve a couple of steps.  First, the reviewers review the code.  Second, there is a meeting to discuss findings and decide on changes.

In my current project there is a web based commenting system, sort of a discussion board, that allows for on line comments and discussion between the developer and reviewers.  There is still a meeting for making final decisions.

A few years ago a project I worked on tried code reviews where the reviewers got together and went over the code line by line in the meeting.  It didn't work and we gave that up rather quickly.

I find that informal self reviews can be useful.  I print out the source code and read through it.  If I am looking at it for overall understanding, I see problems that I missed when I was at the terminal trying to decide what the next line would be.

Tuesday, May 21, 2002

Katie:  Out of curiosity, did your code reviews include a strong moderator, one who pushed the review along when it was stagnating?

It sounds like those code reviews in which you were involved fell victim to bad personalities.  I wouldn't want my code reviewed by developers who grandstand either, unless there was a good, strong moderator to direct the proceedings when people started nitpicking about indentation.

Personally, I don't mind nitpicks, as long as they're parenthetical and extremely brief.

Brent P. Newhall
Wednesday, May 22, 2002

I have to say that I can relate to Katie's writings - not with respect to code reviews, but to the architecture "discussions".

Often, it is hard to authoritatively say "this is best because..." and come away with an analytical, objective reason why A is better than B.

For me, simpler is always better.  It also agrees with the XP notion that I only implement features when required.  I don't code future features today (this reduces software cost and investment, nicely spelled out in XP Explained).

Others, however, see extensibility as the most important architecture decision. This rarely leads to minimalist implementations, let alone testability.

Which way do you go?  It most often comes down to "guru" opinion, group think, or corporate doctrine.

Nat Ersoz
Wednesday, May 22, 2002

I have found code reviews to be both worthwhile and useless.
They are useless (and aggravating) when they are conducted in organizations that do not know how to hold *any* kind of meeting.
They must have a stated intention and goal which everyone agrees is attainable in the context of a (two-hour?) meeting, and the moderator must be *strong* and committed to keeping egos outside the door.
There *are* ways of doing this. Generally, if more than a couple of people in an organization complain about the efficiency of reviews, it's time to get some training, or perhaps new moderators.
When done correctly, I have found them cost efficient. I.e., they produce more value, in a variety of ways (including mentoring), than they cost.
I miss them - they don't believe in such things where I work now. (At least they're frank about the non-commitment to quality.)

Wednesday, May 22, 2002


My experience has been like yours. You have identified the problem correctly too I think -- it occurs when testosterone kicks in and the reviewers are under age 26.

Here's a typical exchange,

ChimpBoy: "This code of yours to exchange two variables is incorrect:

temp = a;
a = b;
b = temp;"

Me: "What's wrong with it?"

ChimpBoy: "You should have used the more efficient and standard idiom:

a ^= b ^= a ^= b;

This is better because it does not wastefully use a temporary variable and is thus much faster and more efficient. It is also easier to understand because it uses two variable names instead of three. Recall the Rule of Seven."

Me: "Yes, I have seen the XOR exchanger in the Obfuscated C Contest but I don't agree with you that it is faster and I certainly don't agree with you that it is easier to understand or more maintainable."

ChimpBoy: "Well that's where you are wrong and I won't have you arguing with me -- I just want you to change it. You think you know everything but you don't."

Me: "Let's profile each one and see which is faster."

ChimpBoy: "I won't tolerate your backtalking to me. Besides, profilers can lie."

Me: "I can produce the compiled assembly if you like and tabulate from the CPU user manual for you the number of clock cycles for each method."

ChimpBoy: "Just change it and stop being a smart ass."

Me: "Yes Sir!"

X. J. Scott
Wednesday, May 22, 2002

X.J. Scott:  It sounds like you've been dealing with bad co-workers, which doesn't necessarily reflect on the code review process itself.  Any process can be rendered worthless by sufficiently annoying people.

Brent P. Newhall
Thursday, May 23, 2002

All of the previous comments are correct - some of the time...

I advocate code reviews, but they have to be led by experienced people, and you have to read the code ahead of time, noting where problems are or could eventually be. Yes, they take a long time, but the benefits outweigh the drawbacks.

I truly feel for the girl who had to put up with the jackass who advocated that obfuscated bullshit. I would fire him.

As for hurt feelings - well, if you are wrong you have to admit it, and if you are not as experienced, then listen and learn. Don't take correction as a personal insult unless it continues year after year - in that case, I suggest a new career! In my experience, 95% of software developers are pretty much crap, and the top 2% are at least 10 times as productive as the bottom 98%.

Let's face facts, though. If we actually listed the time it takes to implement a quality process based on code reviews, few projects would get approved! So in reality, crappy code gets shipped, just like crappy cars etc.

Peter McKinnon
Friday, June 7, 2002

I thought cars were crappy so that people have to upgrade them every few years. Planned obsolescence.

I was going to make some sarcastic joke about the same thing being true of software, but I thought about it and I realised that the analogy just doesn't hold. Software doesn't "decay" (something Joel has commented on before). But the environment it exists within changes rapidly. The exact opposite of vehicles, where their environment remains pretty much the same, but they do slowly wear down.

What this has to tell us about software development, I'm not sure. But I don't think the reasoning behind shipping bad cars is anything like the reasoning behind shipping bad software.


Adrian Gilby
Friday, June 7, 2002

The point I intended to make in reference to shipping crappy cars is that businesses exist to make money - not perfect products. Therefore, quality often takes a back seat in the interest of making a buck. The short-term interests of whoever is in charge (e.g. marketing and sales executives) trump the needs of perfectionists. Make the quarterly sales target and get the bonus or promotion, cash in the stock options, and move on to the next company. What do they really care?

Ok, so car production maybe isn't a perfect analogy, but hey - I didn't take the time to review it...

As for the notion that software doesn't decay - well I'll have to assume that this is a semantics-based argument. Having said that I'll have to review what Joel said.

Peter McKinnon
Friday, June 7, 2002
