Fog Creek Software
Discussion Board




Code Refactoring: A Story About The Big Mess

I own a consulting firm, and about 90% of the work we do involves existing systems.  Very rarely do we start from scratch; more often the choice is to refactor existing code.  This is due, in part, to the reasons Joel has highlighted throughout his articles on this matter:

1. Time savings
2. Past bug fixes
3. Having a working version at any point in the process

Despite all of these good things, there invariably comes a time when one is presented with a project that is seemingly hopeless.  This happened to us about six months ago.  We were approached to do some work on a commercial product.  I was given a sample of their source code and a description of the problem they needed solved.
They asked what could be done.  I replied, 'Nothing'.

I had never seen such poorly written code, not even from newly minted college grads (hell, even first-year students!).  I told them that if they wanted the work done, we would start from scratch.

After some discussion, they accepted the proposal.  It was a unique opportunity, but it was going to be a lot of work.  Internally, the project was dubiously titled, 'The Big Mess'.

In this particular case we were faced with the following problems with the current solution:

1. Monolithic client application
2. Zero architecture
3. Extensive program logic in the database
4. Costly maintenance for existing customers
5. Extremely poor code base
6. Real-time, mission-critical app that failed weekly
7. A long list of documented bugs

I cannot accurately describe the poor quality of the solution.  It was started 5 years ago by a 'self-taught' programmer and handed down to a series of contractors thereafter, each of whom seemed to have less experience and knowledge than the last.

On top of all this, the client required:

1. A thin client interface
2. Distributed application data
3. Data collection via the internet
4. Web-based reporting
5. A rewrite of the major algorithms the solution was based upon (due to market changes)
6. Integration with existing client systems
7. Real-time control systems

After 6 months of development, 'The Big Mess' has turned out to be a fine piece of software, despite all of the agony of starting over.  In retrospect, I would still make the same decision.

I related this story because I feel it raises the following question:  If refactoring is OFTEN the best solution, when is it acceptable to start over?

Any and all comments are certainly welcome!

C. R. Millen
Monday, March 04, 2002

Although I usually agree with Joel, this is one issue on which I think more conditions are required to qualify advice like 'never rewrite'.

Refactoring is definitely the way to go when you already have many customers, limited time, and a relatively good codebase to start from. By "relatively good", I mean that there are not many [known] errors in architecture or impedance mismatches, and that there is some way to verify that the refactoring does indeed work: an automated build, automated test cases, coverage tools, performance metrics, etc.
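
To make that concrete: the cheapest safety net I know of is a set of characterization tests that pin down what the existing code does today, before anyone touches it. A minimal sketch in Python (legacy_pricing and calculate_total are made-up stand-ins for whatever you are about to refactor):

import unittest

# Characterization tests: record the *current* behavior of the legacy code,
# warts and all, so any refactoring that changes an output fails immediately.
# The module and function imported here are hypothetical placeholders.
from legacy_pricing import calculate_total


class CharacterizationTests(unittest.TestCase):

    def test_typical_order(self):
        # Expected value captured from the existing system; it is the oracle,
        # whether or not it matches what the spec says.
        self.assertAlmostEqual(calculate_total(quantity=3, unit_price=9.99),
                               29.97, places=2)

    def test_zero_quantity(self):
        # Odd edge-case behavior is recorded as-is, not "corrected".
        self.assertAlmostEqual(calculate_total(quantity=0, unit_price=9.99),
                               0.0, places=2)


if __name__ == "__main__":
    unittest.main()

Get these passing against the old code first, then keep them passing through every refactoring step.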

In more than one project I was involved in, refactoring was simply not an option. If, for example, the communication protocols in a distributed system are poorly designed, too much work is often needed to set things right; it often cannot be done in a backward-compatible way.

I often preach: "design and code based on the assumption that exactly one year from now, everything will be deleted and rewritten from scratch". Not because I think that it should be done, but because planning for a complete rewrite focuses development so that it can be rewritten or refactored much better; it encourages data-centric and protocol-centric designs, which are much easier to refactor than object-oriented designs, in my experience. (They're also easier to write and maintain, IMHO, but that's for a different thread.)
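
To give a rough idea of what I mean by data-centric (every name below is invented for the example): define the records once, and keep the processing as small, replaceable functions over those records, so any stage can be thrown away and rewritten without disturbing the data definitions. A sketch in Python:

# A rough sketch of a data-centric style (hypothetical names throughout).
# The data definition is the stable part; the processing functions are
# deliberately small and replaceable.
from dataclasses import dataclass


@dataclass
class Reading:
    """A single sensor reading: the long-lived, well-defined data."""
    sensor_id: str
    timestamp: float   # seconds since the epoch
    value: float


def validate(reading: Reading) -> bool:
    # Replaceable policy: reject obviously bad readings.
    return reading.value >= 0.0


def to_wire(reading: Reading) -> str:
    # Replaceable protocol: a simple line-based encoding.
    return f"{reading.sensor_id},{reading.timestamp},{reading.value}"


def process(readings: list[Reading]) -> list[str]:
    # The whole pipeline is just functions over data; any step can be
    # rewritten later without changing the Reading definition.
    return [to_wire(r) for r in readings if validate(r)]

The Reading definition is the part worth keeping stable; the functions around it are expendable by design, which is exactly what makes a later rewrite cheap.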

It all boils down to semantics: consistent, well-defined operation semantics lend themselves to easy refactoring and rewriting; inconsistent, ill-defined operations don't. When the semantics are not well defined, a rewrite is often the better route, assuming, of course, that you set all the semantics straight during the rewrite. Do note, however, that it is inherently impossible to do a 100% backward-compatible, consistent rewrite of an inconsistent project.
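
As a made-up illustration of what I mean by inconsistent semantics: suppose two code paths both claim to remove a user, but disagree about what happens when the user does not exist (the names below are invented):

# Two hypothetical code paths that perform the "same" operation with
# different semantics for the missing-user case.

users = {"alice": 1, "bob": 2}


def remove_user_quietly(name: str) -> None:
    # Path 1: a missing user is silently ignored.
    users.pop(name, None)


def remove_user_strictly(name: str) -> None:
    # Path 2: a missing user raises KeyError.
    del users[name]

A consistent rewrite has to pick one of these behaviors, and whichever it picks will break callers that relied on the other; that is the impossibility I mean.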

My 2 cents: never start a rewrite without a contingency plan; e.g., keep supporting (refactoring and updating) the existing version until the rewrite is complete. If you can't afford to do both versions at the same time, you might be in trouble: choose either to keep supporting (and attempt refactoring, which may not always be feasible) or to do a complete rewrite, but take into account that at least one of those choices is probably going to kill the project.

Ori Berger
Monday, March 04, 2002

Of course there are situations when it is best to start over.  Just look at the three major advantages of refactoring you mentioned: time savings, past bug fixes, and having a working version at any time. So ask yourself: do I really save time by refactoring what is already there? (Very often you do, but when you are dealing with a really poor code base, rewriting from scratch might actually be faster, because you do not spend weeks understanding how the current solution works, and why on earth it works at all.) Do I profit from old bug fixes, or did they just document the bugs, not fix them, and tell the user to work around them? And, if things turn out as bad as you described in your example, is this thing really to be considered a working version?

You might (hopefully seldom, though) find projects that are not worth the effort of refactoring, especially when you inherit something that was done by someone years ago and has not been maintained since. Maybe the requirements have changed a lot, too.

But give the old code a fair chance before you decide to dump it. Be honest with yourself here. More often than not, I look at older code written by someone else (monolithic C functions, mostly) and come close to deciding that it is ugly as hell and therefore probably total crap; then, when I look into it with a little more patience, I find many elegant solutions buried in there that I can reuse. It is easy to pat yourself on the back and say: "I can do better than that." It is harder to admit: "Well, it might not be nice and it surely would not be my way, but it is a perfectly acceptable solution to the problem."

I think the main point of Joel's article was not "always refactor, never throw old code away", but rather that far too often, valuable resources, in the form of old code, are wasted because someone thought "I can do better than that" too easily.

Have fun,

Jutta Jordans
Monday, March 04, 2002

> (Very often you do, but when you are dealing with a really poor code base, rewriting from scratch might actually be faster, because you do not spend weeks understanding how the current solution works, and why on earth it works at all.)

I can find it very tempting to rewrite instead of refactor, if the existing bad code does *not* work at all.

Christopher Wells
Monday, March 04, 2002

>I can find it very tempting to rewrite instead of refactor, if the existing bad code does *not* work at all.

In that case I would not call it "refactoring" anyway, but "repairing" :-)  That is what I meant by not having a working version.

Jutta Jordans
Monday, March 04, 2002

In my experience, everyone dislikes other people's code. Your brain just works differently from other people's, and on the surface their stuff is always going to look bad. Think about the last time you were in the passenger seat and the driver turned down a side street and you thought "oh my god, this person doesn't know what she's doing", and it turned out to be a great shortcut.

I think this is another case for clear documentation and clear communication. Perhaps when you're writing code you should comment with the knowledge that years from now someone is going to look at your sloppy miserable code and want to scrap it because they don't understand it.

Mark W
Monday, March 04, 2002

>Perhaps when you're writing code you should comment with the
>knowledge that years from now someone is going to look at your sloppy
>miserable code and want to scrap it because they don't understand it.

LOL, and it might be yourself.  I've had that happen when I've gone back to code I wrote 10 years ago.

Of course, some of the code I wrote 10 years ago WAS sloppy miserable code :)

Mike G.
Monday, March 04, 2002

There are a lot of hidden assumptions in Joel's article.

1) The code operates on a hostile platform.
This is true whenever you write code for PCs, which Joel has specialized in throughout his career.  Code becomes ugly because the platforms are hostile, and the code must survive in them.

If your software is mainly a server app (which means you control the machine) and you have a "thin client interface," then you don't have too many problems here.

2) You are not writing a "Real-time mission critical app."
Could you imagine Microsoft writing the software for your heart pump?  It would run Windows CE and you could surf the web on it, but it's just not trustworthy.

Here, correctness is more important than relatively minor financial considerations.

3) The crufty code works reasonably well, sorta.
The code's value is in its acquired wisdom.  It learned about the odd bug with server 5 after someone spilled Coke in it.

If you frankly think that the old code has no lessons to impart, or it speaks in riddles like the Oracle at Delphi, then what can you do?

Anyway, I think those are the main assumptions.

Sammy
Monday, March 04, 2002

Hmm, I think it is also important that Joel deals in "speculative software" rather than consulting.  Maybe there is a tendency towards one-size-fits-all advice as well. ;-)  The topologies are different.

In fact, the article on "not-invented-here" springs to mind.  Slantwise contradictions?
[ http://www.joelonsoftware.com/articles/fog0000000007.html ]

Keith Reiter
Monday, March 04, 2002

I think it all boils down to control.  Joel is saying stay in control of your code.

If you are relying on other people's code, you lose control.  If the third party lets you down, then all your good code can go to waste.

If you scrap all your existing, working code and start from scratch, you lose control.  For the duration of the rewrite you are unable to respond to external forces, like Netscape being unable to respond to Microsoft.

I think the thing is to stay in control of your code and be in a position to respond to the unexpected, as it says in the Fire and Motion article: http://www.joelonsoftware.com/articles/fog0000000339.html

Ged Byrne
Monday, March 04, 2002


While there are certainly cases where a clean-sheet rewrite is called for, I still think such cases are rare.  That alone is a radical idea to many developers.  In 17 years of working professionally, on dozens of projects, I've only come across one situation where a clean-sheet rewrite was really called for.  It, too, was a mission-critical (indeed, life-critical) system, which was written and hacked on by a series of people.  It crashed every few days, requiring a 24x7 on-call operation to support it.  It took weeks just to wade through enough of the system to understand that the problems were structural rather than just bad code (although there was plenty of bad code, too).

James Montebello
Monday, March 04, 2002
