A method of coding
1. (Re)Write code to do something
2. (Re)Read code to ensure it's sane
3. Compile/run code
4. Observe if code actually does what you want
5. Repeat step 1 until code works and all code is written
How many people use this method?
What method do you use?
Ted E. Bear
Sunday, November 2, 2003
Step 0: Write unit test(s)
Step 4: Check if test(s) pass
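Something like this, assuming Python and a made-up slugify() function (the names are just for illustration):

import unittest

# Step 0: write the test before the code it describes exists.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_dashes(self):
        self.assertEqual(slugify("hello world"), "hello-world")

    def test_lowercases_input(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Steps 1-3: now write just enough code to make the tests pass.
def slugify(text):
    return text.lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()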
Rhys Keepence
Sunday, November 2, 2003
I always liked:
1. Get idea.
2. Start coding.
3. Magic happens here.
4. Go live.
5. Move.
Jack of all
Sunday, November 2, 2003
1. Get a clear statement of what the next piece of code you'll work on is going to do.
2. Sketch the code in comments, paying most attention to the logic of the code.
3. Stub out the code and start filling in as necessary.
4. With each addition, write the test for that addition.
5. Compile/test until the addition works as expected.
6. Move on to next piece of code.
Not exactly unit testing, but taking small steps and ensuring it works at every stage has led to some pretty reliable code for me.
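For illustration, steps 2 through 5 might look roughly like this in Python (the function and its behaviour are invented, not from my actual code):

import unittest

# Step 2: sketch the logic in comments first.
# - strip whitespace from each field
# - drop fields that are empty after stripping
# - return the cleaned list

# Step 3: stub it out and fill in as necessary.
def clean_fields(fields):
    cleaned = []
    for field in fields:
        field = field.strip()
        if field:                  # drop empty fields
            cleaned.append(field)
    return cleaned

# Step 4: with each addition, write the test for that addition.
class TestCleanFields(unittest.TestCase):
    def test_strips_whitespace(self):
        self.assertEqual(clean_fields([" a ", "b"]), ["a", "b"])

    def test_drops_empty_fields(self):
        self.assertEqual(clean_fields(["a", "  ", ""]), ["a"])

# Step 5: compile/test until it works as expected.
if __name__ == "__main__":
    unittest.main()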
Justin Johnson
Sunday, November 2, 2003
There's a lot to do before step 1: specs, architecture, detailed design.
I often skip step 2; if I wrote it, and it compiles, then it must be sane. :-p
Sometimes, step 2 happens all at once at the end (a code review before it's handed over to QA).
Sometimes I can compile it as I write it, but can't run it until the end. For example, I'm currently writing a process which implements a protocol to a remote component; until what I've written is completed, it doesn't implement the (entire) protocol and therefore there's no sense in trying to run it.
anon
Sunday, November 2, 2003
Get a set of requirements together, and some use cases.
Design an architecture that can handle what I need, plus an extra 20-30% variability where I haven't foreseen the need.
Implement a usable subset of the architecture, enough to start testing feasibility and build the rest off of it. Debug the crap out of this subset. Write harsh unit tests for this. Document this part really, really well.
Go back to the requirements and use cases, see if the architecture really could pull off what you need it to do. Adjust as necessary.
Write the rest of the app on top of the architecture. Debug each part. Each new part will probably help debug every prior one. Adjust the architecture as necessary, leaving the parts with the most dependencies alone as much as possible. Most of the workings for these parts should be pretty straightforward once the architecture's known. Write docs on what isn't.
Write a user manual.
H. Lally Singh
Sunday, November 2, 2003
Whatever happened to
Step 1: Think up a brilliant idea.
Step 3: Profit!
Monday, November 3, 2003
"until what I've written is completed, it doesn't implement the (entire) protocol and therefore there's no sense in trying to run it."
You can always run and test the subset. It has to do something, right?
Thomas Eyde
Monday, November 3, 2003
... reminds me of a project manager I had once. We were committed to delivering on a certain date, but I knew the software was full of bugs.
His advice was "just install the software and get the hell out of there!".
Ben
Monday, November 3, 2003
Dear H. Lally Singh:
You sound like an overeducated student. Go write some real code.
Puff Ball
Monday, November 3, 2003
Dear Puff Ball:
You sound like an undereducated code monkey. Go get an education.
Leonardo Herrera
Monday, November 3, 2003
Dear Leonardo Herrera:
I have 12 PhDs myself, but the purely rotten spew coming from his mouth is making me sick. The steps presented are for writing code, not analyzing things. Analysis of a problem is a given, if you're not stupid, that is.
Puff Ball
Monday, November 3, 2003
Dear Puff Ball and Leonardo Herrera:
Why don't you guys stop fighting and just kiss and make up instead?
Do it for the children.
The Peacemaker
Monday, November 3, 2003
1) Write code
2) Compile same
3) Fix bugs as users find them
4) ?????
5) Profit!!!!
Hell, it works for Microsoft.
Snotnose
Monday, November 3, 2003
Snotnose, your logic is impeccable. Well played, old man, well played!
Booger
Tuesday, November 4, 2003
> You can always run and test the subset. It has to do something, right?
I can but don't want to. Usually, when adding to existing software, I do incremental development with incremental testing; in this case I decided against it and am doing "big bang" integration testing after coding is finished. I'm writing a protocol converter that acts as a bridge between two protocols. If I wanted to test support for one protocol before the other was finished, I'd need to write application-layer logic to drive the protocol (instead of the first protocol being driven by the second). Sometimes it's annoying to run a test and find that it doesn't work, only to think "well, of course it doesn't work: it hasn't been written/completed yet". Earlier testing might be a good 'risk mitigation' strategy, helping to demonstrate proof-of-concept or something, but in this case ... maybe I should have said "won't" rather than "can't" run it till the end.
Christopher Wells
Wednesday, November 5, 2003
1. Devise the scheme of classes.
2. Write the code, gradually discovering just how many methods are needed to get to the 'private' variables.
3. It compiles. At this time you realize n out of m classes weren't needed.
4. "Better luck next time."
Alex
Wednesday, November 5, 2003