Fog Creek Software
Discussion Board

How do I break the procedural habit?


I have been programming professionally for about 15 years and have mainly been using 4GLs: Clipper, FoxPro and Xbase++.

I am now using C# but I still feel like I am coding apps with a procedural mindset.

It is made more difficult by the fact that I am the only programmer at the company, therefore I have no one to bounce ideas off.

I understand objects in terms of how they work, but I wouldn't necessarily recognise when to create my own.

It is made more difficult by the fact that most books on the subject use abstract examples rather than a proper application.

Am I overcomplicating the use of objects, or are they mainly used for custom controls rather than the business rules of an application?

I am plagued by continuous thoughts that I am missing something and that I could be coding things in the correct way and with less effort.

Have any of you been in the same situation and how did you resolve it?

Mike grace
Wednesday, February 18, 2004

I always start out procedurally, then if I notice a trend in passing certain variables to certain functions I'll refactor that set into a class.  Anywhere where I would've used a struct in C becomes a class too. 
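Dan's refactor can be sketched like this (in Python for brevity rather than C#; the shape carries straight over, and `Rectangle` is an invented example):

```python
# Before: procedural style -- the same (width, height) pair is threaded
# through every function, a hint that they belong together.
def area(width, height):
    return width * height

def perimeter(width, height):
    return 2 * (width + height)

# After: the recurring parameters become fields of a class, and the
# functions that kept receiving them become methods.
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

    def perimeter(self):
        return 2 * (self.width + self.height)
```

The signatures shrink because the shared state now lives in one place.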

I know I'll take some heat for this, but I wouldn't really worry about it.  Procedural programming still makes a lot of sense.  Sometimes people tend to over-architect things.

Dan Brown
Wednesday, February 18, 2004

For the most part, a group of similar functions would warrant class membership if they represent something "animate" enough that stands for itself. For instance, with a Connection (or call it whatever) object, you might have to handle chores such as:

(1) Connecting to a data source
(2) Retrieving resultsets
(3) Executing Queries
(4) Reflecting the state of a connection at any point in time etc.
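As a rough sketch of how those four chores map onto one class (Python; the "data source" here is just an in-memory dict, and all names are invented):

```python
class Connection:
    """Hypothetical connection object over a dict standing in for a database."""

    def __init__(self, source):
        self.source = source
        self.connected = False

    def connect(self):                      # (1) connect to the data source
        self.connected = True

    def execute(self, table, predicate):    # (3) execute a query...
        if not self.connected:
            raise RuntimeError("not connected")
        # (2) ...and retrieve a resultset
        return [row for row in self.source[table] if predicate(row)]

    @property
    def state(self):                        # (4) reflect the connection state
        return "open" if self.connected else "closed"
```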

I make objects only if they're "animate" enough. For instance, when creating winsock applications, I use classes to represent messages that I need to send and receive. There's a message that I can see as animate matter, so a message object I can justify, and also its anatomy: a message header, a message body, which in turn may have more objects.

You'll have an object wherever you can have a data structure stand out for itself. For instance, if you were building an application to monitor an electronic switch board or some other gadget, you'd first picture in your mind its anatomy and the relevance of the details of its peripherals in your scheme of coding. If you have to control the tiniest of its apparatus, you'd have a Switch object which contains a Board object, which in turn contains a Lines (collection) object, a bundle of Line objects. Lines might have types (input or output, RX or TX) and so on.
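That anatomy translates almost mechanically into composition (a Python sketch; the class names follow the post, everything else is invented):

```python
class Line:
    def __init__(self, kind):
        self.kind = kind            # e.g. "RX" or "TX"

class Board:
    def __init__(self, lines):
        self.lines = list(lines)    # the Lines collection: a bundle of Line objects

class Switch:
    def __init__(self, board):
        self.board = board          # a Switch contains a Board

    def count(self, kind):
        # behaviour lives next to the structure it operates on
        return sum(1 for line in self.board.lines if line.kind == kind)
```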

On the other hand, if some of the "animate" entities you are dealing with in a project are not that relevant as data structures in your application, it might just not make sense to objectize (nonce word) them. Which is why I understand your quandary when you see books having a "Customer" object with a method like "Save", and a convenient call like:

Customer.Save()

and volia! the customer records are saved into the database. Bullshit!

I'd end it by saying, you'd be helping yourself by "thinking" clearly. Try to think harder. When it doesn't come to you, stop for some time so you can go back to thinking later. Then pen down the nouns, maybe in paragraph form, and choose those of the nouns that are relevant and have major data responsibilities/roles in the project.

Sathyaish Chakravarthy
Wednesday, February 18, 2004

I have been working on it for the past 8 years... :)

As someone who has over-architected things on (a lot) more than one occasion, I second Dan's advice.

However, if you have the time for it, choose one of the C# apps you have finished, take it home, and work on it as if you had all the time in the world. Create classes to your heart's content, according to what you think is correct.

Then, when you get an assignment for a new version, see how your "home" version can cope with it - after you've finished the project at work, of course :)

"Suravye ninto manshima taishite (Peace favor your sword)" (Shienaran salute)
"Life is a dream from which we all must wake before we can dream again" (Amys, Aiel Wise One)

Paulo Caetano
Wednesday, February 18, 2004

When in doubt, think in terms of Windows data structures. A window is an object, a window's device context may be thought of as another object, and then try to wrap some Windows functions into the objects you classify. For instance, open the Win32 reference by category and you'll see sections such as Pen, Color, Window, Window Properties, Graphics etc. All of those categories could be objectized, or even ramified into narrower categories of objects. Try doing this with your application (...and you'll end up creating another build of MFC, joke).

If you are writing a graphics app, try thinking of those glyphs or whatever you are drawing as objects; lines, circles, ellipses, parabolas, hyperbolas, warts and all. Think of their properties, and the boundary lines are clear. They are all distinct "objects" with properties such as their coordinates, radii and such. If you can represent them as data structures, you can objectize them, adding the functions that work on them into the class.
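For one of those glyphs, the data structure plus its functions might look like this (a minimal Python sketch; `Circle` and its methods are invented for illustration):

```python
import math

class Circle:
    def __init__(self, x, y, radius):
        # the data structure: coordinates and a radius
        self.x, self.y, self.radius = x, y, radius

    def area(self):
        # ...plus the functions that work on it, folded into the class
        return math.pi * self.radius ** 2

    def contains(self, px, py):
        # hit-testing: is the point inside this glyph?
        return (px - self.x) ** 2 + (py - self.y) ** 2 <= self.radius ** 2
```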

Sathyaish Chakravarthy
Wednesday, February 18, 2004

I reiterate, the key is in orientation of thought. Try "thinking" about the components, or rather, the "things" that your application needs, and then what "relationships" they have and how they interact with each other.

Another example: one horizontal object I described above, the Connection object, a benign sobriquet for a data source, can be reused in many of your applications. Assume you were to create a scheduler application that lets the user set reminders (like in cell phones). When the user has to be reminded, your application makes its reminders pop up from the systray, like the MSN or Yahoo reminders.

You could drive brute force into coding this little app, but if you poised your mind, you'd unearth an elegant approach to solving this problem. You'd think of:

(1) What if a user set alarms for two reminders at the same time? How would two reminder windows share the screen real estate?

(2) What if some alerts scheduled for different times were to be repeated at a common time later because the user chose the "Remind Me Later in __ days/hours/weeks" option?

And the list could go on. Thinking on these issues might set you on a different train of thought, the object oriented train. You'll realize there needs to be some object whose prime responsibility will be informing your application about the availability of the screen real estate. Share the responsibilities and you'll land up with a handful of animate elves in your palms, and you'll need them to be objectized, so they can carry out the show.

For this same example, you'll need:

(1) Your good old connection object to read from the database for schedules. This object will also filter the reminders that are due to be alerted. It might then pass them to your application. Period. Limited responsibility, you see!
(2) A timer or some timer object to work in conjunction with your connection object.
(3) A Queue (data structure extended as class) to push the reminders due into, there may be more than one. They'll have to wait.
(4) A Form Designer object that is responsible for picking the FIFO alert out of the queue and creating a form. Mind you, only creating the form and not displaying it.
(5) A Controller object to watch over the screen real estate, when the current window spun out of the Form Designer mill comes back from the trip, the controller destroys him and summons the Form Designer for fresh cakes.
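A minimal sketch of that division of labour, covering (3), (4) and (5) (Python; all names are invented and the "forms" are plain strings standing in for windows):

```python
from collections import deque

class ReminderQueue:                 # (3) due reminders wait here, first in first out
    def __init__(self):
        self._q = deque()
    def push(self, reminder):
        self._q.append(reminder)
    def pop(self):
        return self._q.popleft()
    def __len__(self):
        return len(self._q)

class FormDesigner:                  # (4) builds a form -- it never displays one
    def build(self, reminder):
        return f"[form: {reminder}]"

class Controller:                    # (5) owns the screen; shows one form at a time
    def __init__(self, queue, designer):
        self.queue, self.designer = queue, designer
        self.current = None
    def tick(self):
        # summon the Form Designer only when the screen is free
        if self.current is None and len(self.queue):
            self.current = self.designer.build(self.queue.pop())
        return self.current
    def dismiss(self):
        # the current window comes back from its trip; destroy it
        self.current = None
```

Each object has one limited responsibility, so the collaboration, not any single class, carries the show.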


Sathyaish Chakravarthy
Wednesday, February 18, 2004

Don't sweat it.  OOP is overrated, overhyped, overdone, but sadly not over.

Wednesday, February 18, 2004

The dread associated with OOP may be noticeably trite and overdone, but OOP as such is still useful and makes a good deal of sense. Probably the OOP trepidation emanates from the overdose of preaching that surrounds it, making it seem like esoteric knowledge. More than physical knowledge of arcane class modifiers and access specifiers, I hold it is an orientation of thought.

Sathyaish Chakravarthy
Wednesday, February 18, 2004

To say OOP is overrated is sour grapes.  Don't believe such unqualified jeering.  Most top programmers will assure you that OOP is a very good approach, perhaps the best we now have.  The only dissent you'll hear at the top is from Lisp folks, who also have a really powerful approach.

Of course you can get caught up in the wrong choice of objects, etc.  As suggested above, having a Customer save itself is a poor design choice, and you'll see such things in many lower quality books.  (Let's remember again that with so many titles for sale, most programming books are garbage.)  That's primarily an issue of good taste.  You must develop such taste, it doesn't appear without serious effort and learning.  But hey, that's why programming is lucrative; not everyone can do it well and even the really smart people have to work to become good.

Alas, I have no shortcuts to offer, but I have two observations that might help.

(1) Be very suspicious of yourself whenever you choose to write what in C# will be called a "static method".  In time they will be an important color on your palette, but they're also the biggest crutch to procedural thinkers.  Like some drugs and simple carbohydrates, the surest route to responsible use is to quit cold turkey, then after some time to reintroduce them carefully and in moderation.

(2) Buy and read Martin Fowler's Refactoring.  That book will help greatly.
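To make the static-method point in (1) concrete, here is the kind of transformation meant (a Python sketch; `calculate_tax` and `Order` are invented examples, and the idea maps directly to C#):

```python
# Procedural habit: a static-style helper that is handed every piece of
# state it needs, call after call.
def calculate_tax(amount, rate, exempt):
    return 0.0 if exempt else amount * rate

# OO habit: the data owns the behaviour; callers stop threading state around.
class Order:
    def __init__(self, amount, rate, exempt=False):
        self.amount, self.rate, self.exempt = amount, rate, exempt

    def tax(self):
        return 0.0 if self.exempt else self.amount * self.rate
```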

Wednesday, February 18, 2004

You don't need to break the procedural habit -- not everything has to be an object.

One of the key things to understand about programming is that you should try and make the representation of concepts in your code fit as naturally to the actual concept as you can. Keep the representation as simple as makes sense, but no simpler (to paraphrase Einstein).

OO makes a great deal of sense when doing UI controls. It makes less sense representing an algorithm -- that's what functions/methods/procedures are for.

Read "Design Patterns" by Gamma, Helm, Johnson and Vlissides. Then immediately re-read it -- it didn't make sense to me the first time, and I know several others who agree. Then don't get carried away on the OO magic carpet -- it's a powerful tool in your toolbox, but it's not the only one.

C Rose
Wednesday, February 18, 2004

OOP is not over-rated.  For certain types of projects (a large percentage BTW) it is the best way to design and program.  A lot of people do not understand it.  To make themselves feel better about their intellectual shortcoming they claim it is useless and all those who use it are just deluding themselves.  I assure you this is not the case.

OOD just happens to be one of those things that most people do poorly.  If you can learn to do it well, you will be rewarded.

My advice to you is read books on OO.  Read examples of good OO code.  I learned a great deal about it by reading the available source code for the Java libraries (I don't know if the same is available or as good for C#).  There is no substitute for looking at the work of better programmers and, though I await the inevitable Java bashing, the Java libraries (though far from perfect) have excellent examples of good design.

I also recommend that you read the Design Patterns book and try to see the patterns in the Java (or C#) code.

Finally, there is no substitute for writing your own code.  Try to incorporate some of the basic design patterns in your project (where appropriate obviously, don't force it) and ideally make contact with a more experienced OO designer/coder to go over your code with (good luck on that last one).

name withheld out of cowardice
Wednesday, February 18, 2004

"...and volia! the customer..."

My doom is slated tonight! I must practice the art of chatting whence alone shall I master the science of typewriting. Yahoo...!


Sathyaish Chakravarthy
Wednesday, February 18, 2004

From somebody: "To make themselves feel better about their intellectual shortcoming they claim it is useless and all those who use it are just deluding themselves.  I assure you this is not the case."


Wednesday, February 18, 2004

Sathyaish, you okay there, buddy?

Wednesday, February 18, 2004

Remember the Object Police won't jump into your cube and force you to show them your source code with a gun to your head.

Try joining a .NET user's group in your area. Make friends. Have discussions. Bring your problem to them in a humble way and they will most likely have fun coming up with solutions. You should get one solution for each person you show this to. If anything, you will get some ideas you may not have thought of for constructing an object paradigm.

OOP needs mentoring!

Wednesday, February 18, 2004

1. If you have a bunch of functions sharing data (global variables, module level variables, or some variable (in particular "current state" type variables) being passed around as a parameter to numerous functions), here's a hint: they probably ought to be in a class.

2. If you have some really long function, think about how you'd best refactor it, and if it comes out as multiple functions sharing data - go to step 1.

3. Function pointers? That is usually begging for a class. Consider qsort in C (just as an example).  It takes a function pointer to a comparison function.

Now think, if it took a pointer to an object derived from some common ABC.

Now you have a "comparing class"

But qsort also uses other operations on the data, aside from the sort algorithm itself, e.g. swapping two elements.

Change "comparing class" to "sort manipulating class", and the swap method could be in there.

And so on.

You have just generalized qsort to work with any data type

4. A lot of OOP books concentrate on "data" type objects in their explanations, e.g. horrible shapes examples proliferate.  Sometimes an object is associated with a particular activity, e.g. the example in step 3, a file parsing object, a data sorting object, etc.
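The qsort generalization in step 3 can be sketched like this (in Python rather than C for brevity; "SortManipulator" is a made-up name for the "sort manipulating class", and the sort itself is a plain selection sort):

```python
class SortManipulator:
    """Base class bundling compare and swap, as in step 3."""
    def __init__(self, data):
        self.data = data
    def less(self, i, j):
        return self.data[i] < self.data[j]
    def swap(self, i, j):
        self.data[i], self.data[j] = self.data[j], self.data[i]

def generic_sort(m):
    # The algorithm only talks to the manipulator object, so it works
    # for any data the manipulator understands -- no function pointers.
    n = len(m.data)
    for i in range(n):
        low = i
        for j in range(i + 1, n):
            if m.less(j, low):
                low = j
        m.swap(i, low)

class Descending(SortManipulator):
    # Derive to change the policy without touching the algorithm.
    def less(self, i, j):
        return self.data[i] > self.data[j]
```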

S. Tanna
Wednesday, February 18, 2004

Thanks a lot guys. I feel like a great weight has been taken off my shoulders which will probably result in a new lease of life and mindset.

Working alone, it does always seem like everyone else knows much more than me as if it is a big secret.

I am going to print this thread out and stick it up on the wall to keep referring to, and also buy the books recommended and do all the learning I can.

It is especially good to hear that a good app is a mixture of both styles and that I suppose the bottom line is to create programs that are easily maintainable and do the intended job.

I am probably not as bad as I first thought.

I also take the point of trying to find a .NET group or at least some other programmers to talk to. None of my friends nor anyone else in the office is a programmer, so at times it is a fairly solitary existence.

Mike grace
Wednesday, February 18, 2004

Oops! (No pun intended) :-). That last post makes me sound like an overenthusiastic newbie.

Mike grace
Wednesday, February 18, 2004

Electroshock may work.

If not, you might try responsibility-driven design. A way of expressing such a design is OO. Responsibilities automatically subsume a lot of the various principles in one easier idea.

son of parnas
Wednesday, February 18, 2004

Do what I usually do when facing a new language or concept:

Rip off other people's work.  Tweak it to fit what you need.

Then rip off that to tweak for the new stuff.

Sooner or later, you will be capable of thinking in the new concept enough to start from scratch.

Wednesday, February 18, 2004

Is OO overrated? I like what Paul Graham has to say in The Hundred-Year Language:

Somehow the idea of reusability got attached to object-oriented programming in the 1980s, and no amount of evidence to the contrary seems to be able to shake it free. But although some object-oriented software is reusable, what makes it reusable is its bottom-upness, not its object-orientedness. Consider libraries: they're reusable because they're language, whether they're written in an object-oriented style or not.

I don't predict the demise of object-oriented programming, by the way. Though I don't think it has much to offer good programmers, except in certain specialized domains, it is irresistible to large organizations. Object-oriented programming offers a sustainable way to write spaghetti code. It lets you accrete programs as a series of patches. Large organizations always tend to develop software this way, and I expect this to be as true in a hundred years as it is today.

Ken Dyck
Wednesday, February 18, 2004

"Somehow the idea of reusability got attached to object-oriented programming in the 1980s, and no amount of evidence to the contrary seems to be able to shake it free."

The original tenets of OOP were advertised for reusability.  Think of inheritance; it's all about taking an existing component and subclassing it to produce something new -- thus reusing the original code.

Of course, in practice that really doesn't work unless the original component was designed to be reused.  In which case, it's no different from building a library (which is also designed to be reused).

The main advantage of OOP is that it makes things even more modular/componentized than procedural programming.  If I'm building a program in OOP style and it connects to a database, then I might build a Connection object, a Resultset object, and a Query object.  Now all of a sudden, pretty much without trying, I have something that's fairly easily reusable.

Almost Anonymous
Wednesday, February 18, 2004

Yet Common Lisp supports OOP.

Wednesday, February 18, 2004

I'd like to second the recommendation for Design Patterns. Read this book - you'll get a lot of good ideas on how objects interact beyond the "put all the common data into a class" idea.

Polymorphism is the most important part of OO programming; it's what really distinguishes it from other approaches in my mind. It's also, however, hard to get at first. Design Patterns has lots of ways to take advantage of this feature that are not obvious to most people (Visitor was a real surprise to me the first time I read it).

Definitely worth reading.

Chris Tavares
Wednesday, February 18, 2004


By an astonishing coincidence, I'm giving a presentation this evening... which, among other things, discusses ways to begin gently introducing OOP here and there into an otherwise procedural system, using an old project (in which switching from procedures to a different model using classes let me dump 80% of my code) as an example.

Email me and I'll send you a link once I post the slides.

Sam Livingston-Gray
Wednesday, February 18, 2004


In asserting that polymorphism is the most important part of OOD, I fear you will hurt the feelings of the other parts.  Seriously though, I have heard people argue quite passionately about the important aspects of OO, and some people come up with some weird ideas. One argued with me for a week that Perl was the most object oriented of languages because "original" OO didn't include inheritance of members (or methods, now I can't remember).

My point of view is that if there is a "most important part", it is that in modeling your software, objects should approximate the structure of your real world and business concept objects.  Does your application need a concept of patients?  Make a patient object.  Does a patient have one or more doctors?  Create an association.  Does a patient have a first name?  Yes; does any other participant in the system need a first name?  Yes, a doctor does.  What is the common superclass between patients and doctors?  They are both people.  Now, is patient or doctor a subclass of person?  Maybe, but can't a person be both a patient and a doctor?  Well, maybe this calls for composition rather than inheritance.  And so on.
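That composition-over-inheritance conclusion can be sketched like this (Python; the class names come from the example above, the details are invented):

```python
class Person:
    def __init__(self, first_name):
        self.first_name = first_name
        self.roles = {}                 # one person can hold several roles

class Patient:
    def __init__(self, person):
        self.person = person            # composition: a Patient *has* a Person
        self.doctors = []               # association: one or more doctors

class Doctor:
    def __init__(self, person):
        self.person = person

# One person can be both a patient and a doctor -- awkward to model with
# inheritance, natural with composition.
p = Person("Alice")
p.roles["patient"] = Patient(p)
p.roles["doctor"] = Doctor(p)
```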

I find that this recapitulation of the "real problem space" makes programming larger systems a breeze.  I do not find it, as one poster put it, a way to maintain a lot of spaghetti code.  In fact, if one does it correctly, it allows him to code some short-term spaghetti to solve an immediate problem, and later come back and refactor it nice and purty....

name withheld out of cowardice
Wednesday, February 18, 2004

Write a paragraph describing your product from the customer's perspective. Then underline the nouns and circle the verbs. The nouns are your objects and the verbs their methods. :-)

Wednesday, February 18, 2004

Properly engineering your code means reducing the dependencies and interactions in your design. You aim to be able to trivially prove to yourself that your program performs according to specifications, returning the desired results correctly, securely, with acceptable performance, and using an acceptable amount of resources such as memory.

To do this, your program needs to have the properties of Modularity, Encapsulation and Abstraction. Any real-world programming environment has a module system that supports these, and so you can achieve these results fine with procedural programming.

You do not need object programming to achieve these, and you should not feel guilty for not using objects.

In fact, what really separates object programming from other styles is Inheritance, which achieves the opposite effect: it allows you to produce lots of dependencies, then hides them from you, so you screw up your design. An example: if you have a bug in your base class, build an object hierarchy that depends on that bug, then realize you have to remove it, you are shafted.

In short, object oriented programming is poor engineering.

Polymorphism is not a characteristic of object programming, because it doesn't work properly in any object language (which is why they had to invent "Generics", which is less broken). It is a feature of functional languages such as Lisp, Miranda or OCaml. If you need it, do it properly.

Wednesday, February 18, 2004

A, that's an interesting view, but I disagree that object oriented programming is problematic.

I happen to share your disdain for inheritance and generally don't use it, but apart from that, my work is strongly object oriented, and it's also very good.

Me and the view out the window
Wednesday, February 18, 2004


I would agree that polymorphism and static typing (as done in C++, Java, et al.) don't work properly together, and yes, generics are in some ways a hack to get around this problem.

However, if you use a dynamically typed language (Smalltalk being the ultimate example, but Python works this way too) the problems just go away. Polymorphism just works.
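Since Python is named here, a tiny illustration of "polymorphism just works": no common base class, interface, or generics required, only a shared method name (the classes are invented for illustration).

```python
class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

def chorus(things):
    # No type declarations anywhere: anything with a speak() method qualifies.
    return [t.speak() for t in things]
```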

Chris Tavares
Wednesday, February 18, 2004

>I happen to share your disdain for inheritance

Yah, those type-safe protocols between objects sure suck. Wouldn't want those.

son of parnas
Wednesday, February 18, 2004

I'd recommend reading this book:

Design Patterns Explained

It's an entry-level object-oriented design patterns book. It will help you understand why and how you should be using objects.

I really enjoyed it. The author actually uses this book to teach classes in OO design. It's also a great stepping stone to the more advanced design patterns book,

Design Patterns, Elements of Reusable Object-Oriented Software

... Yes, I will receive a few pennies if you click those links and buy those books. If you want to see more ways you can line my pockets with spare change, go here: ... Has there been an edict established as to the posting of links such as these?

Michael Sica
Wednesday, February 18, 2004

But then, why do all the OO tools that I've tried force you to interleave your own code with the code generated by their wizards and the like?

To me, that's worse than anything you can imagine about programming concepts. And that's not even all. If you try to change something in their ugly code (like replacing blocks of code that are repeated for all the widgets with simple methods), these stupid wizards get confused and yell at you.

Wednesday, February 18, 2004

Wizards suck. They're typically misdesigned. The generated code should go into a base class, then the user derives from it to provide the behavior (the generation gap pattern).
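The generation gap pattern in miniature (a Python sketch; the class names and widget list are invented):

```python
# Generated file -- the wizard owns this class and may regenerate it freely.
class GeneratedForm:
    def __init__(self):
        self.widgets = ["ok_button", "cancel_button"]   # machine-written layout

    def on_ok(self):
        pass  # empty hook for hand-written behaviour

# Hand-written file -- regeneration never touches it, because the user's
# code lives in a derived class rather than interleaved with wizard output.
class MyForm(GeneratedForm):
    def on_ok(self):
        return "saved"
```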

I remember when VS.NET was in beta, and the Winforms team was asked why the designer code was put into the class the way it was. Their answer was: if we did it differently most of our users would be confused, since that's how the MFC wizards work.

Argh, gotta hate the chains of history!

Chris Tavares
Wednesday, February 18, 2004

Hmmm... A.  At least it looks like you've thought about this a bit more than some OO naysayers.  But...

Good programming does indeed involve reducing dependencies and interactions, but as the Einstein quote above counsels, not more than is prudent, or more to the point, not at the expense of all else.  Yet A, you weaken your own argument when you trot out a force, dependency reduction, that OO has more tools to help with.  Abstract types jump to mind.

Many non-OO programming systems do indeed provide mechanisms to support some amount of modularity, encapsulation, and abstraction.  But good OO language systems provide a richer palette for each of these than procedurally oriented systems.  Yes you can achieve these goals and more using a non-OO language, likewise using assembler, but you have to work harder to do so without the language support.  Good OO languages provide an unusually rich and cohesive set of mechanisms for all these goals.

Your OO critique seems to boil down to the fact that inheritance *allows* you to produce flavors of dependency that procedural languages don't, and that if you fail to pay attention to these dependencies, you invite pain.  Gross failure to pay attention will lead to no good no matter what your paradigm.  That the richer approach gives you more bullets to shoot yourself with, if you tend to shoot yourself, is not a particularly damning indictment.  Inheritance is a powerful tool to explicitly define similarity and differentiation, but like all tools you must understand how, when, and why to use it.

The postscript has too much hand-waving to address yet.  If you care to elaborate, we can discuss your polymorphism concerns.

The worst condemnation of OO I can imagine is that there exists no single source to learn it well.  Meyer's OOSC goes a long way, and yet too far in some directions.  Fowler's Refactoring has some surprising nuggets even in the introduction, and the wonderful quality that its advice can incrementally improve your skills, rather than requiring a full read or two before anything happens.  Design Patterns is excellent but, as said above, the payoff takes several readings, and it is best digested having the proper enzymes of a solid OO mindset.

Wednesday, February 18, 2004

I didn't much care for Shalloway's book, and rather distrusted his expertise.  I wish I could elaborate but it's been some time and I'm left with only the aftertaste.

The Design Patterns book needs no watering down.  It's very lucid if you have the prerequisite knowledge of OO.  Some of the patterns are subtle enough that a poor explanation could do enduring harm.  I recommend you learn OO well, then approach the Gang of Four's pattern catalog first from their own book: Design Patterns.  Shalloway did bring in some useful references to related work, so it might be worth a look *after* you grok the GoF patterns.

Wednesday, February 18, 2004

I've spent some time recently having to debug the code of someone who is no longer with our company.  The code is object oriented, and designed by someone who appears to have read Design Patterns, but probably not more than once, if you know what I mean.

I notice a real problem with OOP when it's written badly enough (and not even horribly badly, just badly).  Someone says "this report prints X when it should print Y".  I go look where the report was generated and see that X came out of some object's member variable.  Great, now I have to look for every piece of code that could have changed that member variable.  It is like a tiny little global that any object that is even married to a first cousin of my object could have changed.

OOP loves to disguise globals as members.  Beware, beware, beware of the "document" object if your system has one. It's definitely critical to keep objects small, and dependencies very low, in an object-oriented project (especially big ones).  Dependencies need not only be low, but also shallow--you have to worry about your parent objects' dependencies.

Well, I haven't programmed procedurally in a long time, so maybe the problems that OOP has are even worse procedurally, or just common to all large projects... Or maybe not.

Keith Wright
Wednesday, February 18, 2004

I have an uncle who still writes x86 assembler and laughs at the poor fools caught up in that silly "structured programming" fad.

My dad spent years writing Fortran, and when he finally got a C compiler, he wrote long programs entirely in main() with gotos for control flow. He, too, thinks structured programming is a passing fad.

They haven't mastered it, so they don't understand what's good about it. If I tell 'em they need to *understand* it before they can evaluate it, they say something like "yeah, nobody who's not a convert is allowed to offer an opinion, is that what you're saying? That's the way Stalin ran things!"

They're right, of course. That is *precisely* the way Stalin ran things.

Never mind OOP; procedural programming itself is an abusive, hierarchical, morally and philosophically bankrupt boondoggle. It forces code into arbitrary contortions which simply aren't necessary, so at some point you always end up using gotos anyway (or exceptions, the Politically Correct OOP method of control flow -- which turn out to be nothing but gotos with a lot of gratuitous runtime overhead).

When procedural programming is finally rooted out, OOP -- and all the bloated lampreys and pilot fish clinging to OOP's belly (GUIs, HTML, XML, TCP/IP, etc. ad nauseam) -- will naturally follow it into oblivion and we can go back to writing software for the benefit of the CUSTOMER, not our human resources departments.

The decentralized, Gandhian clarity and inherent social justice of assembly language will save us, when we finally have the humility to ask.

Wednesday, February 18, 2004

Regarding the observations of Keith Wright, I'd say you're on to something... Google "Law of Demeter".

On global data... good OO rather tends to not have it except for constants, static lookup tables, etc.  Makes you wonder whether that's why it's marked "static" in some OO languages, huh?

Wednesday, February 18, 2004

Object oriented languages are multipliers of the developer's skill, good or bad.  A good OO design brings more benefits than a good procedural design, while a bad OO design is more dangerous than a bad procedural design.

It's kind of like a chainsaw.  Somebody skilled with one can cut wood faster than if they had a regular saw; but an unskilled chainsaw wielder will cause destruction.

T. Norman
Wednesday, February 18, 2004

> I have an uncle who still writes x86 assembler and laughs at the poor fools caught up in that silly "structured programming" fad.

Somebody said you can't do structured programming in x86?

Assuming you're allowed to use a macro assembler, I don't think that is remotely correct.

OO in x86?

Not entirely sure, but it seems plausible with a macro assembler, a fairly loose definition of OO, and a programmer with too much time on their hands looking for a challenge.

S. Tanna
Wednesday, February 18, 2004


T. Norman, that was one of the best quotes I've ever seen on this message board.  I'll keep that in mind.  I agree 100%.

Wednesday, February 18, 2004

"""I go look where the report was generated and see that X came out of some objects member variable.  Great, now I have to look for every piece of code that could have changed that member variable"""

That sounds rather like data structures masquerading as objects, possibly mixed with excessive mutability.  There should be only *one* place that member could've been changed, and it should be in the object's class.  Granted, that might be in response to calling some method on the object, but then at least you could search for calls to that method.

Anyway, the problem you're describing is a failure to do proper abstraction (or follow the DRY principle); the OO aspect of it is irrelevant.  If this were procedural code, it'd be just as broken.

Perhaps you meant to say that "people with poor abstraction skills who use OO languages often disguise globals as member variables"?
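For what it's worth, the fix looks something like this minimal sketch (the class and names are hypothetical, not from the report code being complained about): the member is private and changes in exactly one place, so "who changed it?" becomes a single search for calls to that method.

```java
// Hypothetical report line item.  The total is private and is only ever
// mutated inside addCharge(), so tracking down a bad value means searching
// for calls to addCharge() -- one method, not every file in the project.
class LineItem {
    private double total;   // never touched outside this class

    void addCharge(double amount) {
        if (amount < 0) {
            throw new IllegalArgumentException("charge must be non-negative");
        }
        total += amount;    // the one and only mutation site
    }

    double getTotal() {
        return total;
    }
}
```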

Phillip J. Eby
Wednesday, February 18, 2004

You people are frightening me.

This gigantic OOP monster who's running around and doing terrible things to innocent source code sounds incredibly nasty. I'm worried it might hurt me while it's forcing me to write unstructured code and use global variables. (I should go back to C where global variables can only be accessed from obvious locations and not from anywhere in the program.)

I've never actually seen the OOP creature, though. I wonder if it's disguising itself as an actual programmer who just isn't very good?

I'm going to hide my head under the blankets with you now. That'll make me feel safe.

The Real Blank
Wednesday, February 18, 2004

Interesting thread.

I only recently rediscovered programming, and for all intents and purposes I've only ever programmed in OO.

One thing I've noticed though, whenever I 'hack' together a concept, it always starts procedurally, and then gets refactored into Objects as time goes on.  Refactoring tools make it relatively painless to logically move code around, and from the methods Objects begin to emerge relatively quickly.  Code that doesn't belong in an object sticks out like a sore thumb when you've got short, concise methods.

Of course in some cases I've had to scrap the whole design up front as fundamentally flawed, and regretted not doing more up front design.  But I can count on one hand how many times that's happened, and it's usually obvious to me within 2-3 days of coding when that happens.

Wednesday, February 18, 2004


The key point is when it is written badly, and it sounds as if that programmer did indeed write badly.  I don't see why this same thing couldn't happen with procedural programming.  In fact it seems to me it could be even harder to track down.  It's been years since I messed with C or Pascal, so maybe I'm wrong.

Properly done, OO should prevent this sort of thing.

BTW, I think another good aspect of OO for large projects is that it can make it easier for good and less good programmers to work together. 

name withheld out of cowardice
Wednesday, February 18, 2004

Software development is complex, and always will be. Ultimately, new programming languages and methodologies are just attempts to better manage the complexity. You can encapsulate it, sub-divide it, and hide it, but the complexity is still there. Leaky abstractions galore.

15 years from now, when a build of the Microsoft OS is 500 million LOC instead of 50 million, I wonder how we'll manage the complexity? C# and OOP? Oh boy, can't wait for that.

Wednesday, February 18, 2004

The Real Blank, apparently the only thing separating programmers from excellent code is the OOP monster. Nice example of cryptozoology.

son of parnas
Wednesday, February 18, 2004

After way too many years of OO, I have come to the conclusion that much of the benefit of OO is that you know where to put stuff.

Consider the example of

Hospital - Patients - Patient

In OO, the stuff that applies to an individual patient goes in the Patient class. The stuff that applies to a bunch of patients goes in the Patients class, possibly calling individual Patient objects, and so on.

I do not think there is such a clear distinction in structured programming, as it ain't a big deal to have a single function which iterates all the patients and does something to each of them.

The same logic could also be applied in case of inheritance

Knowing where to put stuff is much under-rated in terms of code readability (you know where to look for stuff too, and can be sure it's all there), in terms of avoiding duplicated code (which leads to bugs and various forms of waste), and in terms of avoiding spaghettification over time as a program is updated by multiple people over multiple releases.

And yes I realize there are other benefits from OO

But don't underrate "knowing where to put stuff"
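To make the Hospital example concrete, here's a minimal Java sketch (the field and method names are just illustrative): the per-patient stuff lives in Patient, the bunch-of-patients stuff lives in Patients, and the collection delegates down to individual Patient objects rather than reaching into their data.

```java
import java.util.ArrayList;
import java.util.List;

// Stuff that applies to one patient goes here.
class Patient {
    private final String name;
    private final double feesOwed;

    Patient(String name, double feesOwed) {
        this.name = name;
        this.feesOwed = feesOwed;
    }

    String getName()     { return name; }
    double getFeesOwed() { return feesOwed; }
}

// Stuff that applies to a bunch of patients goes here,
// calling individual Patient objects to do the per-patient work.
class Patients {
    private final List<Patient> all = new ArrayList<>();

    void admit(Patient p) { all.add(p); }

    double totalFeesOwed() {
        double sum = 0;
        for (Patient p : all) {
            sum += p.getFeesOwed();   // per-patient logic stays in Patient
        }
        return sum;
    }
}
```

In a procedural program the iteration and the per-patient logic tend to end up in one function; here the language makes the split obvious.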

S. Tanna
Wednesday, February 18, 2004

Disclaimer:  I am not doing advocacy and don't give a damn about how you program or what you use. ;)

My path to learning OOP was choosing the right learning language.  I'd done Java and felt OOP was enforced so much that I actually didn't know how to seriously do it.  Part of knowing something is understanding when not to use it, and with Java you don't get the chance to not use it.  I thought Python was more loose and agnostic, so I could try out other styles.  However, Python makes the tradeoff of "There's preferably one way to do it," so that was doomed as I felt a push to OOP early on in design.  Finally I hit upon Common Lisp as an agnostic language that allowed me to choose when I wanted to try out OOP.  I'm still evolving, but I have a definite sense of what I want to do with OOP and when I want it.  That pays back even with other languages, since I have a good sense of when they're overloading their OOP system to not just do data abstraction, but things like syntactic abstraction too.

I don't think OOP is necessary.  But it is a resource.  Some languages like Java make it a burden, as other features in the language are so weak that one simply must use OOP.

I wish I could offer something other than learning a new language.  Maybe Bertrand Meyer's book is good; people recommend it though I've only skimmed it.  The caveat I always hear is he's biased and wordy, but people still seem to recommend it in spite of that.

Tayssir John Gabbour
Thursday, February 19, 2004

I've heard stuff like "...with Java you don't get the chance to not use it..." a lot, and I just don't see it.  What prevents you from defining a single class, called "Program" perhaps, and writing all of your code inside it?  Is this not effectively what is done when authoring a C/Pascal program?  Inner classes on Program can be effective replacements for structs, so what's missing?
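As a sketch of that point (all names invented): nothing in the language stops you from writing a C-style Java program as one class full of static functions, with no objects of your own at all.

```java
// Procedural-style Java: one class, all static methods -- effectively
// a C program wearing a class declaration as a wrapper.
class Program {
    static int[] readScores() {              // stands in for input routines
        return new int[] { 70, 85, 92 };
    }

    static int sum(int[] xs) {
        int total = 0;
        for (int x : xs) total += x;
        return total;
    }

    static double average(int[] xs) {
        return (double) sum(xs) / xs.length;
    }

    public static void main(String[] args) {
        System.out.println("average = " + average(readScores()));
    }
}
```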

As far as OOP goes, I agree with what T. Norman and a few others have stated above.  If OO seems to be getting in the way of whatever you are trying to do, then you are probably not using it correctly (with a few exceptions). 

As to Bertrand Meyer's book, he is wordy, but he is also precise and clear. He does proselytize for Eiffel, but he also believes the language provides the clearest expression of the concepts he discusses.  At the end of the day, if you want to use OO-anything to maximum advantage, then ignore his advice at your peril. 

Thursday, February 19, 2004

My opinion (and all this is opinion) is that it's "bad Java" when you choose to write procedurally.  People will get on message boards and complain about how you dumped all that code into one uber-class.  I do think that Python and lisp give you more ability to write a solid program without demanding as much from your style as Java does.

But it's all a matter of tradeoffs.  If you look at Java's context and goals, it's immensely successful; the mainstream now demands GC, sourcecode, "batteries included"...  And a lot of people don't care about "finding their style."  Java is a pleasant environment and I can imagine choosing it over Python/lisp depending on the task.  But I prefer coming to it from experience, instead of having it as my learning, experimental language.  Emacs generates most of my Java at the moment. ;)

Tayssir John Gabbour
Thursday, February 19, 2004

It is very interesting to see all the comments center around OO.

However, when someone is moving from a procedural system to a modern event-driven system, OO is often not the FIRST mindset that has to change.

The first issue is one of procedural program flow VS event driven code.

There are 51 posts so far in this thread, and not one has identified that the major change in learning a new system, coming from older systems like xBase, was not the introduction of OO, but the fact that your code does NOT flow according to the programmer's wishes anymore.

51 posts...and no takers on this issue!

Does anyone here remember these older procedural systems? (I would guess not!) The issue is not OO; the issue is one of event-driven code vs non-event-driven code. That is the big change. I remember this change well myself.

Companies like Microsoft also identified this change to events as a problem area for new developers.

In an old system like xBase (or just about any pre-Windows development system), when code was written, that code would run, then WAIT for user input. Generally, this means that your startup, or main, code was a launching point to other parts of the system, but the developer would control, and "think" out, the program flow.

Of course, today your code is not sitting waiting for user input. What happens is you click on a button and some EVENT RUNS some code. In other words, your main code is not really needed anymore, and further, that code does NOT wait for user input in general.

The result of this change is that now you have a whole bunch of little pieces of code that run, instead of one large routine. That is the first change one must go through when moving from a procedural system to an event-driven system.

How can we have 51 posts on this subject, but no one bothers to mention this important change?

Without question, now that we have event-driven programming, all those little pieces do start to favor things like separating code out into little objects. However, it is the event-driven nature that started this whole process.

Here is a great little quote from the Access 97 manual (which, by the way, was still courting those older xBase and FoxPro developers who were NOT used to event-driven programming).

This is chapter 1, right near the start:


Event-Driven vs. Traditional Programming

In a traditional procedural program, the application rather than an event controls the portions of code that are run. It begins with the first line of code and follows a defined pathway through the application, calling procedures as needed.

In event-driven applications, a user action or system event runs an event procedure. Thus, the order in which your code is run depends on the order in which events occur; the order in which events occur is determined by the user's actions. This is the essence of graphical user interfaces and event-driven programming: The user is in charge, and your code responds accordingly.

Because you can't predict what the user will do, your code must make a few assumptions about "the state of the world" when it runs. It is important that you either test these assumptions before running your code or try to structure your application so that the assumptions are always valid.
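The quote above can be sketched in a few lines of Java (the dispatcher and the event names are invented for illustration): the program registers handlers up front, and the order in which code runs is decided by whoever fires the events, not by the program's main routine.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A toy event dispatcher: handlers are registered up front, and the order
// in which they run depends entirely on the order the events are fired in.
class EventLoop {
    private final Map<String, Runnable> handlers = new HashMap<>();
    final List<String> log = new ArrayList<>();

    void on(String event, Runnable handler) {
        handlers.put(event, handler);
    }

    void fire(String event) {          // the "user" drives this, not main()
        Runnable h = handlers.get(event);
        if (h != null) h.run();
    }
}
```

Each handler is one of those "little pieces of code", and -- exactly as the manual warns -- it must check its own assumptions about the state of the world, because nothing guarantees which events ran before it.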


I only wanted to point out here that the change from a procedural system to a modern system entails changing how the program is viewed, and this change of view comes BEFORE the issues of OO should even be brought up.

I should add that I have also been doing this for 15+ years, and thus I guess most posters here forget what procedural code was like before event-driven code (or I am just too old now!).

And, to the original poster:

I came from a procedural type environment before I made the switch to windows. (in fact, I did a good stint with FoxPro).

So, here are some notes on when, and why, I would use class objects in ms-access. Since the example is for ms-access, your database point of view will likely find the following article of interest:

  Using Class objects with ms-access.
  By Albert D. Kallal
  Tuesday, September 16, 2003
  Why would I want to use a class object in ms-access?

Albert D. Kallal
Edmonton, Alberta Canada

Albert D. Kallal
Friday, February 20, 2004

Hi Albert,

I'm not sure I agree with your thesis that event driven coding is such a big leap from older, self-contained, procedural style coding.  I remember when a lot of work began to shift over, and I don't recall it being that big a deal for either myself or the folks I was working with at that time.  I do remember, though, that magazine articles and gurus were touting it as something fundamental.

As evidence for my point of view, I remember a fellow I was working with in those days describing the situation as writing a function library, but using someone else's main loop.  Since we'd all written numerous function libraries this was all very familiar stuff, and certainly the code under the "function library interface" was the same plain old procedural code we'd always been writing.

It seems to me that the biggest problem we faced moving into event-driven programs was sending output to the highly structured GUIs that usually came with them.  Earlier, we used to just write line after line of output to files or screens.

When GUIs came along, this got to be quite a bit trickier.  It seems natural (now) that one attack on this problem would be to "objectify" the GUI, making it easier to route the results of computations to the correct places on the screen and to ensure the correct presentation of those results.  Couple this to the need to abstract a bunch of different kinds of input where there used to be only streams of characters, and I can see why objects gained a foothold in the user interface early on.

So, anyhow, I guess what I am saying is that I don't think it was the event driven stuff that drove us to objects at that time; I think it was the need to support much richer input and output "streams" that did it.  It just so happened that these things happened concurrently, so what seems like cause and effect was really just correlation.

then again, I may be mistaken
Friday, February 20, 2004

Thanks Albert,

I'll give it a read.

Mike grace
Friday, February 20, 2004

Incidentally, you might like Timothy Budd's _Multiparadigm Programming in Leda_.  Leda's a language with semicolons 'n stuff that a C# programmer would probably love.  He talks about OOP as being about communicating little machines with limited memory that do computations.

A nice thing about event-driven stuff is that if you're using a GUI, you'll probably be forced to use and learn about events...  Though IIRC Trolltech's Qt used functions for events in C++, with the help of a preprocessor.  It actually worked.  In C#, you might use delegates (which I suppose are much the same thing as function references?) for the same thing.
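Since a C# delegate is roughly a typed reference to a method, the same wiring can be sketched in Java with a functional interface standing in for the delegate (all the names here are made up):

```java
// A "delegate" in Java terms: a functional interface is a typed method
// reference that can be handed to whoever fires the event.
interface ClickHandler {
    void onClick(String buttonName);
}

class Button {
    private final String name;
    private ClickHandler handler;

    Button(String name) { this.name = name; }

    void setHandler(ClickHandler h) { handler = h; }

    void click() {                    // simulates the GUI firing an event
        if (handler != null) handler.onClick(name);
    }
}
```

The caller hands in a lambda (or a method reference) with `setHandler`, and the button invokes it whenever its event fires, without knowing anything about the caller's code.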

Tayssir John Gabbour
Friday, February 20, 2004
