Fog Creek Software
Discussion Board




Is Inheritance a Pillar of OO?

I've had an interesting discussion (argument) at work over whether or not inheritance is possible in VB.  A colleague sent me this link and showed me something similar:

http://www.larrymusa.com/vboop.asp

The approach is using the ability of VB to implement an Interface.  I argued that this is composition, not inheritance. 
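For readers without VB, the technique in question can be roughly sketched in Java (all names here are hypothetical, invented for illustration): the class "inherits" only an interface, and reuses an existing implementation by containment and forwarding, which is composition:

```java
// Sketch of the VB "Implements + delegation" pattern, in Java.
// Animal is the shared interface; Dog reuses BasicAnimal by
// containing it and forwarding, not by extending it.
interface Animal {
    String speak();
}

class BasicAnimal implements Animal {
    public String speak() { return "..."; }
}

class Dog implements Animal {
    // Composition: Dog HAS a BasicAnimal and delegates to it.
    private final Animal inner = new BasicAnimal();

    public String speak() {
        return inner.speak() + " woof";  // reuse plus specialisation
    }
}
```

A caller working against `Animal` cannot tell whether `Dog` inherited or delegated, which is exactly what the argument is about: the reuse and the polymorphism are real, but the mechanism is containment.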

Composition is considered preferable to Inheritance.  I've read this advice in both the GoF book and Effective Java, so calling the technique Inheritance rather than composition is actually doing VB a disservice.

http://c2.com/cgi/wiki?CompositionInsteadOfInheritance
http://saturn.cs.unp.ac.za/~peterw/objects1/inheritance/tsld007.htm

Of course, it's important to call it Inheritance, because Inheritance is one of the 3 pillars of OO, along with Encapsulation and Polymorphism.  Its absence is the oft-quoted reason for dismissing VB6 as a true OO language.

The thing is that Inheritance is one of the most abused concepts in OO.

http://medialab.di.unipi.it/web/IUM/Programmazione/OO/what/abuse.html

As I say, it is now generally accepted that Composition is Preferable to Inheritance, and yet Inheritance is still one of the first things taught when introducing OO to students.  In my experience the proper use of interfaces is given very little time.  Instead, they are made to look like a poor substitute for multiple inheritance.

When Inheritance was introduced as a pillar of OO, was that a mistake for which we are now paying?  Should the three pillars of OO instead be Abstraction, Polymorphism and Encapsulation, with Inheritance relegated to being just one way of achieving abstraction?
   
When the designers of VB decided to allow Composition but not Inheritance were they actually doing a smart thing?

Ged Byrne
Wednesday, March 12, 2003

Inheritance still has an important place in the OOP practitioner's toolkit, as long as it's ensured that subclasses are always going to be logical subtypes, i.e. that they follow the "is-a" rule.

John Topley
Wednesday, March 12, 2003

I think those books say 'favor' composition over inheritance, which is more for when it isn't clear which is better... I would guess the biggest problem is that when people finally get used to it, they 'inherit' the color of their socks and use that to hang their curtains (in my case anyway : ).

Inheritance certainly has its central place, IMO; you just have to be aware of everything you are committing to when you allow it. As to whether a language is OO without it (or without encapsulation), or 'what is a pillar', that's just semantics. Certainly a language is missing one of the big benefits of OO if it lacks any of these things - but that doesn't mean it isn't well suited to its core task.

Robin Debreuil
Wednesday, March 12, 2003


VB 6.0 is object-based.

Object-Based pretty much means "everything but inheritance" (and maybe no operator overloading either :-).

VB.Net is object-oriented. :-)

Yes, inheritance is a pillar of OO.  Books that claim to teach OO in VB (and there are some) are probably figuring the 80/20 rule - but, sadly, they may leave people with a false impression of what OO truly is.

I'm currently taking CS 611 (software engineering) - we just got to OO design.  The professor basically said that OO means UML, Rational Rose, and use cases.  Ugh.  (It's a theory course, not a coding course.)  I feel bad for the students without a coding background who now think that's what OO is. :-) 

regards,

Matt H.
Wednesday, March 12, 2003

In my (limited) experience, nobody at a university has any clue as to what OOP really is about. In one "advanced" Java class, the professor once said that he had never been able to find a purpose for inheritance and that it would be best not to use it...

Frederik Slijkerman
Wednesday, March 12, 2003

then Fred, you were very unlucky at university.

Inheritance is very useful for specialisation etc.  Visual components come to mind as a good example of where inheritance beats interfaces hands-down.

will
Wednesday, March 12, 2003

• Is VB 6.0 object based? Yes.
• Does VB 6.0 allow inheritance of interfaces? Yes.
• Does VB 6.0 allow inheritance of implementation? No.

More interesting, is how many VB 6.0 developers actually use these features, because in my professional experience, it's not many.

John Topley
Wednesday, March 12, 2003

Whether it is or it isn't is not the important thing to me.  The real problem is the number of ex-procedural programmers I meet (and whose code I have to debug) who think they know OO because they've figured out inheritance. 

I can't believe it when I see some of the convoluted, monstrously huge classes, with hundreds of methods, where you're never sure which of the tens of ancestors a method was introduced in. 

In its place, inheritance is fantastic.  But I wish the OO "teachers" would stop emphasising it. 

Rahoul Baruah
Wednesday, March 12, 2003

If you can stomach it read something like http://www.cas.mcmaster.ca/~emil/publications/fragile/

I think the main problem with implementation inheritance is that it is very seductive to use, since it gives you so much result for so little effort to start with, while at the same time binding you into a very broad and very ill-defined long-term contract.
It is like giving live hand grenades to a bunch of preschoolers where the safety pins are popsicle sticks.

Just me (Sir to you)
Wednesday, March 12, 2003

In the big picture it is all about commonality & variability.

Deal with commonality via normalization - inheritance is one mechanism of normalization.


Deal with variability via indirection - polymorphism/dynamic dispatch is one mechanism of indirection.

Indirection and normalization look different at different scales.

Karel
Wednesday, March 12, 2003

Karel,

Your writing is terse, but I will try it anyway.
In real-world systems many things initially fixed become "variable" over time. Sometimes this includes things that you have normalized.
Service chains that make liberal use of interfaces, delegation and aggregation, while initially more work, seem to be a more flexible way of dealing with changes in "variability" over time than normalization in the form of inheritance as implemented in many contemporary programming environments.

Just me (Sir to you)
Wednesday, March 12, 2003

I think big pictures are best described with small words <ducking!> ; )

Robin Debreuil
Wednesday, March 12, 2003

I'm certainly not questioning the _usefulness_ of Inheritance, but rather the _importance_.

The way OO is presented now, inheritance is one of the first concepts introduced.  As Rahoul and Sir say, this is not the right place for it.  It really belongs in the advanced classes.

What should be introduced from the start is the idea of Abstraction, and how it can be achieved.

Ged Byrne
Wednesday, March 12, 2003

Thinking on it, the problem is not just with the teaching, but also with the language design.

In Effective Java, Bloch's 15th item is 'Design and document for inheritance or else prohibit it.'  http://java.sun.com/docs/books/effective/toc.html

At the moment Java and other OO languages allow and encourage inheritance by default.  Surely the language should disallow inheritance by default?

Instead of having to explicitly declare something final, shouldn't everything be final unless the programmer explicitly declares it inheritable?
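A minimal Java sketch of what that discipline looks like when applied by hand today (all class names here are made up for illustration): the class prohibits subclassing with final, and offers an interface as the sanctioned extension point instead:

```java
// Bloch's item applied manually: the concrete class is final,
// so nobody can inherit implementation they weren't promised.
interface Formatter {
    String format(String s);
}

// Sealed against subclassing; its behavior is a closed contract.
final class UpperCaseFormatter implements Formatter {
    public String format(String s) { return s.toUpperCase(); }
}

// "Extension" happens through the interface plus composition,
// not by overriding UpperCaseFormatter's internals.
class ShoutingFormatter implements Formatter {
    private final Formatter base = new UpperCaseFormatter();
    public String format(String s) { return base.format(s) + "!"; }
}
```

Under Ged's proposal the `final` would simply be the default, and only classes explicitly marked inheritable could be extended.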

Ged Byrne
Wednesday, March 12, 2003

Fascinating idea, Ged.

John Topley
Wednesday, March 12, 2003

That's how C# works - all methods are non-virtual by default. So if you want to inherit, you first have to mark the method virtual, and then the subclass has to specifically say it is going to 'override' that method (as opposed to using 'new'). I really like that about it.

Another question could be, should classes be sealed by default... I don't think so, but it is the next step I guess.

Robin Debreuil
Wednesday, March 12, 2003

Robin, what exactly do you mean by 'Sealed'?

Ged Byrne
Wednesday, March 12, 2003

Refactoring in C# is a pain. Adding and removing virtual modifiers takes time, and you have to remember to do it.

And, if I am not mistaken, you only have to declare a method as virtual in the base class. All descendants thereafter are free to override.

When we are deciding today what should be allowed in the future (using final, sealed, virtual, whatever), we are guessing. And every restriction is reduced flexibility and freedom.

I argue that we can't possibly know what the proper use of our object today will be six months from now. So we can't design for it, either. We should design for today's needs, not tomorrow's. If we discover that a class must be sealed, or a method can't be overridden, then fine. If not, we should leave our object open for future extensions.

Thomas Eyde
Wednesday, March 12, 2003

One of the reasons for introducing inheritance so soon must be to explain where the .toString() method came from without us writing it.

Books and courses do tend to approach the language from first principles. Looking at existing code and working out how it works seems to be a far more useful way of learning (for me, anyway).

backintheday
Wednesday, March 12, 2003

I think inheritance is emphasised in the early stages of learning OO because it has such potential power. If someone comes to OO wondering what it's about and whether they should be interested in it, they will look at the pillars. Encapsulation looks like it might reduce mistakes, but you have to write extra code, and every developer secretly believes they won't make those mistakes anyway. Polymorphism looks nice and makes some things conceptually easier, but won't reduce the amount of code you have to write. Ah, but inheritance! Who can resist the thought of being able to create a whole new piece of functionality by just rewriting the few bits that are different? Everyone can see the potential. Everyone can think of a case where that would have been useful. Also, inheritance is unquestionably Object Oriented. It's distinctive. If you are using inheritance, everybody knows you are writing OO code. I think that's why teachers teach it early - it feels like you are really learning something new.

David Clayworth
Wednesday, March 12, 2003

Sealed in C# is the same as final in Java, I believe.

C# is the same as Delphi in the respect that methods have to be specifically marked as overridable. Anders Hejlsberg is the common factor in both languages. I read an interview with him in which he explained why this is the case (it's for performance reasons) but unfortunately I can't find a reference to it now.

John Topley
Wednesday, March 12, 2003

This reminds me of the "OOP Oversold" article at:

http://www.geocities.com/tablizer/oopbad.htm

Which might hold a shred of a viewpoint if you can get past the author's unpleasant ranting.

I've found inheritance useful for limited domains, such as user interfaces, but not very helpful for many common tasks. In most cases, simple prototyping ("cloning") is good enough.

Edoc
Wednesday, March 12, 2003

To answer the question about sealed classes in C#: By marking a class as sealed, you tell the compiler that no other classes can be derived from it.

I do agree that inheritance is a pillar of OO, but that is often grossly abused. Interfaces are a somewhat abstract concept, but anyone can understand inheritance. It just makes sense in our minds. Interfaces take a second longer to click.

Unfortunately, many people aren't well versed in good OOP practices. Just recently, I was reading a book that discussed building a multi-threaded server built on UDP. In it, the author built a Mutex class, then derived his Thread class from Mutex and then finally built an Application class derived from his Thread class. A simple "is a type of" test would have shown that his class structure was broken.

Go Linux Go!
Wednesday, March 12, 2003

The Mutex->Thread->App person mentioned above was misusing inheritance in a classic way -- to 'inherit' functionality (instead of Mutex used-by a Thread used-by an Application).

I think that if you teach newbies to use interfaces (and interface inheritance) to define the 'is-a' relationships, you make them understand inheritance better, because implementation laziness will not be the reason they use it...
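The corrected "used-by" design might look like this in Java, sketched with the standard library's ReentrantLock standing in for the book's Mutex (the Worker class and its methods are hypothetical):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Composition: the worker HAS a lock; it is not a kind of lock,
// and it is not a kind of thread either.
class Worker {
    private final Lock lock = new ReentrantLock();
    private int jobsDone = 0;

    void handleJob() {
        lock.lock();
        try {
            jobsDone++;  // critical section guarded by the contained lock
        } finally {
            lock.unlock();
        }
    }

    int jobsDone() { return jobsDone; }
}
```

The "is a type of" test the previous poster mentioned passes trivially here: a Worker is not a Lock, so it holds one instead of extending one.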

Tal Rotbart
Wednesday, March 12, 2003

John Topley mentioned above that Hejlsberg (the sell-out that he is, sorry...) chose the explicit override approach for performance reasons.  To clarify: Hejlsberg wanted to avoid the v-table lookup on each method call (plus enable inline optimization in some cases), which Java forgoes by having all methods be overridable.

An example scenario where this is disastrous:
You need to inherit from a component and override a certain method to fix a bug that the no-source-code 3rd-party component includes (and which they intend to fix somewhere in Q4).
Well, you just can't, because they haven't explicitly allowed that method to be overridden. So, instead, you need to delegate and redirect the entire API of that component...  And of course that 3rd party has not provided an interface for that component... *Doh*

Tal Rotbart
Wednesday, March 12, 2003

"the sell out that he is, sorry..."

Well he had the choice of remaining at Borland whilst Delphi was evolving into a Web development tool and Borland was wasting millions on silly name changes, or moving to Microsoft and getting the chance to work on the development of a new development platform and language. And get paid a lot of money for doing it.

Not a difficult choice IMHO!

John Topley
Wednesday, March 12, 2003

(IIRC, Java programmers tend to escape the performance problems by using obfuscators, which can declare non-overridden methods "final."  This isn't normally decidable, but the programmer can tell the obfuscator not to worry about weird cases.)

Tj
Wednesday, March 12, 2003

Tal,

If you are talking .NET, with the full metadata present in the assembly, generating the code for a quick full-front aggregate should be a piece of cake.

Just me (Sir to you)
Wednesday, March 12, 2003

"If we discover that a class must be sealed, or a method can't be overridden, then fine. If not, we should let our object be open for future extentions. "

The thing is, it is much easier to unseal a sealed object in version 2 than to put the genie back in the bottle. Imagine doing a library, releasing version 1, and then, maybe for security reasons, realizing that a class/method that is now being used by an untold number of third-party projects has to be taken away. That is the nightmare scenario of leaving all your loose strands open.

If you do leave something open, you had better be sure you can work around any problems that may crop up with it - that alone can add hundreds of hours to a project if you tend to leave things open indiscriminately. It's not that different from making all your methods and properties public, in the sense that you are committing to defend against any abuse that can come from below (which can be much nastier, with access to all that is protected) as opposed to just from the outside. Also, helper methods that could have been private may now need to be protected, so you tend to expose a lot more of your interface than you would have with a sealed class. Which means more documentation, more thinking, more money etc. etc...

I do think inheritance is great, but only in places you are willing to support it!

Robin Debreuil
Wednesday, March 12, 2003

Not wanting to stray too far from the original line of argument, may I suggest that the principal problem underlying the 'inheritance' issue is typically a lack of thought about, or even understanding of, the real relationships that exist between objects (i.e. instances of a particular class).

The standard description that inheritance implements an 'is-a' relationship falls over when you ask the simple question: what do you mean by 'is-a'? My personal preference is that it means 'is a form of', in the sense of a Linnaean taxonomy, with inheritance thus being used to implement aggregation or specialisation of characteristics.

The trouble comes when inheritance is used to implement other relationships or, particularly, to modify behaviour. The classic case of this is creating sub-classes based upon the role that an object can play, for example Employee or Customer or Patient or Doctor or.......

David Roper
Wednesday, March 12, 2003

I think the core feature of OO being discussed here is polymorphism. Polymorphism can be implemented using inheritance or composition or a combination of both.

Inheritance can be used poorly as with any language feature.  Just because it is often abused does not make it a bad idea that should be prohibited. By that argument, we'll have to eliminate all language features, since they all can and are used poorly.

DavidG
Wednesday, March 12, 2003

DavidG,

The argument is not that Inheritance should be prohibited, but rather restricted.

At the moment Inheritance is presented as a basic skill needed to use OO.

Since it is so powerful and liable to misuse, shouldn't it instead be treated as an advanced technique to be used with caution?

With regards to a specific class, the argument is that inheritance should be prohibited unless the class has been designed with inheritance in mind.  As Robin points out, inheritance can cause problems if it is not done properly.

Ged Byrne
Wednesday, March 12, 2003

In my opinion, the most important feature of OO programming, the thing that makes it different from what came before, is polymorphism. Inheritance is a useful reuse technique, and encapsulation can be done in just about any other language.

In early OO systems, like Smalltalk, polymorphism and inheritance were independent concepts. If you called obj.foo, ANY object that implemented the foo method could be used, regardless of its base class. This behavior is preserved in Python and Javascript and other languages, and it's a real boon. Smalltalk didn't support multiple inheritance OR interfaces, but it didn't need them - as long as the object supports the specific methods you need, it's good.

However, OO really took off when C++ started getting mindshare. In C++, every variable must have a declared type. As a result, polymorphism and inheritance got tied together, since inheritance was how you linked types together. In these languages, class A's version of foo is different from class B's version of foo, and never the twain shall meet.

Java arguably took the worst combination of the two techniques - single rooted inheritance tree + static typing. As a result they had to hack interfaces in to work around the limitations.

Anyway, my point is that in order to teach the important thing (polymorphism) instructors using C++ or Java were forced to first teach inheritance, because these languages require inheritance to implement polymorphism.

Chris Tavares
Wednesday, March 12, 2003

Oh, and just a minor comment on inheritance.

Several posts have mentioned that inheritance should only be used for an "is-a" relationship. I would argue that this term is vague and confusing.

What inheritance should be used for is an "is-substitutable-for" relationship. This is known as the Liskov substitutability criterion, and it's an important thing to remember. It makes it much easier to reason about inheritance.

For example:

Square vs. rectangle.

Square is-a rectangle? Yep, at least in the geometric sense.

Square is-substitutable-for rectangle? Nope.

Why? Well, suppose that the interface of rectangle does something like this:

class Rectangle {
    void setWidth(int width);
    void setHeight(int height);
};

In a rectangle, you're allowed to set width and height separately. In a square, changing one changes the other. As a result, the "contract" of the Square class has a constraint that rectangles don't, and so you can't use a square everywhere you use a rectangle.

Therefore, square should NOT inherit from rectangle.
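Chris's contract argument can be made concrete. Here is a hedged Java sketch (hypothetical class names, mutable setters assumed, as in his interface above) of a caller written against Rectangle's contract:

```java
// Mutable Rectangle with independent setters, and a Square that
// tries to preserve its invariant by overriding both of them.
class Rectangle {
    protected int width, height;
    void setWidth(int w)  { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

class Square extends Rectangle {
    @Override void setWidth(int w)  { width = w; height = w; }
    @Override void setHeight(int h) { width = h; height = h; }
}

class LspDemo {
    // Written against Rectangle's contract: the setters are independent,
    // so the caller expects an area of 2 * 3 = 6.
    static int resizeToTwoByThree(Rectangle r) {
        r.setWidth(2);
        r.setHeight(3);
        return r.area();
    }
}
```

Passing a `Square` to `resizeToTwoByThree` returns 9 rather than the expected 6: the subclass silently breaks a postcondition the caller relied on, which is exactly the substitutability failure.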

Hope this helped,

-Chris

Chris Tavares
Wednesday, March 12, 2003

Chris,

Thanks for that.  I was trying to figure out what made OO in Smalltalk and Javascript different from Java and C++.  I knew it had to have something to do with static typing, but wasn't sure.

Ged Byrne
Wednesday, March 12, 2003

I think teaching how to subclass and why that is useful should be right in the basics. Teaching how to make a class that can be subclassed should come sometime in the second semester ; ).

Robin Debreuil
Wednesday, March 12, 2003

The kinds of polymorphism offered in languages like Smalltalk is very different from what is available in C++.  The former offers syntactic polymorphism, while the latter provides semantic polymorphism.  The choice is a trade-off between flexibility and risk.  Bertrand Meyer provides an excellent discussion of these issues (as well as just about every issue concerning OO) in his book:

http://www.amazon.com/exec/obidos/asin/0136291554

I used to develop large systems in Smalltalk and never found lack of static typing to be the issue some folks claim it is.  More often it was a lifesaver.  I can see its value, though.

One thing I do not like about Java's type system is that it goes way out of its way to ensure semantic consistency, but defeats the entire effort by not providing generics.  As soon as you put things into collections, you take on all the risks associated with dynamic typing and get none of the benefit.

As far as inheritance goes, I most often use the approach described by a previous poster, based on the Liskov Substitutability Principle. Robert Martin discusses this in his book:

http://www.amazon.com/exec/obidos/asin/0135974445

passerby
Wednesday, March 12, 2003

Robin,

An important distinction.  I think that makes absolute sense.

Creating applets could be an ideal introduction to subclassing when teaching Java.

Ged Byrne
Wednesday, March 12, 2003

"Therefore, square should NOT inherit from rectangle."

No, a *modifiable* square can't be used where you expect a *modifiable* rectangle.  If squares and rectangles have value semantics (their properties are defined at creation and can't be modified), then there's no problem at all.
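To illustrate rwh's point, here is a minimal Java sketch (hypothetical names) of the same shapes with value semantics, where the substitution holds:

```java
// Immutable rectangle: dimensions are fixed at construction,
// there are no setters to create conflicting contracts.
class ImmutableRectangle {
    private final int width, height;
    ImmutableRectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }
    int width()  { return width; }
    int height() { return height; }
    int area()   { return width * height; }
}

// With no mutation, an immutable square IS substitutable for an
// immutable rectangle: it can never violate any postcondition.
class ImmutableSquare extends ImmutableRectangle {
    ImmutableSquare(int side) { super(side, side); }
}
```

Every operation a caller can perform on an `ImmutableRectangle` behaves identically on an `ImmutableSquare`, so the inheritance is safe once the setters are gone.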

rwh
Wednesday, March 12, 2003

Which just comes back to the fact that no one type hierarchy will work for all systems.

Chris Tavares
Wednesday, March 12, 2003

Chris wrote:

Java arguably took the worst combination of the two techniques - single rooted inheritance tree + static typing. As a result they had to hack interfaces in to work around the limitations.

-------

Interfaces in Java are no more of a hack than pure abstract classes in C++ are a hack. In fact, interfaces in Java are rather elegant, because they bring the concept of interface inheritance into the language as a first-class feature while entirely sidestepping the issues associated with multiple implementation inheritance.

and Chris said:

If you called obj.foo, ANY object that implemented the foo method could be used regardless of it's base class.

-------

What you describe with Smalltalk is structural conformance. The problem with structural conformance is that the implementation details of a method are exposed to the user in order for them to discern the implicit interfaces of the method's parameters. It should be obvious that this is bad - especially for programming in the large. You can see the same problem with C++ templates, which are also based on structural conformance.

Toby Reyelts
Thursday, March 13, 2003

People probably read too much into Java.  Generics is the main big thing for Java 1.5.

Tj
Thursday, March 13, 2003

IMHO inheritance per se is not a pillar of OOP.

You need to look at what stands behind the idea of inheritance, which is re-use. That is the most important idea, not inheritance itself. Using interfaces and composition is just another way to achieve re-use.

Just my 2 cents ...

Liron Levy
Thursday, March 13, 2003

Chris wrote:
"In a rectangle, you're allowed to set width and height separately. In a square, changing one changes the other. As a result, the "contract" of the Square class has a constraint that rectangles don't, and so you can't use a square everywhere you use a rectange."

I'm sorry, but I don't buy this argument:

William of Occam wrote "Do not multiply objects without necessity" (Occam's Razor). If two different classes enforce the same contract, why have two classes at all?

Chris's example illustrates the degree of ambiguity present in the English language. Square and rectangle do present the same interface: they both have 4 sides, and they both have 90 degree angles at all vertices.  As rwh pointed out, the problem might come with modifiable squares and modifiable rectangles - a distinction that would be made as soon as one tried to design a class hierarchy.

The distinction between "is-a" and "is-substitutable-for" needs to be made with care. The ideas of design by contract and inheritance do interact, but when posing the question "Is A substitutable for B?" one needs to be aware of who is doing the substituting. There are not two actors involved (A and B) but a minimum of three: "Is A substitutable for B from the point of view of C?", where C is the class or entity that would be using A or B.

Returning to Chris's example: if I am building a wall, then I do not care whether the blocks I use are square or rectangular; as far as the builder is concerned, the contract need only enforce that they have parallel sides. However, if I am creating windows or panels, then the independence of setHeight() and setWidth() may be an important part of the contract.

treefrog
Thursday, March 13, 2003

The whole Square vs Rectangle problem comes down to contract rather than behaviour.

A Rectangle may have all sides of equal length, in which case it is a square at that moment.  However, it is not guaranteed to be a square.  There is no contract that height will always be equal to width.

When an object is declared as a Square, it is being explicitly stated that this is a square, and that height must always equal width.

Ged Byrne
Thursday, March 13, 2003

A Square and a Triangle both implement the 'GeometricShape' interface which inherits from 'Shape'.
However, both Triangle and Square are implemented differently:
Square inherits from Rectangle (which inherits from Polygon) but enforces the 'equal width and height' contract.
Triangle inherits from Polygon. etc.

Get the drift? :)

Tal Rotbart
Thursday, March 13, 2003

I always thought the square/rectangle was one of the worst examples used to describe inheritance; I'm not sure why it always comes up... Having actually programmed a lot of graphics (and thus done more than one OO shape hierarchy), the conclusion I always come to is: why the hell do you care if it is a square or a rectangle? You can constrain a rectangle, and you can check if both sides are equal if you need to. If that is not enough (it always has been for me, though), you obviously need a separate, unrelated shape. It would take some very broad categories, probably unrelated to geometry, before a 2D shape class would need more than one level of inheritance, IMO.

The overworked shape sample is probably a major reason I (and no doubt others) went hog wild with 10 levels of inheritance at first. I know it isn't meant for real-world use, but why use a sample that encourages the very thing beginners always seem to get wrong? It is a good real-world example of what not to do with inheritance, I guess, if only they would introduce it that way...

Robin Debreuil
Thursday, March 13, 2003

Toby Reyelts wrote:

"What you describe with Smalltalk is structural conformance. The problem with structural conformance is that the implementation details of a method are exposed to the user in order for them to discern the implicit interfaces of the methods parameters. It should be obvious that this is bad - especially for programming in the large. You can see the same problem with C++ templates, which are also based on structural conformance. "

I would disagree with this statement. My experience with templates in C++ and programming in Python (another language that uses what you call structural conformance) has been the exact opposite. Instead of introducing bunches of classes and interfaces that exist solely to get around the type system, I can write more flexible code in fewer lines in less time.

Chris Tavares
Thursday, March 13, 2003

Square is a Rectangle, Rectangle is a Square??

hmm... I thought that squares and rectangles (along with parallelograms, trapezoids, and rhombuses) are  *quadrilaterals*.

apw
Thursday, March 13, 2003

Chris wrote:

"Instead of introducing bunches of classes and interfaces that exist solely to get get around the type system, I can write more flexible code in fewer lines in less time."

Can you give an example of this "more flexible code" and "bunches of classes and interfaces" of which you speak?

// A C++ function that adds two objects
template <typename T> T add( T t1, T t2 ) {
  return t1 + t2;
}

This looks practically identical in Smalltalk, and I imagine Python, too. The primary difference is that the C++ compiler will fail at compile time if T doesn't implement operator+(), while the Smalltalk and Python runtimes will raise exceptions at... run time.

God bless,
-Toby

Toby Reyelts
Thursday, March 13, 2003

Yes, but you're using templates - the very thing you complained about in your previous message! I've got no problem with templates (well, no problem other than that the syntax gets overly complicated when used for template metaprogramming, but that's a whole different discussion).

So, how would this example look in C++ WITHOUT templates?

Let's take the example you give - adding two objects. Well, you'd need to have each object implement an Add method of some sort. So we have:

IAddable Add( IAddable a, IAddable b ) {
    return a.Add( b );
}

Ok, we've just implemented IAddable strictly so that we could sandwich various types into the system. It doesn't matter that Point, Rect, and Complex all implement operator+ already - to be used with this method, the parameters MUST be derived from IAddable.

(Not to mention the problems C++ will have with object slicing and the return value here. Pretend it's Java instead. :-) )

And suppose now we want to subtract two objects as well? Well, I suppose we'd do:

ISubtractable Sub( ISubtractable a, ISubtractable b ) {
    return a.Sub( b );
}

Fairly simple. Now, we want to combine the two, say to calculate ( a + b ) / ( a - b ).

ISomething Calculate( IAddableAndSubtractable a, IAddableAndSubtractable b ) {
  ...
}

We've now added a whole bunch of arbitrary interfaces to work around the fact that the type system requires a matching VTBL rather than matching via method signature.

I hope you see my point here.

Templates make this problem go away. Signature-based polymorphism (like Smalltalk or Python) makes this go away.
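For comparison with the interface-per-operation approach above, here is a hedged sketch of what bounded generics (the mechanism later added in Java 1.5) make of the same example; the Addable interface and Money class are invented for illustration:

```java
// One interface, reused for every type via a bounded type parameter,
// instead of IAddable / ISubtractable / IAddableAndSubtractable...
interface Addable<T> {
    T add(T other);
}

class Money implements Addable<Money> {
    final int cents;
    Money(int cents) { this.cents = cents; }
    public Money add(Money other) { return new Money(cents + other.cents); }
}

class Calc {
    // Works for any T that can add itself; checked at compile time,
    // and the return type stays T rather than decaying to an interface.
    static <T extends Addable<T>> T sum(T a, T b) {
        return a.add(b);
    }
}
```

This sits between the two positions in the thread: the conformance is still nominal (Money must declare it implements Addable), but one generic method replaces the combinatorial explosion of ad hoc interfaces.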

Whether it complains at run time or compile time is a different issue really. You claim that compile-time errors are better; I claim that compile-time checking is overrated. I'm willing to let that argument go; I doubt we'll be able to convince each other.

Chris Tavares
Thursday, March 13, 2003

Regarding this square rectangle thing, it's perfectly OK to ADD RESTRICTIONS when creating a subclass. Maybe this example will help:

Example:

PERSON.

MAN is a type of PERSON and is an appropriate subclass.
WOMAN is a type of PERSON and is an appropriate subclass.

WOMAN has method giveBirth().
MAN does not have method giveBirth().
*So* giveBirth() should *not* be a method of PERSON, but of WOMAN only.

Now, SetWidth and SetHeight separately does not make sense for square. But you could add the restriction to square that setting either one also sets the other one. Or you could derive square and rectangle from a common parent, which is probably a better solution.
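Dennis's common-parent alternative might be sketched in Java like this (the class names, borrowing apw's Quadrilateral, are hypothetical):

```java
// Square and rectangle as siblings under a common abstract parent,
// rather than one inheriting the other's setter contract.
abstract class Quadrilateral {
    abstract double area();
}

class Rect extends Quadrilateral {
    private final double w, h;
    Rect(double w, double h) { this.w = w; this.h = h; }
    double area() { return w * h; }
}

class Sq extends Quadrilateral {
    private final double side;
    Sq(double side) { this.side = side; }
    double area() { return side * side; }
}
```

Neither sibling makes any promise the other must honor; code that needs only `area()` works against `Quadrilateral`, and the setter conflict never arises.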

Dennis Atkins
Thursday, March 13, 2003

I'm with apw on the quadrilateral thing, that was the word I couldn't remember for the parent class.

Dennis Atkins
Thursday, March 13, 2003

Adding a method to a subclass is not a restriction.

If you study programming by contract, you will see that one of the key consistency rules is that subclasses must honor the contracts of their superclasses.  If you write a contract correctly, it states what the value of each basic function on a class's public interface will be after you call a procedure.  The square/rectangle example would fail this test.  It would resemble something like this:

class Rectangle

method SetHeight(pHeight long)
preconditions: ...

mHeight = pHeight

postconditions:
GetHeight = pHeight
GetWidth = old.GetWidth

The square could not honor this contract and also honor the constraint that its width and height are equal.
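That failure can be made executable (names invented; the check encodes the two postconditions above):

```java
class Rect {
    protected int width = 1, height = 1;
    void setHeight(int h) { height = h; }
    int getWidth()  { return width; }
    int getHeight() { return height; }
}

// A square that preserves its own invariant by also changing the width.
class Sq extends Rect {
    @Override void setHeight(int h) { width = height = h; }
}

class ContractCheck {
    // Encodes SetHeight's postconditions:
    //   GetHeight = pHeight, GetWidth = old.GetWidth
    static boolean honorsContract(Rect r) {
        int oldWidth = r.getWidth();
        r.setHeight(5);
        return r.getHeight() == 5 && r.getWidth() == oldWidth;
    }
}
```

Every Rect passes the check; every Sq fails it, which is exactly the substitutability violation.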

.- / --. ..- -.--
Thursday, March 13, 2003

Chris wrote:

"Yes, but you're using templates - the very thing you complained about in your previous message! I've got no problem with templates..."

Sorry, I think we've managed to cross wires here.

You complained that Java interfaces were a hack, I said they weren't. I described how C++ templates provide the structural conformance you're looking for in Smalltalk. I complained about how structural conformance (both in Smalltalk and C++ templates) exposes implementation details. You did not address that issue.

So, to recap, what I'm saying is that the static typing, interfaces, and single inheritance that you lament in Java are elegant, are unrelated to the structural conformance you miss from Smalltalk, and that the lack of structural conformance is more elegantly solved in statically typed languages through a generic mechanism.

Aside from that, I believe people would be better served if the generic mechanism required type constraints, because it would prevent the exposure of implementation details that occurs with structural conformance. Essentially, if Java were to go with something like PolyJ, you'd be able to write compile-time type safe generic methods that don't expose implementation details. Much better than Smalltalk, and unrelated to interfaces, etc...

God bless,
-Toby Reyelts

Toby Reyelts
Thursday, March 13, 2003

If I saw a shape class, in an actual program, that went
Shape > Polygon > Quad > Rect > Square
I would call that very bad design and inheritance gone wrong, wouldn't you? What if someone casts a Square to a Polygon for a single instruction - then what? At the very least you are creating far more work than you are saving.

As for catching bugs at compile time or run time, I agree it isn't that big of a difference - it's the ones that you don't catch at run time that hurt. In my experience, untyped languages allow a lot more of those through by their nature.

Robin Debreuil
Friday, March 14, 2003

The whole square/quadrilateral/rectangle problem highlights one of the weaknesses with inheritance.  Objects invariably belong in more than one set.  Here we have three sets:  Polygon, Regular Polygon and Quadrilateral.

Regular Polygon and Quadrilateral are both sub sets of Polygon.

There is an intersection of Quadrilateral and Regular Polygon – Square.

Regular Polygon also contains non-quadrilaterals such as equilateral triangle.

The whole thing is confused because we are using the inconsistent naming of mathematicians – why not call a square an equilateral rectangle?

So in the design stage it has to be decided what the priorities are.

Rectangle as an abstraction of Square?

Square and Rectangle as siblings on the inheritance tree?

No square class at all, just an IsRegular method for all polygons?

This all depends on the purpose of the object model.

For me this was one of the key things to grasp before finally understanding OO.  My naïve approach was to try to model reality properly, and then reap the benefits from my extra effort.  Truth is that OO just doesn’t provide the tools to model reality that accurately.  You still have to make choices and trade-offs when drawing up the design.
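The third option Ged lists - no Square class, just a regularity query - might be sketched like this (names invented; equal side lengths stand in for full regularity here, which for real polygons would also require equal angles):

```java
// No Square subclass -- "squareness" is just a property of a polygon.
class Polygon {
    final double[] sides;
    Polygon(double... sides) { this.sides = sides; }

    // Approximation: regular iff all sides are equal
    // (a true test would also compare angles).
    boolean isRegular() {
        for (double s : sides)
            if (s != sides[0]) return false;
        return true;
    }
}
```

A square is then nothing more than a regular Polygon with four sides, and the subtyping puzzle never arises.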

Ged Byrne
Friday, March 14, 2003

Ged,

You have just opened the door to an even messier subject, multiple inheritance.

In the real world, objects are not part of any hierarchy whatsoever. Hierarchies are a human invention, for classification purposes, and as has been pointed out, it depends on your purpose which classification you use.

So, objects can belong to any classification hierarchy you choose, as long as it honors the rules of the classification system.

Multiple inheritance can be used to support this, but sadly, multiple inheritance can be abused even more than inheritance alone.

Still, both are good tools, provided that they're used correctly. Otherwise they'll come and bite you, eventually.

Practical Geezer
Friday, March 14, 2003

"In the real world, objects are not part of any hierarchy whatsoever."

There is a DNA hierarchy amongst living things. We share some DNA with yeast, for example.

John Topley
Friday, March 14, 2003

Living things inherit traits from their parents, thus creating a hierarchy, because living things have ... well ... parents.

The rest of the world is not always so well-organized.

Seth Gordon
Friday, March 14, 2003

Anyone who thinks Inheritance is interchangeable with Interfaces has no fucking clue what OO is about.  Get your head out of your ass and out of your textbook.  When you code a real OO project, you will not need to ask these idiotic questions.

Bella
Sunday, March 16, 2003

Got a hangover, bella?


Sunday, March 16, 2003

Bodies are a contract of organs, which are a contract of tissues, which are a contract of cells, which are a contract of molecules, which are a contract of elements from the periodic table, which are a contract of atoms, which are a contract of elementary particles, which are a contract of subatomic particles, which are a contract of superstrings, which are a contract of dimensions of energy . . . looks like inheritance all the way.

JW Peppah
Sunday, March 16, 2003

You confuse composition with inheritance.

Practical Geezer
Monday, March 17, 2003

" ... the idea that reuse was achieved via inheritance, was originally conceived out of experience with the only other serious object-oriented platform at the time, Smalltalk. In Smalltalk, all reuse was done through inheritance--if you wanted to make use of a class, you inherited it, and added whatever specialization, overriding, or new behavior desired. Unfortunately, while this works in a loosely-typed environment like Smalltalk, it doesn't work in a strongly-typed one like C++ or Java. What results is a nightmare scenario where the base classes in the hierarchy can rarely, if ever, be modified without breaking (usually in spectacular fashion) every single one of its implementation derivatives. This was commonly called the fragile base class, or FBC, problem. In the long run, it prevented successful evolution of base classes once released into wide use."

From: http://www.neward.net/ted/weblog/index.jsp?date=20030317#1047892871572
Interesting read in Ted's ongoing "Effective Enterprise Java" saga.
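The fragile-base-class hazard in that quote shows up even within a single release whenever a subclass depends on a base class's self-use. One well-known Java illustration is a counting set; it relies on the documented fact that the inherited AbstractCollection.addAll calls add internally:

```java
import java.util.*;

// Tries to count insertions by instrumenting both entry points.
class InstrumentedHashSet<E> extends HashSet<E> {
    int addCount = 0;

    @Override public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();     // counted once here...
        return super.addAll(c);   // ...and again inside, because the
    }                             // inherited addAll calls add(e)
}
```

Adding three elements via addAll leaves addCount at 6, not 3; and if a future base-class release stopped implementing addAll in terms of add, the behavior would silently change again. That dependence on implementation detail is the FBC problem in miniature.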

Just me (Sir to you)
Monday, March 17, 2003
