Fog Creek Software Discussion Board




Error handling pattern

All this talk about exceptions vs. error codes got me thinking about possible ways of implementing error codes that are useful and straightforward. My first thought was to have a generic ResultObject class that contained a result Status, Message and Value. Any method would return this object, and then you could handle any return explicitly within the normal flow of the code. The problem becomes: how do you determine at design time what the type of the ResultObject's Value is?

My first idea for solving that was to subclass ResultObject, so you would have a different ResultObject for every class you wanted to return. The potential for enormous code bloat was immediately evident and put me off that idea.

My next idea was to embed a ResultObject in every class in my application, so any time you get an object back you could look at its ResultObject for information. But this seemed counterintuitive to the way I program. For example, if I call a function that returns an object I make sure that the returned object is not null. If the object is null, I would like to have some reasoning as to why it is null. So my idea of embedding the ResultObject inside each class won't work because I could never return null from a method.

Then I stumbled upon an article on MSDN about .NET generics (http://msdn.microsoft.com/msdnmag/issues/03/09/net/default.aspx). I am not a C++ programmer, so I didn't immediately think of templates as the solution, but after reading the article about generics I think this pattern could also be applied to C++ templates.

I am sure I am taking too long to get to the pattern, so here it is.

Your method would look something like this:

GenericResult<MyClass> myMethod(){
  // Generate GenericResult
}

Then when you call the method, you could handle it like this:

GenericResult<MyClass> result = myMethod();

if(result.value == null){
  MsgBox.Show(result.message);
}
else{
  MyClass resultValue = result.value;
  // .... process the result
}
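
For concreteness, the GenericResult class itself might look something like this minimal C# sketch. The Success/Failure factory methods are my own addition, and the lowercase public fields simply match the usage above; the idea only requires that the object carry a status, a message and a value:

public class GenericResult<T> where T : class
{
    public T value;          // the real result; null when the call failed
    public string message;   // explanation of the failure when value is null

    public static GenericResult<T> Success(T v)
    {
        return new GenericResult<T> { value = v };
    }

    public static GenericResult<T> Failure(string msg)
    {
        return new GenericResult<T> { message = msg };
    }
}

A method that fails would return GenericResult<MyClass>.Failure("reason"), and the caller's null check above keeps working unchanged.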

It's debatable whether this is better than a try/catch solution, but I think the result is fairly clean. The pros are that it is type-safe for each method's particular return type, and there is little chance of code bloat because there is only the one GenericResult class. And then there is Joel's reasoning that you will always know the exit points of your code.

The cons are that you are completely ignoring the standard way of dealing with errors in the code, which may throw developers for a loop if they are using your APIs. I couldn't think of any more cons because I have only had about half an hour to think about it; I am sure other people will come up with drawbacks of this method.

Gp
Tuesday, October 14, 2003

I think the problem with error codes is that you have to check them after every function call and then do something with them. It might be better to pass around some error-state object, so that it can be checked at any time.
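
A rough C# sketch of that idea (the ErrorState class and its members are hypothetical, just to show a shared error container being passed around and checked later):

using System.IO;

public class ErrorState
{
    public bool HasError { get; private set; }
    public string Message { get; private set; }

    public void Record(string message)
    {
        HasError = true;
        Message = message;
    }
}

public class SettingsLoader
{
    // The callee records failures in the shared state instead of returning a code.
    public void Load(ErrorState errors)
    {
        if (!File.Exists("settings.xml"))
        {
            errors.Record("settings.xml not found");
            return;
        }
        // ... load the file ...
    }
}

The caller can then check errors.HasError whenever it is convenient, rather than after every single call.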

Keith Wright
Tuesday, October 14, 2003

"if I call a function that returns an object I make sure that the returned object is not null. If the object is null, I would like to have some reasoning as to why it is null. So my idea of embedding the ResultObject inside each class won't work because I could never return null from a method"

How about something like the GetSafeHwnd member of MFC's CWnd? If "this" is NULL you could return a pointer/reference to a static "error object" which says "I don't really exist, I'm a figment of my own imagination" (or something like that).
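
In .NET terms that is essentially the Null Object pattern; a rough sketch (the names here are hypothetical):

public class Widget
{
    public string Message { get; set; }

    // A single shared, inert instance handed out instead of null.
    public static readonly Widget ErrorObject =
        new Widget { Message = "I don't really exist, I'm a figment of my own imagination" };

    public bool IsError
    {
        get { return ReferenceEquals(this, ErrorObject); }
    }
}

Callers can test widget.IsError (or compare against Widget.ErrorObject) instead of checking for null.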


Tuesday, October 14, 2003

I think a little more context needs to be established to frame a better debate. So, speaking only for .NET-related development:

1. If the interface is meant to be used by others (3rd-party developers, in-house staff, etc.) and you can't be there to assist in debugging, then I think that ALL methods should:

  a) Throw exceptions if parameters are invalid -- this way the programmer writing the bad call can react immediately (one would hope).
  b) Return True/False or Object/Nothing as appropriate for all methods. Also could use the OUT parameter mechanism discussed previously.
  c) Swallow all other exceptions -- i.e. those raised after the input parameters have been validated (which may require making asynchronous calls) -- and return False/Nothing as appropriate. The caller is responsible for retrieving and inspecting the cached exception.
  d) Optionally provide a status callback for methods that might take arbitrarily long time to execute and/or can fail for reasons that have nothing to do with invalid parameters.

Consider a simple example: a method that writes a file from local disk to a Windows SharePoint Services document library. In my approach the method would return True/False indicating success. The programmer would only get a parameter exception if they provided an invalid SourceFilePath (path doesn't exist, file is locked, ...) or an invalid destination URI. The method may still fail for a myriad of reasons (invalid credentials, network failure, restricted file type at the server, document is checked out, etc.). These failures would be captured and the exception cached. The status callback could be used to provide ongoing information about the progress of the method. Whether UI is displayed or not, whether the UI is modal or not, etc. now become the problem of the consumer of the interface. (A rough sketch of this follows below.)

NOTE: The True/False can obviously be turned into an Enum of possible success/fail values to ease development.
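
Here is a rough C# sketch of that approach. The class, method and member names are mine for illustration only, not an actual SharePoint API; it just shows throwing on bad parameters, swallowing and caching everything else, returning a boolean, and reporting progress through a callback:

using System;
using System.IO;

public class DocumentLibraryClient
{
    // The last failure, cached for the caller to inspect after a False return.
    public Exception LastException { get; private set; }

    // Returns true on success, false on failure; throws only for invalid parameters.
    public bool UploadFile(string sourceFilePath, Uri destination, Action<string> statusCallback)
    {
        if (string.IsNullOrEmpty(sourceFilePath) || !File.Exists(sourceFilePath))
            throw new ArgumentException("Source file does not exist.", "sourceFilePath");
        if (destination == null)
            throw new ArgumentNullException("destination");

        try
        {
            if (statusCallback != null) statusCallback("Uploading " + sourceFilePath + " ...");
            // ... perform the actual upload (network I/O, credentials, etc.) here ...
            if (statusCallback != null) statusCallback("Done.");
            return true;
        }
        catch (Exception ex)
        {
            LastException = ex;   // swallow and cache; the caller decides what to do next
            return false;
        }
    }
}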

----------------

If, however, the interface is not meant to be used externally, then I recommend using as FEW try/catch blocks as possible. In many cases (personal experience) the reason exceptions are occurring is a failure on my part to use the interfaces/methods I'm calling properly. Obfuscating this with try/catch doesn't help anything ... it simply hides the error and often results in further exception hackery -- creating new exceptions and passing them up the chain, etc.

If the interface method is NEVER meant to fail by design (e.g. a local memory write, UI calls, etc.), then the above framework can/should be relaxed.

I have seen large code bases that have adopted the "we gotta protect the user (and other developers) from our programming errors" attitude -- all the way down to the lowest levels of the internal implementations of the APIs. When you mix in multiple threads, cross-process RPC, etc., this can become absolutely confusing and incomprehensible -- not to mention the significant overhead of establishing all the checkpoints on the execution stack. You tend to see this more in macro-driven frameworks (MFC and ATL, to name two) than in simple integration via VB, JScript, etc. What this can result in is two classes of exceptions: the real ones that nobody gets to see, and the simulated ones that are constructed arbitrarily by the API developers.

----------------

The most important thing, however, is consistency within an interface and across a family of interfaces. I've been publicly quoted about my views on the breadth and depth of the Groove APIs (from a platform perspective). However, after 3+ years of working with these APIs I now see huge gaps and hiccups with regard to consistency, principally in the area of callbacks and asynchronous handlers. In some cases the underlying COM semantics are completely abstracted (this is often the oldest code in the platform). In other cases you have to understand the lowest level of COM Advise, Unadvise, connection points, etc. to use certain classes (this is often a result of internal structures being exposed in order to support the next release of the platform). This mixed usage can occur in the SAME CoClass or family of interfaces. This, my friends, is BAD and shows the pressures of evolving a codebase in the face of market pressures.

On the flip side, I continue to find myself surprised at the level of thought that has gone into the .NET Framework. Having $B's to spend and years to get it right helps. Having been either producing or consuming frameworks for nearly 20 years, I find the consistency of .NET on the one hand surprising -- and on the other hand simplifying and empowering. Particularly once you start to master delegates, threading, etc. and construct your hand-rolled interfaces such that arbitrary methods can be called synchronously or asynchronously. Of course, the common runtime underpinning .NET allows much of this trickery -- it actually reminds me of working on $1M Lisp Machines nearly 20 years ago.

-phil

Phil Stanhope
Tuesday, October 14, 2003

"a) Throw exceptions if parameters are invalid -- this way the programmer writing the bad call can react immediately (one would hope)."

No. Make an assertion fail in this case.

#include <assert.h>

void f(int some_parameter, int some_other_parameter)
{
  assert(some_parameter >= 0);          /* illustrative precondition the caller must satisfy */
  assert(some_other_parameter != 0);    /* another illustrative precondition */

  /* ... proceed, relying on the preconditions ... */
}

Follow Design by Contract. A bad call is a bug in the caller, not an exceptional situation in runtime.

My 2 cents.

Daniel
Tuesday, October 14, 2003

So given Daniel's example, in Java should you fail an assertion if the parameters are invalid, or throw an IllegalArgumentException?

John Topley (www.johntopley.com)
Tuesday, October 14, 2003

IllegalArgumentException *is* an assertion failure in my book.

That's the kind of thing that should never be caught except by test harnesses (like the xUnit test runners). Otherwise, it's a programming error and should be fixed like one.

Chris Tavares
Tuesday, October 14, 2003

"Follow Design by Contract. A bad call is a bug in the caller, not an exceptional situation in runtime."

Amen.

DBC
Tuesday, October 14, 2003

But unless you're programming in Eiffel, or (at best) you have all the source code to all the libraries you use, how else are you going to check for the assertion except at runtime?

Chris Tavares
Tuesday, October 14, 2003

Assertions are always checked at runtime.  I don't understand the question.

DBC
Tuesday, October 14, 2003

When you already have the GenericResult object, why do you have to reason about Value being null? What would you do in situations where null is an acceptable answer, but exceptions could still occur?

Why not add an IsValid property and query it? If it says true, then any Value should be treated as correct.
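
The caller's check from the original post might then look something like this (a sketch; it assumes an IsValid property set by whatever code builds the result):

GenericResult<MyClass> result = myMethod();

if (!result.IsValid)
{
    // the call failed: show or log result.message
}
else
{
    // the call succeeded: result.value may legitimately be null and still be valid
    MyClass resultValue = result.value;
    // ... process the result ...
}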

Thomas Eyde
Wednesday, October 15, 2003

John,

"So given Daniel's example, in Java should you fail an assertion if the parameters are invalid, or throw an IllegalArgumentException?"

It's true that both approaches help clarify what the code expects, but assertions are more radical in the sense that you say: I didn't even design for this to happen; I'm only designing the code for the case where the preconditions hold. My experience is that in internal layers this is the right approach, though for the public interface of a library you may prefer to "negotiate with the world" and throw exceptions for invalid arguments. People are simply not used to assertions. That doesn't make them less correct, only less marketable :)

Chris,

"IllegalArgumentException *is* an assertion failure in my book."

The difference (or one difference) is that if you use the "if (invalid) then throw ..." mechanism, you usually can't flip a compiler switch and generate a version with *all* assertions disabled, for extra performance.

Our release version comes with all assertion checking disabled. If it crashes, bad luck. We repeat the steps with the debug version and then usually some assertion fails, pointing us to the cause of the problem. Yes this is very context dependent, but in the context of our application it makes sense.
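
(In .NET the closest equivalent is System.Diagnostics.Debug.Assert, whose calls are compiled away when the DEBUG symbol isn't defined. A small illustration, with hypothetical names:)

using System.Diagnostics;

class Buffer
{
    private readonly int[] data = new int[16];

    public int Get(int index)
    {
        // Checked in DEBUG builds only; the call is compiled away in release builds.
        Debug.Assert(index >= 0 && index < data.Length, "Get called with an out-of-range index");
        return data[index];
    }
}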

DBC,

"Assertions are always checked at runtime.  I don't understand the question."

If checked at all, of course :)

As an aside, some C++ libraries offer compile-time assertions, but that's basically because C++'s genericity mechanism isn't as rich as Eiffel's and you can only say

template <typename T> class ...

instead of (hypothetical syntax)

template <T: is B> class

where B is a required Base class.

Daniel Daranas
Wednesday, October 15, 2003
