Fog Creek Software
Discussion Board




Design question involving transactions

I've come across a situation I'm sure people here have experienced, so I thought I'd get some feedback.

I'm working on an application that makes use of several objects that I've written in the past. Each of these objects has its own data access layer. Everything's been working dandy for years as I've congratulated myself on safely abstracting the database layer away and keeping it internal to the class.

Now, my application needs to make use of these objects, but it needs them to participate in the same transaction. For example, my client pseudo-code would look something like this:

Get a connection to the database.
try {
    Begin transaction
    Call foo.method()
    Call foo2.method()
    Call foo3.method()
    Commit
}
catch { rollback }

The problem is, since foo, foo2 and foo3 all handle their database connectivity privately, I don't have any way of instructing them to participate in the transaction.

My first thought is to overload the constructors in the foo classes to allow passing in a transaction. That seems ugly, though, since I've always preferred to keep database details internal to the class. But I can't think of any other way.

How would you handle this?

FWIW, I'm using C# and SQL 2K, but like many design issues, this seems to transcend a particular language.
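Mark's constructor-injection idea might look like the sketch below. This is a hypothetical illustration in Python with sqlite3 standing in for C#/SQL 2K; `FooRepository` and the `log` table are made-up names. The point is that the object stops opening its own connection and borrows the caller's, so the caller owns the transaction boundary:

```python
import sqlite3

# Hypothetical repository standing in for foo/foo2/foo3. It accepts an
# externally managed connection instead of opening its own, so the caller
# decides where the transaction begins and ends.
class FooRepository:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn  # shared, caller-owned connection

    def method(self):
        self.conn.execute("INSERT INTO log(msg) VALUES ('foo ran')")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log(msg TEXT)")

foo = FooRepository(conn)
try:
    foo.method()
    foo.method()
    conn.commit()          # one transaction spanning every call
except Exception:
    conn.rollback()
```

The trade-off is exactly the one Mark dislikes: the connection is no longer an internal detail of the class.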

Mark Hoffman
Friday, May 21, 2004

How about subclassing foo, foo2, and foo3 and overriding your data access methods to assume that you are inside of a transaction (may require some refactoring)?

Yo
Friday, May 21, 2004

I would recommend creating a new object that encapsulates the functionality of all three, possibly using the actual objects, and wrap the business in a transaction.  Of course you're screwed if they each use independent connections, but hopefully your design is better than that.
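Clay's wrapper suggestion could be sketched like this; again a hypothetical Python/sqlite3 stand-in, with `Foo` and `FooCoordinator` as made-up names. The facade owns the single connection, hands it to the wrapped objects, and wraps the whole business in one transaction:

```python
import sqlite3

class Foo:
    """Stand-in for one of the existing objects, refactored to accept a connection."""
    def __init__(self, conn):
        self.conn = conn

    def method(self, value):
        self.conn.execute("INSERT INTO items(v) VALUES (?)", (value,))

class FooCoordinator:
    """Facade that owns the one shared connection and the transaction boundary."""
    def __init__(self, conn):
        self.conn = conn
        self.foo, self.foo2, self.foo3 = Foo(conn), Foo(conn), Foo(conn)

    def do_all(self):
        try:
            self.foo.method(1)
            self.foo2.method(2)
            self.foo3.method(3)
            self.conn.commit()      # all three succeed together
        except Exception:
            self.conn.rollback()    # or none take effect
            raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items(v INTEGER)")
FooCoordinator(conn).do_all()
```

As Clay notes, this only works if all three objects can be made to share one connection.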

Clay Dowling
Friday, May 21, 2004

I think you're on the right track with the idea of passing the Transaction to the constructors.

I really like the MS Data Access Application Block for .NET:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/daab-rm.asp

It does this in a similar way, via overloading.

I also like to have all my updates, etc. done by SPs (stored procedures). Each SP starts a transaction only if one isn't already open; otherwise it creates a savepoint. That way, the SP either works or has no effect.

This may not be exactly what you want, but it works for me. It also allows nested SPs to use savepoints rather than new transactions, while calling SPs directly does create a new transaction, all automatically.
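In T-SQL this idiom is typically written by checking @@TRANCOUNT and issuing BEGIN TRAN or SAVE TRAN accordingly. Here is a language-agnostic sketch of the same rule using Python/sqlite3 savepoints (the `write` function and table names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
conn.execute("CREATE TABLE t(v INTEGER)")

def write(conn, value, fail=False):
    """Start a transaction if none is open; otherwise nest a savepoint,
    so this unit either works or has no effect (Steve's SP rule)."""
    if conn.in_transaction:
        conn.execute("SAVEPOINT sp_write")
        try:
            conn.execute("INSERT INTO t(v) VALUES (?)", (value,))
            if fail:
                raise RuntimeError("simulated failure")
            conn.execute("RELEASE sp_write")
        except Exception:
            conn.execute("ROLLBACK TO sp_write")   # undo only this unit
            conn.execute("RELEASE sp_write")
    else:
        conn.execute("BEGIN")
        try:
            conn.execute("INSERT INTO t(v) VALUES (?)", (value,))
            conn.execute("COMMIT")
        except Exception:
            conn.execute("ROLLBACK")

conn.execute("BEGIN")
write(conn, 1)             # nested: uses a savepoint
write(conn, 2, fail=True)  # rolls back only its own savepoint
conn.execute("COMMIT")     # value 1 survives
write(conn, 3)             # standalone: owns its own transaction
```

The failed call undoes only its own work; the outer transaction and the standalone call both commit.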

Steve Jones (UK)
Friday, May 21, 2004

I think there are two good options here.  One is what Steve mentioned about moving the transaction details into Stored Procedures...that way, the objects don't have to know/care whether they are part of a transaction or not.

Another method would be to add a layer to your data access tier to shelter your app from having to pass in a Transaction object.  I would implement this as a generic component, such that it can take in any number or type of objects (which would ideally implement a common interface and still be responsible for their own object-to-SQL translations), and persist them all within the same transaction.  This also avoids FooA having to know anything about FooB.

On top of that, you could have another layer which contains the logic for which operations should occur only within the context of a transaction, or you could let your client app handle this if it isn't too complex or varies widely.

Joe
Friday, May 21, 2004

This really pops up now with the use of AOP and containers to "transparently" handle transactions. I don't think transparency is really possible except in the simplest cases, where an operation is truly independent and isolated. That isn't the common case over time, as complexity increases.

So I think making transactions explicit is better than trying to make everything magic. I use this idiom: if a transaction object is passed in, use it; if not, allocate one from a factory.

son of parnas
Friday, May 21, 2004

Are these serviced components (i.e., deriving from System.EnterpriseServices.ServicedComponent)? I know the .NET fascists believe COM is the antichrist, but the services of COM+ can be highly beneficial. If so, take a look at automatic enlistment, which basically makes every distributed-transaction-capable resource automatically take part in a transaction without any explicit assignment.

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconautomatictransactionsnetframeworkclasses.asp

Dennis Forbes
Friday, May 21, 2004

Dennis,

Thanks for the link! I had just started to read up on .NET Enterprise Services...I've known of their existence and their basis on COM+, but I've never had any need for it in the past. Until now.

After posting my message here, I kept digging around and realized that Enterprise Services looks more and more like a potential solution. Regardless, it's something I need to spend more time learning about anyway.

Mark Hoffman
Friday, May 21, 2004

My personal preferred approach is to provide two versions of each method that interacts with the database: one that grabs its own transaction and performs its own operation atomically, and another that takes a transaction and performs its operation in that transaction.

I don't like passing the transaction (Connection object, in my case, which is Java) to the constructor, because I don't like to tie the lifetime of the object to the life of the transaction - I prefer to pass the Connection to the method itself.

Because of this, I don't like managed transactions that try to hide the complexity of the database. Depending on the framework you're operating in and the flexibility you have, a solution like mine may or may not be possible or feasible.

schmoe
Friday, May 21, 2004

schmoe, that's an excellent point about tying the lifetime of the object to the transaction.

If the object is temporary for the duration of the transaction, it's OK. But if the object is long-lived, it doesn't make sense.

A lot of people in application servers seem to be creating objects for each request and tossing them afterwards. So it may depend on activation/passivation/caching.

son of parnas
Friday, May 21, 2004

Good points on the validity of making things "transparent."  I think Joel's article on the Law of Leaky Abstractions applies...

Enterprise Services and COM+ can be great, but only if your application actually warrants such techniques.  If I understand it correctly, the real power of COM+ comes into play under very heavy loads, high availability scenarios, and stringent delegation-based security models.  Hence the term "enterprise" services.

It does add complexity, so you have to weigh the benefits vs the extra work.

Joe
Friday, May 21, 2004

"If I understand it correctly, the real power of COM+ comes into play under very heavy loads, high availability scenarios, and stringent delegation-based security models.  Hence the term "enterprise" services."

COM+ is the newer variant of MTS - Microsoft Transaction Server. Its role was to solve _precisely_ the problem mentioned here: you want to object-orient your system, taking advantage of benefits like connection pooling, but you also need transactions that span multiple objects (including disparate database systems, message queues, or anything else that supports distributed transactions). In newer versions they added minor features like object pooling and Just-In-Time activation, as well as COM+ roles (basic security, which should play a part in all apps).

In other words, this isn't high falutin' technology that only the core system at Citibank uses - it's regular everyday technology.

Dennis Forbes
Friday, May 21, 2004

As a sidenote regarding the high-load and high-availability comment: while JIT activation and object pooling moderately help with load for some very specialized components, there is nothing intrinsic in COM+ that really fulfills those goals. You can couple COM+ with Application Center, but that isn't intrinsic to COM+.

Dennis Forbes
Friday, May 21, 2004

Thanks for the correction Dennis! 

I figured high load scenarios into the equation because hosting distributed COM+ components on one or more separate servers creates more tiers in the app and provides a place to offload resource intensive operations while the front-end servers stay responsive for normal traffic...

Joe
Friday, May 21, 2004

That's DCOM, which is a core service that is used to connect to COM+, but can also remote any COM object. Put a CCW around any old .NET component and you can call it from another machine.

Regarding splitting onto separate systems: in the majority of situations where people think they are developing "enterprise" systems (one of the most abused words in this industry, diluted to mean "as complex a solution as possible"), it actually descales the architecture. In most uninformed setups it's like taking a 4-lane highway between A and B and putting a 32-lane pitted dirt road in the middle. That's a whole other rant...

Dennis Forbes
Friday, May 21, 2004

IoC/Dependency Injection:

1. put an object into the environment
2. use it
3. use it
4. use it
...
9. remove it from the environment

i.e.:

1. put a Connection/Transaction into thread local storage
2. get it from there in method 1. do sql.
3. get it from there in method 2. do sql.
4. get it from there in method 3. do sql.
5. commit.
...
9. remove it from TLS; rollback
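The numbered steps above can be sketched as follows; a hypothetical Python/sqlite3 illustration where `threading.local` plays the "environment" and the function names are made up:

```python
import sqlite3
import threading

_tls = threading.local()   # the "environment" holding the shared connection

def begin():
    # 1. put a Connection into thread-local storage
    _tls.conn = sqlite3.connect(":memory:")
    _tls.conn.execute("CREATE TABLE log(msg TEXT)")

def method1():
    _tls.conn.execute("INSERT INTO log VALUES ('one')")   # 2. get it, do sql

def method2():
    _tls.conn.execute("INSERT INTO log VALUES ('two')")   # 3. get it, do sql

def finish():
    # 5. commit; 9. remove it from TLS
    conn = _tls.conn
    conn.commit()
    _tls.conn = None
    return conn

begin()
method1()
method2()
conn = finish()
```

The methods never take a connection parameter; they all implicitly share whatever the current thread's "transaction context" holds, which is exactly what the container-based approaches do under the hood.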

Vladimir Dyuzhev (http://dozen.ru)
Saturday, May 22, 2004

"I use the idiom if a transaction object is passed in then use it, if not then allocate one from a factory." - son of parnas.

That's what I was saying earlier, except I use SPs to control it. They use an existing transaction, or create one if they need it.

Steve Jones (UK)
Saturday, May 22, 2004
