Fog Creek Software
Discussion Board

Design pattern?

Hi, I have a particular problem I'm trying to solve and I'm wondering if there are any patterns for it, or if someone has done this before...

I have a client-server system.

The client must send data and files to the server.

The data is all updated on a database within a transaction.

Sometimes the transaction can be quite complex.

The files only get stored if the data is valid and stored successfully. However, if the file storage fails then the data must not be updated.

Some files can be quite large (i.e. network-unfriendly), e.g. several MB.

Now my issue is how to handle this. I see three options (there's a rough code sketch after the list).

1.
a. Client sends data and files.
b. Server saves the files, performs the validation and update, and if unsuccessful deletes the files.

Problem: Large files are sent across the network but the request may fail, meaning the transfer was unnecessary and network resources and time are wasted.

2.
a. Client sends data.
b. Server validates the data (as if it were doing the update for real, then rolls back the transaction).
c. If unsuccessful, the action fails. Otherwise:
d. Server requests the files from the client.
e. Server saves the files.
f. Server reruns the validate/update (this time committing the transaction).

Problem: The validate/update has to run twice. With complex transactions this can make the process a bit slow, but more importantly it will adversely affect other clients.

3.
a. Client sends data.
b. Server starts a transaction and performs validation.
c. If validation is unsuccessful then the action fails. Otherwise:
d. Server requests the files.
e. Server saves the files.
f. Server runs the update and commits the transaction.

Problem: The transaction is open for a potentially long time (and may even time out). This means other clients are adversely impacted if they are trying to update data in the same area.
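(To make the comparison concrete, here is a rough sketch of the three server-side flows in Python-style pseudocode. Everything in it is hypothetical: db, validate, update, save_to_disk, delete_file and client.request_files are stand-ins for whatever the real server would use, not any actual API.)

    def option_1(data, files):
        # Files arrive up front; clean them up if the transaction fails.
        paths = [save_to_disk(f) for f in files]    # network cost already paid
        db.begin()
        try:
            validate(data)
            update(data)
            db.commit()
        except Exception:
            db.rollback()
            for p in paths:
                delete_file(p)                      # undo the file storage
            raise

    def option_2(data, client):
        # Dry run: validate and update, then throw the work away.
        db.begin()
        validate(data)
        update(data)
        db.rollback()                               # nothing kept from the dry run
        files = client.request_files()              # pay the transfer only if valid
        for f in files:
            save_to_disk(f)
        db.begin()                                  # second, real run; state may
        validate(data)                              # have changed since the dry
        update(data)                                # run, so re-check everything
        db.commit()

    def option_3(data, client):
        # One transaction held open across the whole exchange.
        db.begin()                                  # locks acquired here...
        validate(data)
        files = client.request_files()              # ...and held during slow
        for f in files:                             # network I/O, which is what
            save_to_disk(f)                         # blocks other clients and
        update(data)                                # risks a timeout
        db.commit()

(Note that option 2's real run can still fail, since someone may have changed the data between the dry run and the commit, in which case the saved files need cleaning up, so it quietly inherits a small version of option 1's problem.)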

Any words of wisdom from someone cleverer than me as to how to handle this?!

Thanks in advance

Gwyn
Thursday, November 27, 2003

I'd think you would be more likely to get an answer if you gave details.

How fast is the network? How many concurrent users? Why are the files several megabytes in size? How many people are likely to be updating the same record at the same time?

Stephen Jones
Thursday, November 27, 2003

I'm not sure the details are relevant.

The network could be an Internet connection (a fast one, think ADSL), the files could be large (several MB), and the data could comprise hundreds of affected records. There may be one main record (that no one else is updating) but many other 'shared' records that someone could potentially be accessing... and the point of a transaction is that all the records are updated as an atomic unit; even people reading any of the other affected records need to see a set that has integrity.

But as I say, I'm not sure the detail is relevant... it just gives the requirement, and the requirement is... well... required!

I think there's enough info here to describe the problem...

Gwyn
Thursday, November 27, 2003

How often do you expect failure to occur?

My experience is that in practice the failure case happens infrequently enough that the 'wasted' effort isn't really a problem (the user was expecting the transaction to take that long anyway).

What is the additional cost of the failure case, i.e. what will the user have to do to correct the problem? If that eclipses the file transfer time, there is no need to optimize it away.

I would tend to go with the simplest solution, 1.  Coming up with a more elaborate scheme that improves maybe 1% of interactions with the server isn't worth the risk inherent in the extra complexity.

If the 1% is in reality much higher then a more complex scheme might be justified.
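(As a back-of-envelope check of that trade-off, with invented numbers rather than measurements:)

    # Invented illustrative numbers, not measurements.
    failure_rate = 0.01          # ~1% of requests rejected
    file_size_mb = 5             # typical large attachment
    uplink_mbps  = 0.25          # ADSL upstream, roughly 256 kbit/s

    transfer_s = file_size_mb * 8 / uplink_mbps    # ~160 s to send the files
    print(failure_rate * transfer_s)               # ~1.6 s wasted per request, on average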

Rob Walker
Thursday, November 27, 2003

How often for failure?

You know what users are like!

Actually, a lot of the validation is done in the client GUI before it gets to the server (which revalidates it, because input can come through other client entry points).

So really, failure due to validation is going to be fairly rare. Hopefully failure due to the environment will be rare too!

The only other case for a request to be rejected is if another user has changed something relevant between the user starting their activity and actually sending the request. You know the deal: User1 requests a record for update. User2 requests the same record for update. User2 updates; User1 tries to update and gets rejected. Again, these should be rare.

So maybe 1%...
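(That User1/User2 rejection is the classic optimistic-concurrency check. A minimal sketch of one common way to implement it, using sqlite3 purely for illustration; the table and column names are made up:)

    import sqlite3

    def try_update(conn, record_id, new_data, version_seen):
        # Bump the version only if nobody changed the row since we read it.
        cur = conn.execute(
            "UPDATE record SET data = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (new_data, record_id, version_seen))
        if cur.rowcount == 0:
            conn.rollback()
            raise RuntimeError("record changed by another user; rejected")
        conn.commit()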

Gwyn
Thursday, November 27, 2003

I suggest you use solution 1, but if the transaction fails, store the files in the client's session and tell the client not to send those files again when it retries during the same session.
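(A rough sketch of that idea, assuming the server can cache uploads per session and key them by a content hash; every name here is invented:)

    import hashlib

    session_files = {}                    # per-session cache: sha256 hex -> bytes

    def handle_request(data, files):
        for f in files:                   # keep whatever file bytes arrived this time
            session_files[hashlib.sha256(f).hexdigest()] = f
        try:
            run_transaction(data, session_files.values())   # hypothetical helper
        except Exception:
            # On failure, tell the client which files we already hold, so the
            # retry sends only the (small) data plus the hashes.
            return {"status": "retry", "have": sorted(session_files)}
        return {"status": "ok"}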

micje
Thursday, November 27, 2003

Looks like clever usage of MSMQ will do the trick here

Seemore
Thursday, November 27, 2003

Clever usage of MSMQ?

Can you give some ideas?

Note that the client could be non-Windows. It is Windows at the moment, but might not be in the future (it just needs to be something talking over a TCP/IP connection).

Gwyn
Thursday, November 27, 2003

Well, if the client can be non-Windows, MSMQ is pretty much out of the picture, unless you want to use MQSeries.
This type of problem is usually solved using a reliable message-queue mechanism over an unreliable network.
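(The generic, product-neutral shape of that is a persistent store-and-forward queue on the client: write the upload job locally first, then let a sender drain the queue with retries. A toy sketch, not any particular product's API:)

    import json, os, time, uuid

    QUEUE_DIR = "outbox"                          # durable local queue

    def enqueue(job):
        os.makedirs(QUEUE_DIR, exist_ok=True)
        path = os.path.join(QUEUE_DIR, uuid.uuid4().hex + ".json")
        with open(path, "w") as f:
            json.dump(job, f)                     # survives crashes and dropouts

    def drain(send):                              # `send` raises OSError on failure
        for name in sorted(os.listdir(QUEUE_DIR)):
            path = os.path.join(QUEUE_DIR, name)
            while True:
                try:
                    with open(path) as f:
                        send(json.load(f))
                    os.remove(path)               # delivered; at-least-once overall
                    break
                except OSError:
                    time.sleep(5)                 # back off and retry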

Seemore
Thursday, November 27, 2003

Why not look at something that more or less does the same thing?

Are we talking about collaboration software of some kind, where the files are actually the data being kept in the database?

Can you guarantee that nobody will need to send stuff in by modem or ISDN?

Stephen Jones
Thursday, November 27, 2003

I would like to consider things that do more or less the same thing... except I don't know what does! That's kinda the point of me posting, in case anyone else knows!

The best analogy is... imagine a problem-logging system where the user can include attachments.

There is data (about the problem) and then there are files (attachments).

The server must validate the data and store the files.

This is a client-server app, so looking at browser-based technology (which normally involves sending the files with every request and is generally flaky and unreliable anyway) is not useful.

There *could* be users over modem/ISDN... but it's not really in that arena. Emergency access, maybe, in which case you'd expect it to be crap!

Gwyn
Thursday, November 27, 2003

If they are text files they can be zipped.
Just my 2c.
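(E.g., a couple of lines of zlib on each end; the file name is just for illustration:)

    import zlib

    payload = open("attachment.txt", "rb").read()   # hypothetical attachment
    wire = zlib.compress(payload, 9)                # text often shrinks 5-10x
    assert zlib.decompress(wire) == payload         # server side reverses it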

ttt
Thursday, November 27, 2003

"But as I say I'm not sure the detail is relevant.. it just gives the requirement and the requirement is.. well.. required!"

Details always matter. Requirements are often negotiable and they can often be transcended by good design.

"God is in the details" - Mies van der Rohe
"The devil is in the details" - favorite saying of Arms Control Negotiators.

Jim S.
Friday, November 28, 2003

So you are not going to have collaboration on the files?

Check out the network bandwidth details, and then decide if you can go with option one (save everything) or do the second plan (check the data and then send the files). Possibly even consider both, depending on whether the user is on site or not.

Both God and the devil are in the details, and the most important detail is knowing which one.

Stephen Jones
Sunday, November 30, 2003
