Fog Creek Software
Discussion Board

Client/Server design for slow network

I'm looking for design patterns or papers/books about successful client/server designs for slow networks.

I need to design and implement a system in .NET where several offices (with 4-8 people in each office) connect to a central server and database with a rich client application. Each office has a 128-192 Kbit/s line.

For transport, I think XML is too verbose for this line. I'm going to test remoting for this solution.

Other than the network protocol, have you solved problems like this, e.g.:

- API design, rich or thin interfaces
- caching


Wednesday, February 4, 2004

My company regularly sends hundreds of megabytes through a straw (business DSL, also shared by other employees, *ahem* MPEG movies *ahem*). Sometimes they are file transfers, long-running inserts and selects, or bcp/DTS jobs. What we found out is that DSL is unreliable (no news there) and the DB libraries can't handle these line faults gracefully. At the end of the day you might find yourself taking matters into your own hands: perhaps by setting up secure FTP, chunking the file transfers, MD5-digesting the chunks, verifying the transfer process with email confirmations, and building intelligent resumption after unexplained outages. It could be FTP or web services, doesn't really matter; use whatever works within your security guidelines and is reliable.
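The chunk-and-digest idea above can be sketched roughly like this (Python for brevity, and all the names here are made up for illustration; a real setup would push each chunk over FTP or HTTP and persist the "already received" set between outages):

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # 64 KiB chunks; tune for your line speed

def make_chunks(data, chunk_size=CHUNK_SIZE):
    """Split a payload into (index, chunk, md5-hex) triples for transfer."""
    chunks = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        chunks.append((i // chunk_size, chunk, hashlib.md5(chunk).hexdigest()))
    return chunks

def receive(chunks, already_have=None):
    """Verify each chunk's digest; return indices to re-request on mismatch."""
    already_have = already_have if already_have is not None else {}
    bad = []
    for index, chunk, digest in chunks:
        if index in already_have:
            continue  # resumption: skip chunks that survived the last outage
        if hashlib.md5(chunk).hexdigest() == digest:
            already_have[index] = chunk
        else:
            bad.append(index)  # line fault corrupted this chunk; ask again
    return already_have, bad

def reassemble(received):
    """Stitch verified chunks back together in order."""
    return b"".join(received[i] for i in sorted(received))
```

The point is that after an outage you only re-send the chunks whose digests failed, not the whole file.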

For large jobs you'll want them to run on (or near) the datastore they access the most (especially large batch jobs). At the very least they should run on an application server with a reliable LAN connection to the datastore. Don't tempt fate by pretending that DSL/@Home is going to replace guaranteed service any time soon.

Li-fan Chen
Wednesday, February 4, 2004

For small jobs (where traffic is not a concern), one of the things you can do is compression. XML packets aren't that big, but all that marshalling IS expensive processor-wise... remoting helps a bit.
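Compressing the marshalled payload before it hits the wire is cheap to try; XML's repetitive markup compresses extremely well. A minimal sketch (Python's gzip standing in for whatever compression your stack offers; the sample payload is invented):

```python
import gzip

def pack(payload):
    """Compress a marshalled XML payload before sending it over the line."""
    return gzip.compress(payload)

def unpack(wire):
    """Decompress on the receiving end."""
    return gzip.decompress(wire)

# Repetitive XML markup shrinks dramatically under gzip.
xml = ("<orders>"
       + "<order><item>widget</item><qty>10</qty></order>" * 200
       + "</orders>").encode()
wire = pack(xml)
```

On a payload like this you can expect well over a 5x reduction, which matters a lot on a 128 Kbit/s line.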

What would really help is sending complex XML data islands (Microsoft-speak). This is better than making many calls with simple data islands. First it gives you a chance to reduce the amount of unnecessary code, and second it cuts unnecessary server round-trips.
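The round-trip saving is easy to see with a toy model (Python, all names invented; `FakeServer` just counts trips instead of touching a network):

```python
class FakeServer:
    """Toy server that counts network round-trips."""
    def __init__(self):
        self.round_trips = 0

    def call(self, op, *args):
        """Naive style: one round-trip per logical call."""
        self.round_trips += 1
        return self._dispatch(op, args)

    def call_batch(self, requests):
        """Batched style: one round-trip carries many logical calls."""
        self.round_trips += 1
        return [self._dispatch(op, args) for op, args in requests]

    def _dispatch(self, op, args):
        if op == "get_customer":
            return {"id": args[0], "name": "customer-%d" % args[0]}
        raise ValueError(op)
```

Ten naive calls cost ten round-trips; the same work batched costs one. On a high-latency line that difference dominates everything else.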

Remoting calls are usually scoped to an object or a function/method. At the very low level, if a call is small enough it can piggyback on the IP packet containing a previous call's answer.

Li-fan Chen
Wednesday, February 4, 2004

Make it a thin client.

Wednesday, February 4, 2004

A: can you tell me why you think a thin client would work in an environment where bandwidth is one of the most important issues? I'm not sure HTML is bandwidth-friendly.

Wednesday, February 4, 2004

Just look at the web's success. When it took off in 1996, people accessed it with < 32k modems, and it worked!

Successful internet companies today (Amazon, Google) don't require broadband access for their services.

However, if you absolutely need an ergonomic interface for high-speed querying or data entry, a rich client is indeed better.

Wednesday, February 4, 2004

Often I've seen a proxy program sitting on the client side to cache the data and to help handle errors gracefully.
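A client-side proxy like that boils down to a cache plus retry logic in front of the slow link. A minimal sketch (Python; `fetch`, the TTL, and the retry count are all made-up illustration, not any particular product's API):

```python
import time

class CachingProxy:
    """Client-side proxy: answers repeat reads from a local cache and
    retries the flaky link a few times before giving up."""

    def __init__(self, fetch, ttl=30.0, retries=3):
        self.fetch = fetch        # function that actually hits the server
        self.ttl = ttl            # seconds a cached answer stays fresh
        self.retries = retries
        self.cache = {}           # key -> (timestamp, value)

    def get(self, key):
        hit = self.cache.get(key)
        if hit is not None and time.monotonic() - hit[0] < self.ttl:
            return hit[1]         # fresh cache hit: no network traffic
        last_err = None
        for _ in range(self.retries):
            try:
                value = self.fetch(key)
                self.cache[key] = (time.monotonic(), value)
                return value
            except OSError as err:  # network fault: retry
                last_err = err
        if hit is not None:
            return hit[1]         # stale data beats no data
        raise last_err
```

This is roughly the shape of what Perforce's proxy and Broadway were after: shield the application from the line, and degrade gracefully when it drops.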

Perforce does it.  X11R6.3 ("Broadway", anyone remember it?) tried it too.  Success varied.

Not to start a flamewar, but network errors are great places for exception handling... though that doesn't help if the code is in someone else's DAO/ADO/ODBC/JDBC/whatever stack.

H. Lally Singh
Wednesday, February 4, 2004

Why do you think that is slow? What does slow mean to you? Is it latency? Is it throughput? Before you go designing, you need to run some tests to see what your performance is, and then decide what performance you need.

Those speeds for that many people seem pretty fast to me. The only obvious thing is to avoid doing many small operations over the wire.

son of parnas
Wednesday, February 4, 2004

I don't know if this suggestion is any good, but I once worked on a system designed to work from remote locations (jungle or offshore rigs) where the comms were likely to be via satellite modem, i.e. slow, high-latency, and unreliable.

The solution we chose was to keep local replicas of the database and use a home-grown replicator, built on IBM's MQSeries reliable messaging middleware, to synchronise the remote and central databases. The home-grown part was essentially doing what we'd do today by serialising objects as XML.
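The core of that pattern, stripped of the middleware, is: apply every change to the local replica immediately, queue it durably, and replay the queue against the central database whenever the link is up. A minimal sketch (Python; in-memory dicts and a deque stand in for the real databases and for MQSeries' persistent queues):

```python
from collections import deque

class Replicator:
    """Apply each change locally, then queue it for the central DB.
    drain() replays queued changes in order when the link comes back."""

    def __init__(self):
        self.local = {}        # the office's local replica
        self.outbox = deque()  # stand-in for a persistent message queue

    def update(self, key, value):
        self.local[key] = value                      # local write: instant
        self.outbox.append(("update", key, value))   # queued for central

    def drain(self, central):
        """Replay the outbox against the central store."""
        while self.outbox:
            op, key, value = self.outbox[0]
            central[key] = value       # replay is idempotent, so a crash
            self.outbox.popleft()      # mid-drain just repeats one message
```

Users at the remote site never wait on the satellite link; the queue absorbs the outages.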

David Roper
Wednesday, February 4, 2004
