Leaving open communication standards behind

I am faced with an application server project where the request rate will measure in the thousands per second.  In house, we own the implementations of both the client and the server.  This leaves us free to go with a custom, proprietary setup if we want to.  I see the throughput and low-latency requirements leading towards a communications stack with:

1.  Persistent, open socket connections that get reused.
2.  A fast protocol.
3.  A fast binary message body encoding scheme.  (A rough sketch of what I mean by 1 and 3 follows.)
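
To make that a little more concrete, here is roughly the shape I have in mind on the client side.  The opcode and frame layout below are made up for illustration only; the real format would be whatever we define, since we own both ends.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;

// Illustrative only: one length-prefixed binary request/response over a
// long-lived socket.  The opcode and frame layout are invented, not a
// real wire format.
public class BinaryClient {
    private final Socket socket;
    private final DataOutputStream out;
    private final DataInputStream in;

    public BinaryClient(String host, int port) throws Exception {
        socket = new Socket(host, port);
        socket.setTcpNoDelay(true);          // small messages; don't let Nagle batch them
        out = new DataOutputStream(socket.getOutputStream());
        in = new DataInputStream(socket.getInputStream());
    }

    // Sends one request and blocks for the reply.  The connection stays open,
    // so the next call pays no connect/handshake cost.
    public byte[] call(short opcode, byte[] body) throws Exception {
        out.writeInt(2 + body.length);       // frame length: opcode + payload
        out.writeShort(opcode);              // message type
        out.write(body);                     // binary-encoded payload
        out.flush();

        int replyLength = in.readInt();
        byte[] reply = new byte[replyLength];
        in.readFully(reply);
        return reply;
    }
}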

We have the engineering talent in house to do this.  No doubts there. 

Still, I am getting questions from IT and from higher up the chain, from people who can't believe we aren't just building another J2EE app to do this.  I don't see HTTP, JMS, EJB, XML parsing, etc. meeting our requirements.  I see these technologies as well suited for interoperability, but I own both ends of this pipe.  If interoperability with outside organizations becomes a new requirement, then we will build simple servlets to encode the app server responses in XML.
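
If that day comes, I picture the interop layer being about this thin.  Everything in this sketch is a placeholder (the servlet name, the parameters, the stubbed-out call to our app server); it is only meant to show the shape of the thing.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Thin interop layer: take an incoming HTTP request, call the app server
// over its native protocol, and render the reply as XML for the outside
// world.  callAppServer() is a stand-in for whatever internal client class
// we end up with; here it returns a dummy value so the sketch compiles.
public class QuoteServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String symbol = req.getParameter("symbol");
        String price = callAppServer(symbol);

        resp.setContentType("text/xml");
        PrintWriter out = resp.getWriter();
        out.println("<?xml version=\"1.0\"?>");
        out.println("<quote>");
        out.println("  <symbol>" + symbol + "</symbol>");
        out.println("  <price>" + price + "</price>");
        out.println("</quote>");
    }

    private String callAppServer(String symbol) {
        // Placeholder: the real version would talk to the app server over
        // the internal protocol and return the encoded result.
        return "0.00";
    }
}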

Has anyone else out there experienced this evolution away from some of the open communication standards as a business and its applications evolve over time?  At some point, I think, a successful technology-based company will have enough engineering talent in house, and enough funding, to make some customization cost-effective.

Emmet Jackson
Saturday, April 12, 2003

Do you plan on meeting the requirements with one server or many? If many, using a load-balancing approach, you can probably meet your requirements with more off-the-shelf technology.

Another thing is to look at the requirements honestly and see what they really are.

valraven
Saturday, April 12, 2003

Also note that modern servers are extremely fast, and you can often get away with insane request rates using inefficient protocols.

I benchmarked the parser/serializer in xmlrpc-c (a C library for processing XML-RPC messages) at well over 1,000 calls/second on a 233MHz server.  A good Java implementation on a modern server could probably do even better.

In many applications, the biggest bottleneck is round-trip latency; this can reach 0.5 seconds over the Internet backbone.  If you design to minimize round trips, you're doing pretty well.

Once you've fixed any latency problems, and bought a big enough server, XML often becomes the next bottleneck (especially if you have enormous numbers of clients and limited bandwidth).  XML eats bandwidth, and takes some extra work to parse.  You can reduce bandwidth and parsing time with a dense, easy-to-parse text format (take a look at Scheme's s-expressions; these are pretty flexible and stupidly easy to parse).  Don't go to binary formats unless (a) you've got a good library to build on, or (b) every other possibility has failed.
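
To give a feel for how little code that takes, here's a throwaway sketch of an s-expression reader.  It handles only atoms and nested lists (no string escaping, no number handling), and it's a sketch, not a library recommendation.

import java.util.ArrayList;
import java.util.List;

// Minimal s-expression reader: parses "(add (x 1) (y 2))" into nested Lists
// of Strings.  No escaping, no numbers-vs-symbols distinction; just enough
// to show how little parsing a dense text format needs.
public class SExprReader {
    private final String text;
    private int pos = 0;

    public SExprReader(String text) { this.text = text; }

    public Object read() {
        skipWhitespace();
        if (text.charAt(pos) == '(') {
            pos++;                                   // consume '('
            List<Object> items = new ArrayList<Object>();
            skipWhitespace();
            while (text.charAt(pos) != ')') {
                items.add(read());
                skipWhitespace();
            }
            pos++;                                   // consume ')'
            return items;
        }
        int start = pos;                             // otherwise it's an atom
        while (pos < text.length() && !Character.isWhitespace(text.charAt(pos))
                && text.charAt(pos) != '(' && text.charAt(pos) != ')') {
            pos++;
        }
        return text.substring(start, pos);
    }

    private void skipWhitespace() {
        while (pos < text.length() && Character.isWhitespace(text.charAt(pos))) pos++;
    }

    public static void main(String[] args) {
        // Prints [quote, [symbol, IBM], [price, 84.20]]
        System.out.println(new SExprReader("(quote (symbol IBM) (price 84.20))").read());
    }
}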

Try hacking together some spike solutions using your regular J2EE tools, and measuring the performance.  You might discover that you don't need to do anything special.  If you do discover bottlenecks, you've got evidence to take to management.

Eric Kidd
Saturday, April 12, 2003

At a job I worked at, we did financial transactions using XML, and handled a whole hell of a lot of them per second. +1 for the biggest bottleneck likely being the pipe, not the XML parser (especially if you use a fast, forward-only parser).
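
By forward-only I mean something SAX-style: you stream through the document and pull out just the fields you need, never building a tree in memory. Roughly like this (the element names here are made up, not what we actually used):

import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Forward-only (SAX) parsing: stream through the message and grab the one
// field we need, without building a DOM.  Element names are invented.
public class AmountExtractor extends DefaultHandler {
    private boolean inAmount = false;
    private final StringBuilder amount = new StringBuilder();

    public void startElement(String uri, String local, String qName, Attributes atts) {
        if ("amount".equals(qName)) inAmount = true;
    }

    public void endElement(String uri, String local, String qName) {
        if ("amount".equals(qName)) inAmount = false;
    }

    public void characters(char[] ch, int start, int length) {
        if (inAmount) amount.append(ch, start, length);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<txn><account>42</account><amount>199.95</amount></txn>";
        AmountExtractor handler = new AmountExtractor();
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
        System.out.println("amount = " + handler.amount);
    }
}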

Brad (dotnetguy.techieswithcats.com)
Sunday, April 13, 2003

It doesn't matter. Just make sure you can change it. Isolate the transmission layer on both sides and gather data BEFORE writing custom, bandwidth-/speed-tuned code. You can get great speed by custom-writing and tuning every message, but why bother if alphabet soup will get the job done? Hell, go write it in asm...
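
Concretely, that isolation can be as small as a single seam like this (the names are invented); then swapping alphabet soup for a hand-rolled binary protocol later touches one class, not the whole app.

// One seam between application code and the wire.  The rest of the app only
// sees Transport; whether the bytes travel as XML over HTTP or as a
// hand-rolled binary frame over a raw socket becomes an implementation
// detail you can swap after measuring.  Names here are invented.
public interface Transport {
    byte[] sendRequest(byte[] requestBody) throws java.io.IOException;

    void close() throws java.io.IOException;
}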

I'd suggest doing the least work you can to make it function. It shouldn't take long to change well-factored code if it proves too slow. See what the priority is: it *functions* now, or *functions perfectly* in a month (or whatever). That may make your decision for you.

Mike Swieton
Sunday, April 13, 2003

Large TPC-C benchmarks routinely turn in hundreds of thousands of TPM-C (transactions per minute), using HTTP with HTML output. By the time you strip away the benchmark fog, this often comes out at 10,000 transactions per second or more (600,000 per minute is 10,000 per second).

So, it *can* be done with standard protocols and tools.

Before deciding that your situation is different, I would build a performance prototype: a driver that sends nonsensical requests of about the complexity that you envisage, and a server that does about the right amount of application work, then sends back a nonsensical reply of the right complexity.
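
The driver doesn't have to be anything fancy. Something along the lines of the sketch below will tell you requests per second; the URL and payload size are placeholders for whatever your prototype expects, and you would run several copies in parallel to load the server properly.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Crude performance driver: fire N nonsensical requests of roughly the
// right size at the prototype server and report requests/second.  The URL
// and the payload are placeholders; substitute whatever your prototype
// expects, and run multiple copies for a realistic load.
public class LoadDriver {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://prototype-server:8080/app");
        byte[] payload = new byte[2048];          // roughly a typical request size
        int requests = 10000;

        long start = System.currentTimeMillis();
        for (int i = 0; i < requests; i++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);               // POST the payload
            OutputStream out = conn.getOutputStream();
            out.write(payload);
            out.close();

            InputStream in = conn.getInputStream();
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) { /* drain and discard the reply */ }
            in.close();
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println((requests * 1000.0 / elapsed) + " requests/second");
    }
}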

Only try nonstandard protocols if the above won't perform satisfactorily. Understanding which parts of the above don't perform will be key in deciding where to depart from standards.

Jim Lyon
Sunday, April 13, 2003

Emmet,

1) Pooling is a good idea, but it only works for connected protocols (e.g. database connections); see the sketch after this list for what I mean.
2) A proprietary protocol is OK as long as it doesn't have to go through firewalls and it is stateless (the ends do not synchronize their states).
3) Data compression is usually sufficient.
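
By pooling I don't mean anything elaborate; something like this is the whole idea. The host, port, and the use of plain sockets here are just for illustration, and a real pool would also validate and expire its connections.

import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

// Minimal connection pool: hand out already-open sockets instead of
// opening one per request, and take them back afterwards.
public class ConnectionPool {
    private final String host;
    private final int port;
    private final List<Socket> idle = new ArrayList<Socket>();

    public ConnectionPool(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public synchronized Socket acquire() throws Exception {
        if (!idle.isEmpty()) {
            return idle.remove(idle.size() - 1);   // reuse an open connection
        }
        return new Socket(host, port);             // pool empty: open a new one
    }

    public synchronized void release(Socket conn) {
        idle.add(conn);                            // keep it open for the next caller
    }
}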

J2EE gives you a good framework, minus a few things here and there (IMO, EJB is something to stay away from).

Leaving standards behind comes at a huge price, so be careful.

Cheers
Dino

Dino
Monday, April 14, 2003

Don't go binary. It will bite you later. Stick with XML; if you get bandwidth rather than latency problems, you can always add compression later (XML compresses well with most compression schemes, since the tags repeat).
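
Adding it later really is a small change on each side; you just wrap the output stream and unwrap on the other end. A rough illustration of the kind of saving to expect (the sample message below is made up, and the exact ratio depends on your data):

import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

// XML compresses well because the tags repeat.  Compression can be bolted
// on later by wrapping the output stream; nothing about the message format
// itself has to change.  The sample message here is invented.
public class CompressXml {
    public static void main(String[] args) throws Exception {
        StringBuilder xml = new StringBuilder("<orders>");
        for (int i = 0; i < 500; i++) {
            xml.append("<order><id>").append(i).append("</id><qty>10</qty></order>");
        }
        xml.append("</orders>");
        byte[] raw = xml.toString().getBytes("UTF-8");

        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(compressed);
        gzip.write(raw);
        gzip.close();

        System.out.println("raw: " + raw.length + " bytes, gzipped: "
                + compressed.size() + " bytes");
    }
}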

Peter Ibbotson
Monday, April 14, 2003

Let the code decide.  Write a test using J2EE or whatever you're most comfortable with.

Brent P. Newhall
Monday, April 14, 2003

Make sure you understand your throughput and latency requirements, and then make an informed decision.

"Lots of XML transactions per second" is rarely above 1,000. Which is usually fine, except when you have to do 100,000 per second - which sometimes is the case.

Anecdote (semi on-topic)

By dropping all standards (communication, storage, databases, etc.) and using a tight solution, I was able to reduce a computation from over 6 hours of 100% CPU time to less than one second (yes, really -- lots of preprocessing is required, but the computation that matters takes one second). Among other things, this required integrating a distributed process into one process, so that everything would work in memory and without any I/O overhead.

When I first set out to make that improvement, no one thought it would be useful, because "we only run one computation per day". But that was a cause-and-effect thing: only one computation was run _because_ it took so long. Now it is run within a loop, on the order of 10,000 times a day, in multiple scenarios (and I get complaints that it's a little slow...).

So, on the one hand, no formal requirements analysis would have indicated that standards needed to be avoided - after all, one run per day was achievable with all-standard tools.

On the other hand, the improvement actually changed the way the tool was used, in a way that made it clear that the orthodox design decisions of the original implementation had led to an (eventually) unacceptably inefficient one.

Ori Berger
Monday, April 14, 2003
