Fog Creek Software Discussion Board

Asynchronous lazy load?

Does anyone know of any good strategies for implementing a lazy load asynchronously? I need it to not block.

A summary: lazy load is a technique described in Fowler's Patterns of Enterprise Application Architecture. The point is to load a few objects from the database, and then when you request a member that's not loaded (group.getMembers(), for example), it will transparently go to the database and get it. This way, if you don't need it, it was never loaded, and if you do, you can defer loading until necessary.

My problem: he describes four strategies for implementing this, but all these require that the get operation block for a database access. To go back to my group.getMembers() example before, he says that when you call getMembers(), it sees if it's loaded, and returns the collection if it is, and if not, it hits the database, maps the data to objects, and then returns the newly loaded collection.
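Roughly, the blocking version he describes amounts to something like this sketch. Group, Member, and MemberMapper are made-up names here, and the mapper is assumed to do a synchronous database query:

import java.util.List;

// A rough sketch of the blocking lazy load: the accessor checks whether the
// collection is loaded and hits the database if not.
class Group {
    private final long id;
    private final MemberMapper mapper;   // assumed to do a synchronous DB query
    private List members;                // null until first access

    Group(long id, MemberMapper mapper) {
        this.id = id;
        this.mapper = mapper;
    }

    public List getMembers() {
        if (members == null) {
            members = mapper.findForGroup(id);   // blocks here for the database round trip
        }
        return members;
    }
}

interface MemberMapper {
    List findForGroup(long groupId);
}

class Member { /* fields omitted */ }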

I can't do this. The problem is that the connection with the client is very high latency, so I can't just have my client program block indefinitely while it waits for the data. As far as I can see, this means I can't use a transparent blocking method like Fowler's lazy load.

I've come up with a few solutions involving some really ugly stuff that I really don't want to implement, but right now I see no solution.

I realize that writing a rich client instead of a web client has disadvantages, but there are reasons for me to do so, so let's not debate that. It will probably be a web client eventually too, anyway.

Thoughts? Someone, somewhere, has to have run into this before.

Mike Swieton
Monday, March 03, 2003

Just as a note: the current solution I've been considering uses something resembling the 'state' pattern to implement the unloaded-object stubs, switches state via a singleton when the object reaches the client side, and uses checkpointing to continue the operation where we left off when the appropriate data arrives.

Way way way too complex. There must be something better. This is why I'm so desperate for an answer that isn't 14 and a half classes of OO goop.

Mike Swieton
Monday, March 03, 2003

What about doing the load on a separate thread? Then you can block that thread without touching the rest of the application.

You'll need some functions to move the data returned back to the main thread, but that shouldn't be too much trouble.

I did this to handle slow connections with large datasets in a browser. It would return x rows and then append more rows when the user clicked a button. I pulled the records from the database in a separate thread and then tossed up events to notify the main thread. This left the data in the main thread available to the end user during the fetch and only locked the system for a short time to append the new data to the existing data.
To the end-user the application never stopped. I also added a browser style image that showed them the system was fetching data.
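In a Java rich client, the same idea is roughly this sketch. RowSource and RowSink are made-up names, and Swing is assumed for the UI:

import java.util.List;
import javax.swing.SwingUtilities;

// Do the slow fetch on a worker thread, then hand the rows back to the
// event-dispatch thread so the UI never stops.
class AsyncFetcher {
    public void fetchMoreRows(final RowSource source, final RowSink sink) {
        new Thread(new Runnable() {
            public void run() {
                final List rows = source.nextBatch();     // slow: only this worker thread blocks
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        sink.appendRows(rows);            // quick: append on the UI thread
                    }
                });
            }
        }).start();
    }
}

interface RowSource { List nextBatch(); }
interface RowSink { void appendRows(List rows); }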

Marc
Monday, March 03, 2003

Can you tell more about the context of the problem you are trying to solve?  Is it okay for the user to click something that is lazy loaded and then go do something else while the load happens?  Will the user get a notification when the load is finished?  Do you need to load related information, assuming that "if a user clicks here, he'll very likely also want to see this other information shortly thereafter?"  In other words, what does your problem look like from the user's perspective?

Ajax
Monday, March 03, 2003

Doing it in a separate thread is perfectly feasible, and in fact what I plan to do.

Here's the catch: right now I'm using Java's built-in object serialization to do the networking. The consequence of this is that my objects (say, User) are implemented the same on the client and the server.

The issue this brings up is that the server must not ever block waiting. It does not use the thread-per-connection model, so blocking would have a large number of users waiting.

The client's behavior should be this: say I pop up a view of all the file objects in the database (file name, location, other stuff). If a particular bit of data isn't loaded, it's left blank and filled in when the data arrives. This is easy enough to implement by having all the client's objects stored in, and referenced through, a local hash associating IDs with objects (since I always have the ID of an object, this is feasible).
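Something like this sketch, roughly. FileInfo and ServerGateway are made-up names:

import java.util.HashMap;
import java.util.Map;

// Look objects up by ID, hand back a blank stub if the data hasn't arrived,
// and fill the cache in when the server's reply shows up.
class ClientObjectCache {
    private final Map byId = new HashMap();   // Long -> FileInfo
    private final ServerGateway server;

    ClientObjectCache(ServerGateway server) {
        this.server = server;
    }

    public synchronized FileInfo get(Long id) {
        FileInfo info = (FileInfo) byId.get(id);
        if (info == null) {
            info = new FileInfo(id);          // blank stub: the view shows empty cells
            byId.put(id, info);
            server.requestFile(id);           // fire-and-forget; the reply arrives later
        }
        return info;
    }

    // Called by the network layer when the serialized object comes in.
    public synchronized void fileArrived(FileInfo loaded) {
        byId.put(loaded.getId(), loaded);
        // ...then tell any view showing this ID to repaint, so it re-reads the cache
    }
}

interface ServerGateway { void requestFile(Long id); }

class FileInfo {
    private final Long id;
    FileInfo(Long id) { this.id = id; }
    Long getId() { return id; }
}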

The issue is that the above two approaches add code to the object on the client side that is dependent on the client, because I have to write different accessors on the client side (since they must know about the local cache, which the server side must not).

The problem is that the server and client side objects must differ for the simple solution to be feasible.

I don't want to do this, because it seems to me that I would have to rewrite the network layer by hand to instantiate different objects on the client side (totally custom code). If there's an elegant solution that means I don't need to rewrite my serialization system, I'd rather use that.

Mike Swieton
Monday, March 03, 2003

Trying to Keep It Simple by reusing the data objects on the client and server is a good design approach.  A lot of people are accomplishing this in Java using Aspect Oriented Programming.  AOP is an approach to programming where you add system-level functionality (persistence, logging, security) using an Interceptor-like design pattern that intercepts calls to the business objects.

An easy way to do this is just to use an Interceptor design pattern - a standard pluggable interface that the business objects call before or after being invoked.  The interceptor on the client side can do lazy server-side fetches, and the interceptor on the server side can do the database lookups. 
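A bare-bones sketch of that, with made-up names: the business object is the same class on both sides, and only the interceptor that gets plugged in differs.

import java.io.Serializable;
import java.util.Collections;
import java.util.List;

interface LoadInterceptor {
    void loadMembers(Group group);   // server: run the query; client: fetch from the server
}

class Group implements Serializable {
    private final long id;
    private List members;                           // null until loaded
    private transient LoadInterceptor interceptor;  // not serialized; plugged in per side

    Group(long id) { this.id = id; }

    void setInterceptor(LoadInterceptor interceptor) { this.interceptor = interceptor; }
    void setMembers(List members) { this.members = members; }
    long getId() { return id; }

    public List getMembers() {
        if (members == null && interceptor != null) {
            interceptor.loadMembers(this);          // may fill members in, or queue an async fetch
        }
        return members != null ? members : Collections.EMPTY_LIST;
    }
}

The server plugs in an interceptor that does the database lookup; the client plugs in one that queues a request to the server and leaves the list empty until the reply arrives.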

More complicated Java AOP techniques use dynamic proxies or compile-time code generation.  One popular one is Nanning:

http://nanning.sourceforge.net/

Colin Evans
Monday, March 03, 2003

It sounds like you never want the client to load the data from the database.  This is simply not possible.  Lazy loading can defer the wait, but the load still must be performed at some point.  And if get() is called before the load is done, you better block then.

As for the serialization, I would use one class (which would be the same client/server) for the data transfer, but it might have a wrapper class - the proxy - on the client that would be aware of caching and late loading.
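Roughly like this sketch, with made-up names: UserData is the plain serializable class the client and server share, and UserProxy lives only on the client and knows about the cache and late loading.

import java.io.Serializable;

class UserData implements Serializable {
    long id;
    String name;    // may be null if not loaded yet
}

interface ClientCache {
    UserData lookup(long id);       // whatever has already arrived, or null
    void requestAsync(long id);     // ask the server; the reply fills the cache later
}

class UserProxy {
    private final long id;
    private final ClientCache cache;

    UserProxy(long id, ClientCache cache) {
        this.id = id;
        this.cache = cache;
    }

    // Never blocks: returns what we have and kicks off a fetch for the rest.
    public String getName() {
        UserData data = cache.lookup(id);
        if (data == null || data.name == null) {
            cache.requestAsync(id);
            return "";              // the view shows a blank cell for now
        }
        return data.name;
    }
}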

Brian
Monday, March 03, 2003

Brian:

Well, as for the client making requests of the database, it won't be direct, but what difference does it make? I send a request for data to the server, and I get data back. I don't see the importance of the distinction.

As for the proxy, that is essentially what I've settled on as a solution. It complicates things some, but would better solve the problem than anything else I've seen.

Colin: Is that basically what the interceptor pattern you describe is? That's what it sounds like, but I don't have POSA2 to check (Looks like a good book though, sometime when I want to spend the money...).

Thanks for your suggestions!

Mike Swieton
Monday, March 03, 2003

Hmmmm, almost without thinking: how about a select() in a piece of middleware, with the client polling for the result set? You can still keep the laziness by using a queue at the client connector end.

So: the client needs data and stuffs a message in the queue, the queue dispatches the request, and the middleware spins through a select(), servicing those that send messages, returning data to those that have it available, and kicking off the query (more likely a stored procedure) for those initiating a request.

Queue picks up status returns and data, etc, etc.

You could use threads for any part of that, of course.  If you want to share data across threads, maintain a common data pool, though that will always need some kind of semaphore.
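The client-connector end might look something like this sketch. Request and Middleware are placeholders, and java.util.concurrent is assumed for the queue:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// The UI never blocks, it just drops a request in the queue; a dispatcher
// thread feeds the middleware, and replies come back through whatever
// callback the middleware offers.
class RequestQueue {
    private final BlockingQueue pending = new LinkedBlockingQueue();

    RequestQueue(final Middleware middleware) {
        Thread dispatcher = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        Request r = (Request) pending.take();   // block here, not in the UI
                        middleware.send(r);
                    }
                } catch (InterruptedException e) {
                    // shutting down
                }
            }
        });
        dispatcher.setDaemon(true);
        dispatcher.start();
    }

    // Called from the UI; never blocks.
    public void submit(Request r) {
        pending.add(r);
    }
}

interface Middleware { void send(Request r); }

class Request { /* the ID of the object wanted, etc. */ }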

Simon Lucy
Monday, March 03, 2003

Don't know too much about what you guys are talking about, but it seems like you're trying to save memory resources and offloading the cost to the CPU.  This isn't good, imho.  Hardcore database stuff can still take a TON of CPU, but come on, how often do you run out of RAM these days?  Maybe I'm not understanding the purpose of what you're trying to do.

Vincent Marquez
Monday, March 03, 2003

I believe he's trying to save or conserve time, so to speak.


Monday, March 03, 2003

I'm not really that concerned about optimization. I'm confident my server end will scale well, and as for the client, I know I should be fine, and I'd have to massively screw up for it to really hammer a system, just due to what it's doing.

I was just looking for a way to reduce the complexity of the code, and I think I've gotten a good enough couple of strategies to pull that off, but before it was looking quite bad :)

It'll still be a bit ugly, but a lot less than it could have been :) It'll be a bit of work to implement, but...

Mike Swieton
Tuesday, March 04, 2003

All I'm saying is that even with asynchronous loading, your accessors will always have the possibility of blocking until the load is complete, and there is simply no way to avoid this.

Implementation should be easy, with the exception that since your load can fail, all of your accessors can fail, which causes the error handling code to be more spread out than it would be if the load were done synchronously.
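For example, a rough sketch of that trade-off using a Future: the load runs in the background, but any accessor that needs the result can still block, and it inherits the load's failure as an exception. Names are made up, and java.util.concurrent is assumed.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class LazyMembers {
    private static final ExecutorService POOL = Executors.newCachedThreadPool();
    private final Future pendingLoad;

    LazyMembers(final MemberMapper mapper, final long groupId) {
        pendingLoad = POOL.submit(new Callable() {
            public Object call() {
                return mapper.findForGroup(groupId);   // may throw if the DB call fails
            }
        });
    }

    // Blocks if the load isn't done yet; fails if the load failed.
    public List getMembers() throws LoadFailedException {
        try {
            return (List) pendingLoad.get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new LoadFailedException(e);
        } catch (ExecutionException e) {
            throw new LoadFailedException(e.getCause());   // every accessor ends up handling this
        }
    }
}

class LoadFailedException extends Exception {
    LoadFailedException(Throwable cause) { super(cause); }
}

interface MemberMapper { List findForGroup(long groupId); }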

Brian
Tuesday, March 04, 2003

Maybe you should try a greedy approach instead. That is, as soon as you access a record, a thread starts downloading all of its available data, instead of waiting until the user asks for each piece. If the data doesn't get used, you simply discard it.
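A tiny sketch of the idea, with RecordStore, Record, and Details as placeholders:

// The first touch of a record kicks off a background fetch of everything
// related to it; unused results just get garbage-collected.
class GreedyLoader {
    private final RecordStore store;

    GreedyLoader(RecordStore store) { this.store = store; }

    public Record open(final long id) {
        final Record record = store.loadHeader(id);           // cheap, synchronous
        new Thread(new Runnable() {
            public void run() {
                record.attachDetails(store.loadDetails(id));  // expensive, off the UI thread
            }
        }).start();
        return record;                                        // usable right away, fills in later
    }
}

interface RecordStore {
    Record loadHeader(long id);
    Details loadDetails(long id);
}

interface Record { void attachDetails(Details details); }

class Details { /* whatever the record drags along */ }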

Frederik Slijkerman
Tuesday, March 04, 2003

(First off, I agree with Frederik - maybe "greedy loading" is really the way to go.  But I'd recommend against it if you're really trying to scale a multi-threaded server.  It's good for performance, but not for scalability.)

Perhaps I don't understand the issue here, but if the server can create a connection back to the client, as needed, this isn't a hard problem.  (Perhaps it can't?) 

Pretend like the network communication issue doesn't exist, and that all you have are objects.  All you do is have all your methods that don't want to block take a parameter that is a response object.  The response object has a method that will accept the data, once it is loaded, and do whatever makes sense with it.  (You should probably make an interface for this, and implement it as needed.)  That way, the server gets a response object, or a set of response objects, with the request.  It can put any data it has available into the appropriate response objects right away, then load the other data on a new thread, which fills in the appropriate response objects when the data has finished loading.

This is all hinged on the idea that the server can initiate a connection to the client, but if you are building a "rich" client this shouldn't be a problem.  The idea abstracts out very easily to network communication.
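A rough sketch, pretending the network isn't there: the caller hands over a response object instead of waiting for a return value, and the server fills it in when the load finishes (or fails). All the names are made up.

import java.util.List;

interface MemberResponse {
    void membersLoaded(long groupId, List members);
    void loadFailed(long groupId, Exception cause);
}

class GroupService {
    private final MemberMapper mapper;   // hypothetical DB-backed mapper

    GroupService(MemberMapper mapper) { this.mapper = mapper; }

    // Returns immediately; the response object hears back on another thread.
    public void requestMembers(final long groupId, final MemberResponse response) {
        new Thread(new Runnable() {
            public void run() {
                try {
                    response.membersLoaded(groupId, mapper.findForGroup(groupId));
                } catch (Exception e) {
                    response.loadFailed(groupId, e);
                }
            }
        }).start();
    }
}

interface MemberMapper { List findForGroup(long groupId); }

In the rich client, the client's implementation of the response interface is what the server calls back over the serialized connection.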

Nathan Arthur
Tuesday, March 04, 2003
