Fog Creek Software
Discussion Board




Web server cluster: where do I store the web files?

Hi

I'm planning to build a cluster of two web servers to make my web application more robust.
Everything is OK (load balancing, failover, session management, ...), but I'm not sure where to store the web content (ASP, PHP, JPG, GIF, ...)!

Each server must access the same files to provide the best quality of service.
We change the files many times a day, so the directories need to be kept permanently in sync.

Several solutions are available:

Storing the files on a third server, using Microsoft file sharing (\\server\folder) over a 100 Mbps LAN connection.
It works quite well... but page processing time seems to double or worse: 150 ms with local files becomes 400 ms or more with files on a remote server.

Replicating the files: we'd need big hard disks on each front server (OK, that's not so expensive). The main problem is finding a way to keep perfect copies of the files at all times.
Using Windows DFS means having Active Directory, with a domain, a domain controller... things we don't have at the moment, but could set up if it's a good solution.

Buying a SCSI disk bay... seems perfect, but costs $6,500.

So, if you were me, which solution would you use?

Some information about the web site:
20,000 sessions a day (peak usage)
2.5 million pages per month
The front servers would be something like one or two Xeon 2.4 or 3.0 GHz CPUs and 2 GB of RAM.

Sorry for my bad English, it's not my native language.

Olivier B
Friday, September 26, 2003

Have both servers keep their own copies, and use source control. So you'd have a third server where you'd keep the full CVS (or whatever your favorite flavor is) tree and develop off of that. When you're satisfied it's production material, tag it, and check out that branch to each of the production servers.

You can even write a little program to log in to each server and check out the tree at the same time, so that the two copies are out of sync for the shortest possible time.

This scheme could be made better with test servers, etc. But I think just using CVS or another source control system would work fine, and it would make it easier to add another machine in the future should you need to (just add it to the script).
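
For what it's worth, the checkout step is easy to script; here is a minimal sketch of the idea (the hostnames, release tag, and web root path are made-up placeholders):

    #!/usr/bin/env python3
    # Minimal sketch: run the tagged CVS update on every production server over
    # ssh, as close to simultaneously as possible. Hostnames, the tag name and
    # the web root path are placeholders.
    import subprocess

    SERVERS = ["web1.example.com", "web2.example.com"]   # hypothetical hosts
    TAG = "RELEASE_2003_09_26"                           # tag cut for this release
    REMOTE_CMD = f"cd /var/www/site && cvs -q update -d -r {TAG}"

    # Start all checkouts in parallel so the two servers diverge as briefly as possible.
    procs = [(host, subprocess.Popen(["ssh", host, REMOTE_CMD])) for host in SERVERS]

    for host, proc in procs:
        if proc.wait() != 0:
            print(f"checkout failed on {host}")

Adding a third machine later is then just one more entry in the server list.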

Andrew Hurst
Friday, September 26, 2003

An external disk subsystem that both nodes can use would be the best way to go.

If you want failover, you cannot have the disks inside the machines: if one of them loses power, you lose access to whatever is on that machine's disks.

Replication gives you headaches and introduces the risk of the two machines ending up with different sets of files whenever replication fails (temporary network outages and other network-related problems).

A third server would introduce a single point of failure. What is the point of having two machines load balanced and nicely clustered if they both depend on a third machine?

Patrik
Friday, September 26, 2003

At the moment our office and our co-host are going with the "the web server should access files stored on local SCSI drives" style of serving. Because there are cold-standby and hot-standby servers, the newly minted master pages have to be replicated perfectly to all the web servers. We are trying out a few pieces of replication software that let us transfer the files properly; PeerSync is the one we are evaluating right now.

The only way to reduce that 150 ms is to change protocols. Network bandwidth may be plentiful, but fetching a file over TCP/IP is never faster than a local file open call, doubly so when the file in question is already cached by the Windows kernel.

There is one way to solve this and still keep the files on a single networked file server, but it requires wholesale caching of frequently accessed web views or pages, and that eats up a lot of RAM. If page accesses to your site are evenly distributed, you'll need to partition the cache over several servers and teach the page cache manager which server holds each cached page.
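
The partitioning part can be as simple as hashing the page key to a node; a rough sketch of the idea (the cache node addresses are invented for illustration):

    # Rough sketch: hash each page key to one of several cache nodes so every
    # front server agrees on where a cached copy would live. Node addresses are
    # made-up placeholders.
    import hashlib

    CACHE_NODES = ["cache1:11211", "cache2:11211", "cache3:11211"]  # hypothetical

    def node_for(page_key: str) -> str:
        """Map a page key (e.g. its URL path) to the cache node responsible for it."""
        digest = hashlib.md5(page_key.encode("utf-8")).hexdigest()
        return CACHE_NODES[int(digest, 16) % len(CACHE_NODES)]

    # Every front server computes the same mapping, so a page cached by one
    # server can be found by the others:
    print(node_for("/products/1234.cfm"))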

-- David

Li-fan Chen
Friday, September 26, 2003

You can look into rsync to keep the filesystems in sync:

http://samba.anu.edu.au/rsync/
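
For example, a one-way mirror of the web root might look like this, here wrapped in a small Python script (the host and paths are placeholders; the flags are standard rsync options):

    # Hypothetical one-way sync of the web root from server1 to server2 using
    # rsync over ssh. Host name and paths are made-up placeholders.
    import subprocess

    subprocess.check_call([
        "rsync",
        "-avz",                              # archive mode, verbose, compress over the wire
        "--delete",                          # remove files on the target that were removed on the source
        "-e", "ssh",                         # tunnel over ssh
        "/var/www/site/",                    # source (trailing slash: copy contents)
        "web2.example.com:/var/www/site/",   # destination
    ])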

joe
Friday, September 26, 2003

Are there any good how-tos for rsync on the Windows platform?

Li-fan Chen
Friday, September 26, 2003

IMHO, server clusters are more about scalability than availability. Two web servers in a cluster should NEVER be considered redundancy. Why? Because in my experience the #1 point of failure for a website is network-related.

Hint: two web servers in a cluster must, by definition, be on the same network segment.

So when that network problem happens, they're both gone.

You cluster for scalability - load balancing, etc. And the smartest way to do it is to have two identical independent systems. I concur with the CVS solution - your publishing script simply pushes the pages to two locations. You need source control anyway, so no big deal.

Now for availability, you duplicate your solution at another physical location. :-)

Philo

Philo
Friday, September 26, 2003

Philo... we have two sites... sorry for not pointing that out. Not only are they at different co-hosts, one of them is located in a country less likely to be nuked.

Li-fan Chen
Friday, September 26, 2003

Heh - good answer. I was really addressing Patrik's comments re:failover.

Philo

Philo
Friday, September 26, 2003

The main reason why we chose a cluster is that we use ColdFusion MX... which is a great tool, but with a lot of imperfections.
The main one is that, after anywhere from 2 hours to 2 weeks, for no apparent reason, the server crashes. No matter how the code is built or how much RAM there is, the server eventually crashes :-(
The problem is well known, and many people developing ColdFusion MX applications have it.
Once CF has crashed, nothing seems to bring it back. The best fix is a physical reset of the server, but that means 5 minutes of unavailability... which is IMHO too much.

So... a cluster...
The disk problem is really annoying because of the way we currently access the files.
The .cfm, .jpg, ... files are mostly sent from the development server to the production one over FTP.
But we also have a back-office system for non-developers that lets them publish PDF files, screenshots, ... through an HTML file upload form.
The file is uploaded to the server, then stored in a specific directory.

So CVS alone isn't enough for our problem. It solves the development workflow, but not the back-office uploads.

So the replication must be two-way: a perfect replication where files uploaded to server1 are transferred to server2 and vice versa.
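
One way to avoid full two-way replication would be to push each uploaded file to the other server as the last step of the upload handler; a rough sketch of that idea (the peer host and paths are invented placeholders):

    # Rough sketch of "push on upload": after the back-office saves an uploaded
    # file locally, copy it straight to the peer server so both front servers
    # hold the same file. Peer host and paths are made-up placeholders.
    import subprocess
    from pathlib import Path

    PEER = "web2.example.com"
    UPLOAD_ROOT = Path("/var/www/site/uploads")

    def store_upload(filename: str, data: bytes) -> None:
        UPLOAD_ROOT.mkdir(parents=True, exist_ok=True)
        local_path = UPLOAD_ROOT / filename
        local_path.write_bytes(data)                 # save locally first
        # Then mirror it to the peer; scp over ssh keeps the two servers in step.
        subprocess.check_call(
            ["scp", str(local_path), f"{PEER}:{UPLOAD_ROOT}/{filename}"]
        )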

I looked for information on the Windows replication system. It seems to solve my problem, but it needs a domain controller and all the AD stuff, which is a lot of overhead for just two servers.

I'm not sure, but I think the SCSI bay can only be accessed by two servers. If one day I need a third front server, what do I do?

The caching method is a great idea, because many of our pages could be cached. We are a sort of small amazon.com with many "product pages"... the only thing the server has to do is check the date of the file on the remote server, and if it's the same as the cached copy, there is no need to transfer it.
But I'm not sure; it seems that the main cost of placing the files on a remote server is accessing the files, not transferring them.
For example, I tested a page composed of only 3 different files (using includes), and it is rebuilt very quickly.
But a page with 7 or even 10 templates takes much longer, because of the need to fetch a file, parse it, then fetch another file, and so on...
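
A minimal sketch of that date check, assuming a Windows share as the master copy (the share path and cache directory are made-up placeholders):

    # Minimal sketch of "check the date before transferring": only copy a file
    # from the network share into the local cache when its modification time
    # has changed. Share path and cache directory are invented placeholders.
    import shutil
    from pathlib import Path

    SHARE = Path(r"\\fileserver\webroot")   # hypothetical network share
    CACHE = Path(r"C:\webcache")            # hypothetical local cache directory

    def cached_copy(relative_name: str) -> Path:
        remote = SHARE / relative_name
        local = CACHE / relative_name
        # Re-fetch only if the cached copy is missing or older than the master.
        if not local.exists() or local.stat().st_mtime < remote.stat().st_mtime:
            local.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(remote, local)     # copy2 preserves the modification time
        return local

Note that checking the date on the share is itself a network round trip, so this reduces the amount of data transferred but not the per-file access cost described above.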

So... no really good solution... Does anyone have experience with DFS (the file replication system used in combination with a domain controller)?

Olivier B
Sunday, September 28, 2003

Off topic, but we also had many problems with the ColdFusion MX server crashing and managed to fix them. Our problems basically boiled down to three things: we had limited SQL licences, we had to add the SQL server's IP address to the hosts file or the JNDI lookup failed, and the Verity search engine (not the K2 one) was causing crashes. Also, we initially did an upgrade instead of a fresh install. Anyway, it's all fixed and we haven't had a crash for months. You don't have to put up with the crashes - you may have to rebuild your entire system from scratch, but it should be possible to stop them.

BTW, if you've already done a rebuild, then maybe your sys admin did something that seemed fine for CF5, but doesn't work for CFMX. Remember, this is a Java app, it works differently.

Not The American President
Tuesday, September 30, 2003

Have you guys looked at clustered filesystems?
http://www.polyserve.com/products.html
http://www.sistina.com/products_gfs.htm

ignatius
Saturday, October 4, 2003

OK,

after a meeting with our hosting company, they said that the best way for us is to:
1/ use a hardware load balancer that regularly checks the health of our front servers;
2/ store the files on a Fibre Channel bay.

They rent us both...

I think we should use page caching wherever we can to reduce the load on the disk bay...

Now it's time to work on clustering ColdFusion, JRun servers, ... but that's another story!

Olivier B
Wednesday, October 8, 2003
