Fog Creek Software
Discussion Board




Supporting a Web site when you're solo

So I'm thinking about creating and deploying a simple Web service/application kind of thing. I'm trying to think through a business plan -- nothing fancy, just a couple of pages laying out the basics. I'm doing this to force myself to look at the business in its entirety, not because I'm looking for investor money. In fact my intention is to bootstrap this, starting on a relative shoestring and building it gradually, using agile development techniques to refine the site as the customer base grows, while continuing to support myself primarily with contract work (I'm a one-man shop).

Right now I'm still in the preliminary planning phases -- researching the market, talking to potential customers, prototyping some concepts. But there's one significant yellow flag that's cropped up, and I'm wondering if the fine folks here on this forum have any suggestions for how to work around it.

The issue is: as a solo operator, how do I deal with the fact that running a Web site is a 24x7 business that may need intervention at any time? It's not that I mind having to wake up and fix something in the middle of the night if that's needed, but it's just not physically possible for me to be within, say, 15 minutes of an Internet-connected PC 100.0% of the time -- sooner or later I have to do things like get on an airplane. So what happens if the site croaks while I'm flying to the east coast or something like that? Of course I intend to test the site hard and monitor it carefully to make sure that it doesn't blow up unexpectedly, but you can never remove all possible modes of failure. (I won't be hosting this myself of course, but I'm more worried about application/database issues.)

I've thought of a couple of possibilities. One would be to launch the service in a "beta" mode and make it explicit that it won't be available all the time. (It's more consumer- than business-oriented and not really mission-critical like e-commerce, so that might not be a showstopper, though the thought of a user coming to the site and finding something nonfunctional does really make me cringe.) Another would be to try to find someone who'd be willing to let me pay them to be on call part-time for those times when I can't be near a machine. (Hiring another full-time staffer is out of the question in the near term.) Of course they'd need to be pretty knowledgeable about what's going on so they could actually fix anything that went wrong.

Is there anyone here who could offer their suggestions or experiences dealing with an issue like this? Any information or ideas would be helpful.

John C.
Wednesday, January 08, 2003

John - send me an email with contact info. I may be able to hook you up with a guy I know who could perhaps handle this very thing. I won't be checking this email address till this evening (Eastern, USA).

gottaBeAnonymous
Wednesday, January 08, 2003

I wouldn't worry too much about the website going down. If it does go down, sure, it'll make a bad impression on your customers, etc. eBay used to be down for hours at a time, several days in a row, when they were first growing right after the IPO. You could do what they did and offer a discount or money back for the time the site was down.

The only thing you need to worry about right now is making the website/app popular enough.  Without that there's no need to even worry about the website going down.

By the way, I run a few websites with apps that run on the server, and it's just me and my business partner. We try to get everything working again as soon as possible, but we do it when we wake up in the morning or get back to a computer. The site has only been down for an hour or two a year, so it's not that bad, and hardly anyone gets pissed off, although they may be slightly annoyed.

HeyMacarana
Wednesday, January 08, 2003

Select two cohosting facilities so you can set up two servers. One of the servers needs to be on hot standby. You need to set up a firewall and VPN so that database and web synchronization can happen in near real-time. Just about all cohosting facilities will at least reboot a box or switch a tape 24/7/365, but not all have high-tech damage protection (chemical fire extinguishers that won't soak your server in water). And just about all are served by only one major backbone, so make sure the two cohosting facilities you use are on distinct backbones.
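
To make "near real-time" concrete, here's a rough sketch of one way to do the synchronization -- just an illustration, assuming rsync over SSH and a PostgreSQL database; the hostnames, paths, and database name are made up:

    #!/usr/bin/env python3
    # sync_standby.py -- push web content and a fresh database dump to the
    # hot standby every few minutes. Hostnames, paths, and the database name
    # are placeholders; adapt them to your own layout.
    import subprocess
    import time

    STANDBY = "standby.example.com"        # hypothetical standby host
    WEB_ROOT = "/var/www/site/"            # trailing slash: sync the contents
    DB_DUMP = "/tmp/sitedb.dump"

    while True:
        # Snapshot the database (PostgreSQL custom-format dump shown here).
        subprocess.call(["pg_dump", "-Fc", "-f", DB_DUMP, "sitedb"])
        # Mirror the web content to the standby, deleting stale files.
        subprocess.call(["rsync", "-az", "--delete",
                         WEB_ROOT, STANDBY + ":" + WEB_ROOT])
        # Ship the dump as well so the standby can be restored quickly.
        subprocess.call(["rsync", "-az", DB_DUMP, STANDBY + ":/var/backups/"])
        time.sleep(300)                    # repeat every five minutes

A cron job on the standby that restores the latest dump now and then would complete the picture.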

Having two cohosting facilities means twice the hardware, twice the licensing costs, and twice the babysitting fees to the cohosting companies. But if you are fighting for large corporate customers, that's probably the only way to go; a Fortune 500 company won't rely on you for something as crucial as a web service for anything less.

Having a live (or near-live) backup buys you time, so a problem can be fixed within 24 hours or even a week (assuming it isn't truly critical). Try to place the servers near your workplace or near your customers, but having to fly or drive for hours may be unavoidable.

Your hosting fees will be quite hefty. Unless you decide to use the cohosting facility's firewall, you'll be setting up 1) a firewall, 2) a database server, 3) a web server, and 4) an application server per site; it could be more, or less.

Chances are you'll also want high-speed Internet access at your workplace in order to host a development and staging server farm (not to mention ERP, groupware, intranet, and other application servers), though perhaps with less stringent uptime requirements.

A development and staging farm is crucial because it will make the difficult problems improbable. By testing carefully and burning in your products, you can ensure that very few problems remain by the time the product is mature enough to be migrated to the production site.

Most administrative tasks can be handled easily with the help of Microsoft Terminal Services; if you must use remote X Window, a compressible X variant, or even VNC, that would be acceptable too. This, plus a few phone numbers for the production site's cohosting technicians, will solve 90% of day-to-day problems. Another plus is to write watchdog scripts that email your various mail servers, which then forward the alerts to your PDA or text pager. This gives you early warning of what's going on; you can catch things like overflowing logs and hack attacks using this technique.
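
As a rough illustration of the kind of watchdog script I mean (the URL, alert addresses, and SMTP host below are just placeholders, and you'd run it from a machine outside the production site):

    #!/usr/bin/env python3
    # watchdog.py -- poll the site every few minutes and mail an alert if it
    # stops answering. URL, addresses, and SMTP host are placeholders.
    import smtplib
    import time
    import urllib.request
    from email.mime.text import MIMEText

    SITE = "http://www.example.com/"            # hypothetical site to check
    ALERT_TO = ["pager-gateway@example.net"]    # address that reaches your pager
    SMTP_HOST = "localhost"

    def site_is_up():
        try:
            urllib.request.urlopen(SITE, timeout=30)
            return True
        except Exception:
            return False

    def send_alert():
        msg = MIMEText("Site did not respond at %s" % time.ctime())
        msg["Subject"] = "ALERT: site not responding"
        msg["From"] = "watchdog@example.com"
        msg["To"] = ", ".join(ALERT_TO)
        with smtplib.SMTP(SMTP_HOST) as server:
            server.sendmail(msg["From"], ALERT_TO, msg.as_string())

    while True:
        if not site_is_up():
            send_alert()
        time.sleep(300)                         # check every five minutes

Running a second copy from a different network keeps the watchdog itself from being a single point of failure.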

There are some really great high-level system administration books on the market. I think they would give you even more ideas. Good luck!
-- Li-fan Chen

Li-fan Chen
Wednesday, January 08, 2003

Is there a way to pare this setup down further?

I think a big no-no is using only one cohosting facility, or two facilities on the same backbone. Your clients will want a crucial function call to work just as well over the net as it does inline. Just make sure your customers can't blame YOU for their network connection.

But if you end up setting up 3 virtual machines inside a fat Xeon box running VMWare Workstation or VMWare GSX Server, that might be just fine. No one likes the idea of all the servers going down because VMWare crashed, but VMWare is not the sort of software to crash these days for no apparent reason. And who cares if all the dominoes are down at one site, when you still have a staging/development server back at the office to figure out what the bug is while the entire other site hums along?

If you can't buy more than one server per site, ensure that the server has a good hard drive subsystem: at the least, a UIDE/SCSI-based hot-swappable RAID storage system. These are more costly than software RAID. There must be several major vendors willing to sell you such a configuration for a reasonable price. People will warn you against UIDE drives for very good reasons, so heed those warnings. Stick with brand names for reputation's sake.

Two of these servers at two highly reputable cohosting sites, plus a staging/development server at your office on business Cable/DSL, should be the bare minimum a reasonable customer would expect. Get a 1-888 number attached to a roaming cell phone, or a good voice mail system.

I can't think of anything else, anyone?

-- Li-fan Chen

Li-fan Chen
Wednesday, January 08, 2003

John, I have been heavily involved with this type of work for about 7 years. Li-Fan Chen is wrong; do not think about any of the things he mentioned at the moment. Hey Macarena is right: the only thing that matters is getting users to use your system.

Thinking too hard about reliability is a good way to ensure that your project will never get off the ground.  I personally was involved with killing many businesses by overselling "reliability" during the dot-com boom, and am not wholly proud of my past actions. 

programmeur
Wednesday, January 08, 2003

Thank you all for the great responses. I'm emboldened by the comments by HeyMacarena and Programmeur. Their suggestions reaffirm my original thinking on the topic... and help squelch the fit of doubt I went into last night when I started thinking that customers would give up on me if they experienced any downtime at all. I started having visions of myself tied to a computer with a not-so-long leash for the next few years :-)

I guess I'd forgotten that even some of the big (and well-funded) outfits like eBay and Amazon have had their share of problems.

Anyway, I'm sure this will be only one of many opportunities I'll have to second-guess myself in coming months, but I definitely do not intend to let my occasional fears or concerns stop me from getting out there and making this happen.

Thanks again everyone!

John C.
Wednesday, January 08, 2003

I agree that you need to worry about content. I have my own hobby web site and used to worry about downtime. However, I found that when it did go down (for whatever reason), I would get a spike of visitors when it came back up. People are used to sites going down and will try again if they are interested. Focus on content!

Cheers,
    Sean

Sean MacLennan
Thursday, January 09, 2003

John,

Just wanted to add to the chorus of "just go for it!" voices. All you need is users; you can throw money at hardware and redundancy later. That's the stuff that pays for itself: at the point where you'd actually need that kind of thing, you'll be able to afford it.

Good luck!

aa
Thursday, January 09, 2003

Honestly, what are the chances of there being two John C's out there? Surely a million to one.

John C
Thursday, January 09, 2003

John C.

I would second what a lot of other users have posted. Do not go overboard with the failsafe spending. Use the money to develop the content areas of the site.

Having said that though, remember that four letter word, DATA.

Your site can go down, and people will still use it. Lose lots of data, and no one will touch you with a barge pole. I'm not sure what your service is, but DATA integrity is usually critical. If the teller machine network is not working, that's not so bad, but the bank had better remember how much I have when it comes back online!

What you can do is monitor your users' usage and determine the right cost/benefit data backup policy. If financial transactions are accessed very frequently, you probably want something as close to real-time offsite backup as you can get.

A game server that users typically access once or twice a week might only need a weekly offsite backup.
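
For the simple end of that spectrum, a nightly offsite dump is just a few lines of scripting. A rough sketch, assuming PostgreSQL and an offsite box reachable over SSH (all the names here are invented):

    #!/usr/bin/env python3
    # offsite_backup.py -- dump the database and copy it offsite once a day.
    # Run from cron; the database name and offsite host are placeholders.
    import datetime
    import subprocess

    OFFSITE = "backup.example.net"                 # hypothetical offsite host
    stamp = datetime.date.today().isoformat()
    dump_file = "/var/backups/sitedb-%s.dump" % stamp

    # Dump the database in PostgreSQL's custom format.
    subprocess.check_call(["pg_dump", "-Fc", "-f", dump_file, "sitedb"])

    # Copy the dump to the offsite machine over SSH.
    subprocess.check_call(["scp", dump_file, OFFSITE + ":/srv/backups/"])

How often you schedule it is exactly the cost/benefit call above.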

RAID is good, and must be one of the first goodies you get when you move from shared hosting to your own server, but it will not help when your co-lo's facility or entire box is toasted in a freak accident.

tapiwa
Thursday, January 09, 2003

My word, what are the chances of there being 3 John Cs out there?

John C
Monday, January 13, 2003
