
I am running a hosting server currently equipped with a backup diesel generator. Since the blackout, I have been considering a backup/high availability setup, where if one server went out, the other would pick up the slack. I have been told that this is impossible, and that it can only work if I use a high availability scheme which always keeps one server running.
It's called quasi real-time backup with redundancy failover...
I have been told: "The expensive version of that would be to use the Akamai network; you could have both geographical load balancing and near-instant failover.
That's the hardware version... yikes. Count your pennies, you will need them.
Otherwise, you'd have to build a cluster to start with; there are a lot of ways to do this, and it depends on the service.
A cluster is not necessary, although I've heard the word "cluster" loosely used to describe what you want. IIRC there is a distro out (I have no details, this is from memory) that does this pretty much automagically.
WWW, for example, is much easier to do than mail; you can use rsync to keep the content up to date.
Unless those websites are glued together by a database. Then it becomes more complex...
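For the static-content case, something along these lines run from cron on the standby box should do it (hostname and paths here are made up, and it assumes passwordless ssh keys between the two machines):

    # crontab on the standby box: pull the web document root from the master
    # every 2 hours; --delete drops files that were removed on the master.
    0 */2 * * *  rsync -az --delete -e ssh master.example.com:/var/www/ /var/www/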
Logfiles on the other hand would have to be recombined and processed later on to do statistics.
Log to a separate log machine and you don't have to worry about it. Good idea on an enterprise network anyway...
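With the stock sysklogd that's one line of config on each server plus one flag on the log machine ("loghost" is whatever you call the central log box):

    # /etc/syslog.conf on each server: forward everything to the log host
    *.*                         @loghost

    # on the log machine itself, start syslogd with remote reception enabled
    syslogd -r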
Another cheap way to do this would be to set up a bunch of Squid servers to proxy traffic to your box. This wouldn't do anything for mail, though; to keep that together you'd need some kind of network file system: NFS, AFS or Coda (qmail is NFS aware, which helps). You'd only need to store the user mailboxes (/var/qmail/mailnames) on the network filesystem; I'd keep the queue on a local disk for performance reasons.
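If you go the Squid route, the accelerator (reverse proxy) setup is only a few lines of squid.conf; this is roughly how it looks on the 2.x series, with a made-up backend name (newer versions changed these directives). The mailbox share is just an ordinary NFS export on whichever box holds the Maildirs, again with hypothetical client names:

    # squid.conf (Squid 2.x style): act as an accelerator for one backend
    http_port 80
    # hypothetical origin web server behind the proxy
    httpd_accel_host www-backend.example.com
    httpd_accel_port 80
    httpd_accel_single_host on
    httpd_accel_with_proxy off

    # /etc/exports on the file server: share the mailboxes, keep the queue local
    /var/qmail/mailnames    web1.example.com(rw,no_root_squash) web2.example.com(rw,no_root_squash)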
Just settle for using Maildir. Anything not completed will be tried again by most mail servers. Using a short (1 min) backup window will come in handy here, so only the mails completed within that magic minute will be lost until synced back into the mix after bringing the first machine back to life.
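That one-minute window can literally be a cron job; since Maildir keeps one file per message, a partially delivered message never gets copied half-finished. A sketch with made-up names, again assuming ssh keys between the boxes:

    # crontab on the standby box: pull the Maildirs from the live server every minute
    * * * * *  rsync -az -e ssh master.example.com:/var/qmail/mailnames/ /var/qmail/mailnames/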
For content, I'd designate one system as the "master" for inbound content (FTP, database, etc.), and have everything else mirror that (rsync, replication for MySQL and Postgres, etc.).
Just directing to an IP is good enough... the failover scheme takes care of all the rest.
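The crude version of that is a script on the standby box, run from cron, that pings the master and grabs a floating service IP when the master stops answering. A rough sketch with made-up addresses and interface name; something like heartbeat would do this more cleanly:

    #!/bin/sh
    # Minimal IP takeover sketch for the standby box (hypothetical addresses).
    MASTER=192.168.1.10        # real address of the master
    SERVICE_IP=192.168.1.20    # floating address the clients are pointed at
    IFACE=eth0

    # Master still answering? Then there is nothing to do.
    if ping -c 3 -w 5 $MASTER > /dev/null 2>&1; then
        exit 0
    fi

    # Master looks dead: bring the service IP up here and send gratuitous ARP
    # so the local switch/router learns the new MAC for it.
    ifconfig $IFACE:0 $SERVICE_IP netmask 255.255.255.0 up
    arping -c 3 -U -I $IFACE $SERVICE_IP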
Databases would be the most complex component; you'd have to work out a replication strategy that sorts out how to do the inserts/updates without each system stepping on the other's toes."
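For MySQL, basic master/slave replication is only a few lines of my.cnf; the server IDs, hostname and replication user below are placeholders, and all writes still have to go to the master only, which is how you avoid the stepping-on-toes problem. (Older MySQL versions take the master connection details straight from my.cnf like this; newer ones moved that to a CHANGE MASTER TO statement.)

    # my.cnf on the master: give it an ID and turn on the binary log the slave reads
    [mysqld]
    server-id       = 1
    log-bin         = mysql-bin

    # my.cnf on the slave: its own ID, plus where and how to reach the master
    [mysqld]
    server-id       = 2
    master-host     = master.example.com
    master-user     = repl
    master-password = secret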
Does anyone have an idea of how I could offer live (or rsynced within 2 hours) backup/live services for hosting? I would like it so that if one server went down, the other would take over. Is this possible?
I know you can do it to within as little as one minute, depending on the data you're backing up. The setup is quite simple: one machine that holds the master copy of the data, one for the backup server, and a third to see to the switching.

Sysadmin Mag had an article about doing this last year (July 2002): "An Economical Scheme for Quasi Real-Time Backup" by Leo Liberti and Franco Raimondi. I don't think the article can be found on the net, but I happen to have a copy that I can photocopy for you if you'd care to come pick it up. That will take care of keeping the 2 servers sync'ed.

Now you need a fast failover plan. Here's a link to another article that's pretty simple to follow: "Quick Network Redundancy Schemes" by Leo Liberti, http://www.samag.com/documents/s=1152/sam0104a/0104a.htm. Liberti co-authored the first article and also combines the 2 systems, which is pretty simple, actually.

HTH
--
Keith Mastin
BeechTree Information Technology Services Inc.
Toronto, Canada
(416) 696 6070