allportpc said:
Is DNS clustering all I need to do to have all of the files/databases on one server synced up on another server? Reason being, I need to make sure I have a server available if the current one goes down. I have had some outages and need to make sure I can resolve them quickly. I know I will have to have one of the nameservers pointing to the new server and change the timeout period that DNS uses before it tries the second nameserver. Anything else? Is DNS clustering what I need to be using?
rustelekom didn't answer your actual question
the answer is NO. DNS clustering is not ALL you need to do.
DNS clustering will only ensure that you have multiple DNS servers providing the same information.
you'll have to work out another solution to copy the files/databases.
Some of the quicker solutions are to go with MySQL clustering for the DB piece (there's a good howto here: http://www.fedoraforum.org/forum/archive/index.php/t-109733.html, but it doesn't work with the cPanel MySQL, so you'll have to disable cPanel's MySQL and install a standard distribution, or figure out how to make it work with cPanel's build), then do a periodic rsync of the filesystem. The frequency of the rsync determines how much 'data loss' your users can experience; pick it based on the number of files you're syncing, the average rate of change, and the level of protection you want to offer. Web space files probably won't change very often, so daily syncs are probably reasonable, and if the sites are database driven, a MySQL cluster should stay in sync without data loss.
the primary concern will be email files. it may be valuable to sync the /home*/*/mail/.../new/ folders more frequently and sync the rest of the filesystem later. once a user has read a message it moves out of new/ into cur/, so if a read message is lost due to a hardware failure, it's bad, but not tragic. far worse is for never-read mail to vanish from the face of the earth. One other option for this is to modify exim to direct a copy of all mail being dropped into a user's mailbox (post filtering/spam checks/etc.) to a script which immediately replicates it to the other server(s).
then you have the issue of updating the DNS entries when there's a failover: the information in the DNS cluster will still deliver the OLD IP address.
you can do the following to update the entries. note that you also need to bump the serial number of each db file, which this command does not do.
cd /var/named/
perl -pi.bak -e "s/old-ip/new-ip/g" *.db
(you'll need to have a map of the old servers IP addresses to failover server ip addresses)
you then need to update /usr/local/apache/conf/httpd.conf to add the httpd entries for each host; most likely you'll have already configured this. when a new host is added, you should add it to the httpd.conf file on all machines, with appropriate entries for each machine's IP addresses.
it's ok if apache is configured to manage a virtual domain that's actually on another server: requests for that hostname will only go to the failover apache AFTER the DNS entries are updated, and that's done after you're sure the original server is down.
in this case, you can run a load-balanced solution.
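For reference, this is the kind of entry each machine's httpd.conf would carry per hosted domain (the domain, user, and paths here are made up):

```
# hypothetical entry -- one per hosted domain, present on every machine
# in the cluster, DocumentRoot pointing at the rsynced copy of the site
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /home/exampleuser/public_html
</VirtualHost>
```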
So, to summarize, the fast solution for 'fast manual failover' is to get MySQL clustering running and rsync the user files. on failover, update the DNS entries, verify the httpd entries, and restart apache. there's also work that would need to be done for exim to deliver mail, but there are other threads that discuss that; if you've set up your failover as a secondary MX server, mail will get queued but not delivered until you update a few related files.
an ideal solution would be:
1) enable DNS clustering
2) enable MySQL clustering
3) configure exim to replicate a copy of incoming mail to other clustered servers
4) rsync homedirs
5) configure apache on each machine to serve files for all clustered accounts
6) enable round robin DNS
There, you'd have a fully clustered HA system where the only real data loss concern would be for filesystem files which were updated since the last rsync...
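Step 6 is just publishing an A record for each clustered server under the same name. A zone-file fragment might look like this (the IPs are documentation addresses, not real ones):

```
; round robin: resolvers rotate the order of these answers,
; spreading clients across both boxes
www   IN  A   192.0.2.10
www   IN  A   192.0.2.20
```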
unfortunately, doing this within cpanel would be difficult, only because there's not, as far as I know, a way for cpanel to run a "post new account" or "post remove account" script.
you'd need that to have the process fully automated. this script would have to add the httpd entries on remote machines, change the database tables from MyISAM to NDB, add the domain name to the secondarymx file (or, if you're going to allow mail delivery on all clustered machines via a 'replicate incoming mail' mechanism, add the appropriate links in the valiases directories), and do the round robin dns.
what I'm envisioning for myself is to write a cron script which looks at /var/cpanel/users for new entries and uses that information to configure the clustered machines.
in this case, new cpanel users that were auto-added by a self signup would have some lag time before they were made "highly available." that's probably fine, since new signups likely aren't going to have their sites up and running instantly; this could probably be done daily and it wouldn't be a terrible thing.
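That cron script could start as small as this sketch (the state file and the provisioning hook are placeholders I've invented, not cPanel features):

```shell
# Sketch: list cPanel accounts not yet provisioned on the cluster.
# On a real box the first argument would be /var/cpanel/users and the
# second some state file of your choosing.
find_new_accounts() {
    users_dir="$1"; state="$2"
    touch "$state"
    for f in "$users_dir"/*; do
        [ -f "$f" ] || continue
        u=$(basename "$f")
        if ! grep -qx "$u" "$state"; then
            echo "$u"            # a real script would provision here
            echo "$u" >> "$state"
        fi
    done
}
# cron would run e.g.: find_new_accounts /var/cpanel/users /var/run/known-users
```

Each run prints only accounts it hasn't seen before, so the provisioning steps (httpd entries, DNS, rsync includes) fire once per new account.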