The Community Forums


getting crazy with backups

Discussion in 'General Discussion' started by alexd, Oct 30, 2005.

  1. alexd

    alexd Well-Known Member
    PartnerNOC

    Joined:
    Dec 30, 2003
    Messages:
    46
    Likes Received:
    0
    Trophy Points:
    6
    All of my cPanel servers get high load and heavy CPU usage when backing up hosting accounts... I have checked memory, HDs, updated WHM, etc...

    But most nights when the backup is running, the load average goes up to 30 or 40! As a test I killed the running "pkgacct" process and the server dropped back to a normal load average of 1 or 2.

    All my accounts have normal files, nothing bigger than 10 MB, and the final tar.gz per account averages about 200-300 MB... normal!

    So I think the problem is in "pkgacct". I don't know how you guys normally do your backups. For me it's a crazy task that I have to sit up and monitor many nights :P

    Thank you for any help
     
  2. dave9000

    dave9000 Well-Known Member

    Joined:
    Apr 7, 2003
    Messages:
    891
    Likes Received:
    1
    Trophy Points:
    16
    Location:
    arkansas
    cPanel Access Level:
    Root Administrator
    What method are you using to store the backup files?

    2nd drive, FTP, NFS?
     
  3. alexd

    alexd Well-Known Member
    PartnerNOC

    Joined:
    Dec 30, 2003
    Messages:
    46
    Likes Received:
    0
    Trophy Points:
    6
    In some cases the same drive, at /backup...
    In some cases a 2nd HD...

    It happens in both cases. Something is wrong somewhere... but what worries me is that it happens on all the servers :(
     
  4. alexd

    alexd Well-Known Member
    PartnerNOC

    Joined:
    Dec 30, 2003
    Messages:
    46
    Likes Received:
    0
    Trophy Points:
    6
    Updated info:

    I have checked the "Big Sister" load average alarms (I'm silly :P).
    OK... the "pkgacct" script is ALWAYS the one running and eating CPU whenever the load goes beyond 2.0.

    OK... pkgacct is the problem...

    How can I fix it? :P How can I find out what exactly is misbehaving in there?

    I still think it's gzip...
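
    I'll try to confirm that while the backup is running with something like this (just a rough check, nothing cPanel-specific):

        # List the top CPU consumers; if gzip sits at the top,
        # compression is the bottleneck.
        ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -10

        # High load with little CPU points at disk IO instead; watch
        # the "wa" (iowait) and "b" (blocked) columns here.
        vmstat 5 5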
     
  5. jeroman8

    jeroman8 Well-Known Member

    Joined:
    Mar 14, 2003
    Messages:
    410
    Likes Received:
    0
    Trophy Points:
    16
    It is tar and gzip causing high IOwait, so the server load gets really high when backing up pretty big accounts. Also, if there are big databases, they will take their time and the load will go up.

    I also get very high load when pkgacct is running. I thought you could control when the backup should pause because of load, but that setting seems to apply only to load from the rest of the system, not the load the backup itself is causing. Or maybe it just has to finish whatever it's doing at the moment and then waits until the load goes down.

    Anyway - it's a huge issue for me, my servers, and my clients.
    I really need FTP backup because I'm running RAID and can't do incremental backups.
    I wonder if it's possible to do this differently at all; my guess is not.

    But how does the majority handle this? It must be a problem many of us face.
    My backup starts Friday at 10 PM and can go on until late Saturday night.
    Maybe 30 GB of files and a few gigs of MySQL.
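
    One thing I might try in the meantime to at least soften the impact (a rough sketch, not a cPanel option): drop the CPU priority of the backup's worker processes while they run.

        # Push any running tar/gzip workers to the lowest CPU priority
        # so web and mail stay responsive while the backup grinds on.
        for pid in $(pgrep -x tar; pgrep -x gzip); do
            renice 19 -p "$pid"
        done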
     
  6. kernow

    kernow Well-Known Member

    Joined:
    Jul 23, 2004
    Messages:
    865
    Likes Received:
    9
    Trophy Points:
    18
    cPanel Access Level:
    Root Administrator
    Yes, I think the main cause is gzip rather than tar. We have dual CPUs, but it still takes a couple of hours to run.
    I guess another way would be to get a huge backup drive (300 GB??); then you could disable compression for backups.
     
  7. chirpy

    chirpy Well-Known Member

    Joined:
    Jun 15, 2002
    Messages:
    13,475
    Likes Received:
    20
    Trophy Points:
    38
    Location:
    Go on, have a guess
    Yup, that's the only way you're going to relieve the system load of backups. They're intensive by their very nature and will consume resources up to the tightest bottleneck, be that IO or CPU. Using incrementals, or modifying the backup script to tarball without compressing, are the two main ways of reducing the CPU overhead of gzipping - that, and throwing hardware at it. If CPU load is low and IOWAIT is high, a faster disk subsystem would also help.
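
    You can see where the time goes by comparing the two on a sample account (a rough illustration - the paths are just examples):

        # tar alone vs. tar piped through gzip: the difference in
        # wall-clock and CPU time is the compression overhead.
        time tar -cf  /backup/sample.tar    /home/sampleuser
        time tar -czf /backup/sample.tar.gz /home/sampleuser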
     
  8. jeroman8

    jeroman8 Well-Known Member

    Joined:
    Mar 14, 2003
    Messages:
    410
    Likes Received:
    0
    Trophy Points:
    16
    Can you add a third disk as a backup drive on a RAID system?

    Taking gzip out of the backup script would be really nice.
    Also, just backing up /var/lib/mysql instead of doing a dump would probably reduce
    the load from huge databases.

    I think I will start looking at the script and try to remove the compression.
    Maybe I'll run a simple script to back up /var/lib/mysql separately and
    remove MySQL from the regular backup - something like the sketch below.

    hm..
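
    (Just a sketch of what I mean - the raw files have to be copied with mysqld stopped, or at least all tables locked, or the copy can come out inconsistent. The init script path varies by distro.)

        # Copy the raw MySQL data files instead of dumping.
        # mysqld is stopped first so the files are consistent.
        /etc/init.d/mysql stop
        cp -a /var/lib/mysql "/backup/mysql-$(date +%F)"
        /etc/init.d/mysql start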
     
  9. mike25

    mike25 Well-Known Member

    Joined:
    Aug 29, 2003
    Messages:
    83
    Likes Received:
    0
    Trophy Points:
    6
    Location:
    Raleigh NC, USA

    Very interesting idea. I have tons of HD space on my backup drive, and even with a dual Opteron machine backups take 16 hours to complete - even after disabling the largest accounts on the server. Any hints on how I could disable compression in the backup script? I had a look through /scripts/cpbackup, but I am not the best shell coder, so it was not obvious where I could make the changes. Maybe I should just be using the incremental backup method.
     
  10. Finley Ave

    Finley Ave Active Member

    Joined:
    Feb 28, 2004
    Messages:
    37
    Likes Received:
    0
    Trophy Points:
    6
    Location:
    San Ramon, CA
    That's what I do. My backups have gone from 2 hours to 10 minutes. I can't see any reason to do compressed backups as long as you have the space for uncompressed ones. It just seems stupid to gzip the same files every backup even when they haven't changed.
     
  11. mike25

    mike25 Well-Known Member

    Joined:
    Aug 29, 2003
    Messages:
    83
    Likes Received:
    0
    Trophy Points:
    6
    Location:
    Raleigh NC, USA

    Yeah, I ran my first incremental backup last night; it took about two-thirds as much time, and now that the incremental baseline is in place I'm hoping for faster runs in the future. To deal with old files lingering in the backup, I think I will set cPanel to run only daily backups, then ncftp my daily directory to an offsite FTP server to create a weekly backup, as I did with the compressed backups. On top of that, I will completely remove the incremental backups on the server once a month to refresh them, so I don't accumulate a lot of stale files. I wish I had thought of this sooner. :)
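
    The offsite push would be something along these lines (a sketch - host, user, and paths are placeholders):

        # Recursively upload the daily backup directory to the
        # offsite FTP box as this week's copy.
        ncftpput -R -u backupuser -p 'secret' ftp.example.com \
            /weekly /backup/cpbackup/daily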
     
  12. jeroman8

    jeroman8 Well-Known Member

    Joined:
    Mar 14, 2003
    Messages:
    410
    Likes Received:
    0
    Trophy Points:
    16
    Since incremental backups are not possible with FTP backup, I wonder if the following would be possible; it should at least reduce the load a little, as it compresses less.

    In cpbackup it says: cpusystem("gzip","-f"

    What if I change this to: cpusystem("gzip","-f1"

    -1 is the fastest compression
    -9 is the best compression

    So I guess when there's no -1 or -9 it uses gzip's default level, which sits in between (-6). Adding -1 should speed up the compression and reduce load.

    I will check how much faster it is, but if anyone knows whether modifying the cpbackup script this way will work, please let me know, so I don't break anything.
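
    I'll measure the difference on an existing tarball first, something like this (the filename is just an example):

        # Compare fastest vs. best compression on the same input.
        time gzip -1 -c /backup/sample.tar > /tmp/sample.1.tar.gz
        time gzip -9 -c /backup/sample.tar > /tmp/sample.9.tar.gz

        # Then weigh the resulting sizes against the time saved.
        ls -l /tmp/sample.?.tar.gz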
     
  13. Drew Nichols

    Drew Nichols Well-Known Member

    Joined:
    May 5, 2003
    Messages:
    96
    Likes Received:
    0
    Trophy Points:
    6
    Location:
    SC
    Like a lot of people on this forum, we were having load issues. We set up NFS to another server in a nearby cabinet (10 Mbit link between them - if I put a 100 Mbit switch between them instead of our old 10 Mbit core, performance would obviously be much better)...

    We are now doing incremental, non-compressed backups to the NFS mount. It puts virtually NO load on the system. I'm running the backups at the moment and the load is under 1. Try that with compression!

    It's also much easier to pull a single file now - we're spared the decompression, which on a big site takes a LONG time. :)
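
    The setup is roughly this (a sketch - the hostname and IP are placeholders):

        # On the storage box: export /backup in /etc/exports, e.g.
        #   /backup 192.168.0.10(rw,sync,no_root_squash)
        # then reload the export table:
        exportfs -ra

        # On the cPanel server: mount it where cpbackup writes.
        mount -t nfs storagebox:/backup /backup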

    Drew
     
  14. jackie46

    jackie46 BANNED

    Joined:
    Jul 25, 2005
    Messages:
    537
    Likes Received:
    0
    Trophy Points:
    0
    On our new dual Opterons, backing up 247 websites at 1 AM is not a problem; the load is barely 0.30. Granted, we have dual SCSI drives too, but even on our P4 2.6s it's never higher than 1.2.
     
  15. Drew Nichols

    Drew Nichols Well-Known Member

    Joined:
    May 5, 2003
    Messages:
    96
    Likes Received:
    0
    Trophy Points:
    6
    Location:
    SC
    Is the dual Opteron as good as they claim?
     