
How can I reduce size of staging backup drive

Discussion in 'Data Protection' started by jimhermann, Oct 20, 2017.

  1. jimhermann

    jimhermann Well-Known Member

    Joined:
    Jan 20, 2008
    Messages:
    62
    Likes Received:
    2
    Trophy Points:
    58
    cPanel Community,

    I am transporting my backup files to Amazon S3 for storage.

    How can I reduce the size of my backup drive? It is empty most of the time. My backups are 330 GB and my backup drive is 600 GB. This mostly unused drive is costing me about $60 per month.

    Thanks,

    Jim
     
  2. Infopro

    Infopro cPanel Sr. Product Evangelist
    Staff Member

    Joined:
    May 20, 2003
    Messages:
    16,309
    Likes Received:
    393
    Trophy Points:
    583
    Location:
    Pennsylvania
    cPanel Access Level:
    Root Administrator
    Sounds like a question better asked over there, don't you think?
     
  3. jimhermann

    jimhermann Well-Known Member

    Joined:
    Jan 20, 2008
    Messages:
    62
    Likes Received:
    2
    Trophy Points:
    58
    I was hoping we could figure out a way to write directly to Amazon S3, to store and transfer backups in smaller chunks, or transfer backups as they are being made, rather than later.

    Any ideas?

    Thanks,

    Jim
     
  4. Infopro

    Infopro cPanel Sr. Product Evangelist
    Staff Member

    Joined:
    May 20, 2003
    Messages:
    16,309
    Likes Received:
    393
    Trophy Points:
    583
    Location:
    Pennsylvania
    cPanel Access Level:
    Root Administrator
    There are many threads here asking the same thing. Unfortunately, it's not possible to transfer directly to the remote destination; the backup archive needs to be created on the local drive first and then moved.
     
  5. jimhermann

    jimhermann Well-Known Member

    Joined:
    Jan 20, 2008
    Messages:
    62
    Likes Received:
    2
    Trophy Points:
    58
    Infopro,

    What if I created a cron job that runs every 5 minutes and sweeps the backup files off to Amazon S3 storage?

    Like this:

    aws s3 mv /backup s3://<bucket-name> --recursive

    or just the tar balls:

    for i in /backup/*/*tar; do aws s3 mv "$i" s3://<bucket-name>; done
    for i in /backup/*/accounts/*tar; do aws s3 mv "$i" s3://<bucket-name>; done
    for i in /backup/weekly/*/*tar; do aws s3 mv "$i" s3://<bucket-name>; done
    for i in /backup/weekly/*/accounts/*tar; do aws s3 mv "$i" s3://<bucket-name>; done
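
    The cron entry itself might look something like this (assuming the commands above are saved in a script; the path /root/s3-sweep.sh and the log file are just placeholders):

    # run the sweep script every 5 minutes and log its output
    */5 * * * * /root/s3-sweep.sh >> /var/log/s3-sweep.log 2>&1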

    Thanks,

    Jim
     
  6. cPanelMichael

    cPanelMichael Forums Analyst
    Staff Member

    Joined:
    Apr 11, 2011
    Messages:
    44,344
    Likes Received:
    1,852
    Trophy Points:
    363
    cPanel Access Level:
    Root Administrator
    Hi Jim,

    This already occurs to some extent. The backup archive is queued for transport to the remote destination once it's packaged into a .tar.gz file, and removed once the transfer is complete. However, you may notice that more than one archive exists on the local server during the backup process because only one archive is transferred at a time.

    The custom script you referenced would lead to errors because it could attempt to transfer an incomplete archive. The best approach to conserving disk space on the local system is to use incremental backups. Remote incremental backups are currently only supported with the "rsync" destination type; however, we have a feature request open that you can vote for at:

    Incremental backup support for the Amazon S3 remote destination type
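
    If you do experiment with a manual sweep in the meantime, one way to reduce the risk of moving an archive that is still being written is to only touch files that haven't changed for a while. A rough sketch only, not anything cPanel's transport does itself (the 10-minute quiet window and the *.tar* name pattern are assumptions):

    # move tarballs that have not been modified in the last 10 minutes
    find /backup -type f -name '*.tar*' -mmin +10 | while read -r f; do
        aws s3 mv "$f" s3://<bucket-name>/
    done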

    Thank you.
     
  7. jimhermann

    jimhermann Well-Known Member

    Joined:
    Jan 20, 2008
    Messages:
    62
    Likes Received:
    2
    Trophy Points:
    58
    I wasn't using compressed backups, which meant that my backups completed in less time (3.5 hours) but that my backup files were larger (300 GB), which caused the cpbackup_transport process to take longer (9 hours).

    I switched to compressed backups and the cpbackup_transport process was able to keep up with the backup process. The file size dropped to 220 GB and both processes ended after 8 hours.

    I dropped my backup drive size to 300 GB.

    Thanks,

    Jim
     
    cPanelMichael likes this.