How can I reduce the size of my staging backup drive?

jimhermann

Well-Known Member
Jan 20, 2008
cPanel Community,

I am transporting my backup files to Amazon S3 for storage.

How can I reduce the size of my backup drive? It is empty most of the time. My backups are 330 GB and my backup drive is 600 GB. This mostly unused drive is costing me about $60 per month.

Thanks,

Jim
 

jimhermann

Well-Known Member
Jan 20, 2008
Infopro said:
Sounds like a question better asked over there, don't you think?
I was hoping we could figure out a way to write directly to Amazon S3, to store and transfer backups in smaller chunks, or transfer backups as they are being made, rather than later.
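For example, something like this could stream an account's tarball straight to S3 without it ever landing on the backup drive (an untested sketch; "someuser" is just a placeholder, and the aws CLI accepts "-" to read from stdin and handles the multipart upload itself):

tar czf - /home/someuser | aws s3 cp - s3://<bucket-name>/someuser.tar.gz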

Any ideas?

Thanks,

Jim
 

jimhermann

Well-Known Member
Jan 20, 2008
Infopro,

What if I created a cron job that runs every five minutes and sweeps the backup files over to Amazon S3 storage?

Like this:

aws s3 mv /backup s3://<bucket-name> --recursive

or just the tarballs:

for i in /backup/*/*.tar; do aws s3 mv "$i" s3://<bucket-name>; done
for i in /backup/*/accounts/*.tar; do aws s3 mv "$i" s3://<bucket-name>; done
for i in /backup/weekly/*/*.tar; do aws s3 mv "$i" s3://<bucket-name>; done
for i in /backup/weekly/*/accounts/*.tar; do aws s3 mv "$i" s3://<bucket-name>; done
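
The crontab entry for the every-five-minute sweep might look something like this (assuming the loops above are saved to a script; /root/s3-sweep.sh is just a placeholder path):

*/5 * * * * /root/s3-sweep.sh >/dev/null 2>&1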

Thanks,

Jim
 

cPanelMichael

Administrator
Staff member
Apr 11, 2011
jimhermann said:
I was hoping we could figure out a way to write directly to Amazon S3, to store and transfer backups in smaller chunks, or transfer backups as they are being made, rather than later.
Hi Jim,

This already occurs to some extent. The backup archive is queued for transport to the remote destination once it's packaged into a .tar.gz file, and is removed from the local disk once the transfer completes. However, you may notice more than one archive on the local server during the backup process because only one archive is transferred at a time.

The custom script you referenced would lead to errors because it could attempt to transfer an incomplete archive. The best approach to conserving disk space on the local system is to use incremental backups. Remote incremental backups are currently only supported with the "rsync" destination type; however, we have a feature request open that you can vote for at:

Incremental backup support for the Amazon S3 remote destination type
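
In the meantime, if you do experiment with a sweep script, a guard along these lines could at least skip archives that are still being written (a rough sketch only; the 10-minute age threshold is an arbitrary assumption):

for i in /backup/*/*.tar; do
  # find prints the path only if the file is older than 10 minutes
  [ -n "$(find "$i" -mmin +10)" ] || continue
  aws s3 mv "$i" s3://<bucket-name>
done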

Thank you.
 

jimhermann

Well-Known Member
Jan 20, 2008
cPanelMichael said:
This already occurs to some extent. The backup archive is queued for transport to the remote destination once it's packaged into a .tar.gz file, and is removed from the local disk once the transfer completes. However, you may notice more than one archive on the local server during the backup process because only one archive is transferred at a time.
I wasn't using compressed backups, which meant that my backups completed in less time (3.5 hours) but that my backup files were larger (300 GB), which caused the cpbackup_transport process to take longer (9 hours).

I switched to compressed backups, and the cpbackup_transport process was able to keep up with the backup process. The file size dropped to 220 GB, and both processes finished after 8 hours.
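
For anyone else trying this: I changed the setting in WHM's Backup Configuration screen, though if I remember right it can also be flipped from the command line with WHM API 1 (double-check the parameter name against the docs for your cPanel version):

whmapi1 backup_config_set backuptype=compressed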

I dropped my backup drive size to 300 GB.

Thanks,

Jim
 