New backup system and rsync

kernow
Well-Known Member · Joined: Jul 23, 2004 · cPanel Access Level: Root Administrator
The new backup system in 11.38 creates a directory with today's date as its name. That makes it difficult to rsync the daily/weekly backup to a remote server: because the date is used for the directory name, the name changes on every run, so the entire backup directory would get recreated on the remote side each time. The idea is to sync with the remote backups, not recreate them.
Any ideas how to get around this?
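To illustrate, the backup root ends up looking something like this (dates and layout are from memory, so treat them as illustrative):

/backup/2013-04-01/accounts/...
/backup/2013-04-02/accounts/...

so pointing rsync at /backup/ pushes a brand-new dated directory to the remote side every day instead of updating yesterday's copy.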
 

kernow
Well-Known Member · Joined: Jul 23, 2004 · cPanel Access Level: Root Administrator
I have answered my own question, but I can't delete my own post! The answer, of course, is to run rsync with a wildcard in place of the changing date, something like:
# rsync -au [options] /backup/2013*/ /remote_backup_server/
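For clarity, with today's date the wildcard expands to something like (date hypothetical):

# rsync -au [options] /backup/2013-04-02/ /remote_backup_server/

and the trailing slash makes rsync copy the directory's contents, so the remote tree stays in one stable place rather than growing a new dated folder each day.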
 

JamesOakley
Well-Known Member · Joined: Apr 15, 2011 · cPanel Access Level: Root Administrator
Or you could use something like this:

rsync -au /backup/`date +%Y-%m-%d`/* user@remotehost:/remote_backup/
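If you want that to run unattended, a minimal cron sketch might look like this (the 06:30 schedule, the /etc/cron.d style with a user field, and the destination are all assumptions; note that % has to be escaped as \% inside crontabs):

30 6 * * * root rsync -au /backup/$(date +\%Y-\%m-\%d)/* user@remotehost:/remote_backup/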

What I've found, though, is that the new backup system seems to zip the accounts differently, so that using --stats on rsync shows speedup to be very nearly 1. It's basically having to do a full copy each time, which is heavier on bandwidth.

I'm now trying to work out whether it's the new backup system or pigz. Before, I got a speed-up of 2-3.
 

kernow
Well-Known Member · Joined: Jul 23, 2004 · cPanel Access Level: Root Administrator
Thanks for that, JamesOakley :)
cPanel's default pigz compression level is 6, and that setting actually increased the time our backups took to complete on 8-core servers. We lowered it to 4, which sped them back up (WHM >> Tweak Settings >> Compression).
Haven't noticed any delay in the rsync backups from the local to the remote server, though.
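If anyone wants to reproduce the timing difference outside WHM, a rough sketch (the tarball name is made up) is just to time pigz at both levels on the same input:

time pigz -6 -c some_account.tar > /dev/null
time pigz -4 -c some_account.tar > /dev/null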
 

JamesOakley
Well-Known Member · Joined: Apr 15, 2011 · cPanel Access Level: Root Administrator
Thanks kernow

I'm sure you know this already: if you put --stats on your rsync command, it prints a summary at the end showing the total size of the folder being synced, the total size of the files that changed, and the total number of bytes that needed transferring. The difference between the last two is the number of bytes that were unchanged within the changed files, so those chunks did not need transferring.

The ratio of total file size to transferred data then gives a "speed-up" figure: 1 would indicate that 100% of the data needed transferring. On the legacy backup system, you'd get a speed-up of at least 3 on most days when syncing the whole cpbackup folder, because the weekly and monthly directories would be unchanged. In practice I got much higher.
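To make the arithmetic concrete with hypothetical numbers: if the synced folder totals 30 GB and only 10 GB of literal data had to be sent, the stats report a speed-up of 3.0; at a speed-up of 1.02, essentially every byte crossed the wire.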

Since syncing the new backup system's files, I've been getting values like 1.02. It's only higher than 1 because I've included a few tarred configuration files of my own, and they won't have changed.

I'll experiment with the amount of compression, as you suggest. It may well be that a higher pigz level produces a compressed file that cannot be treated sequentially, which would have this effect.
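One rough way to test that hypothesis (filenames here are made up): compress the same tarball twice with one small change in between, and see how far into the two files the first differing byte falls:

pigz -n -6 -c account.tar > a.tar.gz
# change one file inside the account, recreate account.tar, then:
pigz -n -6 -c account.tar > b.tar.gz
cmp a.tar.gz b.tar.gz

The -n switch stops pigz storing the input name and timestamp in the gzip header, so those don't skew the comparison. If cmp reports the first difference near where the edit sits in the tarball, rsync's delta algorithm has a chance to resync; if the files diverge almost immediately, every block from that point on has to be transferred.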
 

JamesOakley
Well-Known Member · Joined: Apr 15, 2011 · cPanel Access Level: Root Administrator
OK, another night, another backup log to look at.

It seems that both the legacy and the new backup systems use pigz. So in dropping from level 6 to level 4, both systems had the same benefit.

However, whereas the legacy backup gave me a speed-up factor of 7.7, I only got 1.01 again on the new backup system.
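To put those numbers side by side: a speed-up of 7.7 means only about 13% of the total data actually crossed the wire, whereas 1.01 means effectively all of it did.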

It seems that it's the way the new backup system works, rather than pigz itself, that prevents rsync from only transferring parts of a file.

Shame - that's one drawback of the new system.
 

kernow
Well-Known Member · Joined: Jul 23, 2004 · cPanel Access Level: Root Administrator
We got a disappointing speed-up of 1.04 on the latest rsync from the local to the remote backup, but that was after changing the compression level from 6 to 4, which would have changed the file sizes significantly, so we will run it again with the --stats switch to see whether the time/resources improve. However, as we mentioned before, the time taken to do the local-to-remote backup hasn't really changed that much.
For us, what we would like to see is a reduction in the time, and more importantly the resources, that the new backup script takes to complete. Lowering the compression level to 4 has achieved that somewhat, though more testing is needed to find the optimum.
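For the record, the re-run is just the earlier command with the extra switch:

# rsync -au --stats [options] /backup/2013*/ /remote_backup_server/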