SOLVED Tweaking backup transport to be more aggressive?

Optimizr

Active Member
Aug 8, 2020
36
6
8
Yangon
cPanel Access Level
Root Administrator
I am using the same company for both the server and the backup storage, and they are in the same region, so data transport between the server and the backup storage can be very fast. When I tested transferring an archive file with rclone, the speed was up to 500 Mbps, but with cpbackup_transport it is only around 50 Mbps. Relatively super slow, I would say. This led to the following issue.

Since I have
  • more than enough RAM & CPU
  • speed-optimized backup tweaking
  • file and directory exclusions for backup
the backup process is very fast. But since the transport is slow, archive files build up quickly during the backup window. Therefore, I have to keep an additional 200-300 GB of free space at all times that can't be used for actual server data, only for temporary backup archives. I feel like I am wasting money on this extra free space.

Is there any option to tweak the transport to be faster, the way the backup itself can be tweaked?
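For a sense of what that speed gap means in practice, here is a rough back-of-the-envelope estimate as a shell helper (a hypothetical snippet for illustration, not a cPanel tool; sizes in GB, speeds in Mbps, result truncated to whole hours):

```shell
#!/bin/sh
# transfer_hours SIZE_GB SPEED_MBPS
# GB -> gigabits (*8) -> megabits (*1000), divided by Mbps gives seconds,
# divided by 3600 gives (integer) hours.
transfer_hours() {
    size_gb=$1
    speed_mbps=$2
    echo $(( size_gb * 8 * 1000 / speed_mbps / 3600 ))
}

transfer_hours 300 50    # prints 13 -- a 300 GB run at cpbackup_transport speed
transfer_hours 300 500   # prints 1  -- the same data at the rclone speed
```

The same 300 GB that occupies the temporary buffer for half a day at 50 Mbps would clear in about an hour at 500 Mbps, which is why the archives pile up.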
 

cPanelLauren

Product Owner
Staff member
Nov 14, 2017
13,296
1,271
313
Houston
Because of the way the cPanel backup process transports files, it wouldn't be possible to perform a direct transport (in an effort to conserve space). There are a few feature requests for this, but ultimately I think what you'd want to look at is something like JetBackup or another solution that backs up directly to a remote destination. As for the length of time the transport takes, the protocol you're using plays a huge part in this, FTP being one of the slower ones. When you configured your backup destination, what did you use?
 
  • Like
Reactions: Optimizr

Optimizr

When you configured your backup destination, what did you use?
I am using the S3-compatible transport.

you'd want to look at is something like JetBackup
Say no more! I have bought it and am giving it a shot. But I am a little disappointed, since it doesn't support my current backup destination. Now that the server and the backup destination are no longer with the same company, and the regions are different too, it will take a long time to back up the whole 1 TB server. But I think I will be happy as long as the backup takes less than 12 hours and does not hurt my server's performance. I don't think it will.
 

Optimizr

This is not working! Even worse! I don't think it is entirely JetBackup's fault, but the upload is too slow (only around 10 Mbps). My server is in Singapore and the Backblaze B2 bucket is in EU Central. My backups are not even finishing; according to my calculation, they are going to take more than 60 hours. I have some questions.

1. How can I cancel the JetBackup license purchase through the WHM interface and request a refund, if possible?
2. I am thinking of writing a bash script that loops through all users, runs /usr/local/cpanel/scripts/pkgacct for each account, and then transfers each archive with rclone. Would that be a good idea? Any downsides or taboos?
 

cPanelLauren

This is not working! Even worse! I don't think it is entirely JetBackup's fault, but the upload is too slow (only around 10 Mbps). My server is in Singapore and the Backblaze B2 bucket is in EU Central. My backups are not even finishing; according to my calculation, they are going to take more than 60 hours. I have some questions.
This is literally the first time I've heard anyone complain about JetBackup being slow. I'm sorry to hear that it isn't working for you.


1. How can I cancel the JetBackup license purchase through the WHM interface and request a refund, if possible?
I'd suggest you contact [email protected] for this, as they'd be able to tell you what your options are.
2. I am thinking of writing a bash script that loops through all users, runs /usr/local/cpanel/scripts/pkgacct for each account, and then transfers each archive with rclone. Would that be a good idea? Any downsides or taboos?
I'm certain there are others who have performed similar tasks. The rclone project itself has third-party integrations that include scripts (not specific to cPanel) that may be helpful. I've also made a backup script tutorial for rclone:


There are a few forum threads as well, this one being the most notable:

 

Optimizr

This is literally the first time I've heard anyone complain about JetBackup being slow
I didn't mean it is entirely JetBackup's fault, only partially. Since it doesn't support S3-compatible destinations, I was forced to choose one from the list that is far from my server. That's why the transport was too slow and wasn't working out for me.

Thank you for the "Backup with rclone" link, but I wrote and used that kind of script back when I was running incremental backups. Now I am optimizing my budget, so the backup files can only be kept on remote storage, and thus the backup type has changed to "compressed".

For now, I have tested with the following code:

Code:
#!/bin/bash

bk_dir="/mnt/backup"

# One account name per line in users_to_backup.txt. pkgacct writes
# cpmove-<user>.tar.gz into $bk_dir; && ensures rclone only runs if
# the backup succeeded.
while IFS= read -r user; do
    /usr/local/cpanel/scripts/pkgacct --skipbwdata --skiplocale --skiplogs --skipmailman "$user" "$bk_dir" \
        && /usr/bin/rclone moveto "$bk_dir/cpmove-$user.tar.gz" "my-rclone-s3-remote:backup/daily/accounts/cpmove-$user.tar.gz"
done < "users_to_backup.txt"
Of course, the hardcoded user list input will be replaced with an actual loop. In this trial the transfer speed was great, so it would take only about 10 hours for a 1 TB server. I will run it against all accounts and post the results back here.
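As a sketch of generating that list instead of hardcoding it: on a cPanel server, /var/cpanel/users holds one file per account, named after the user. Below, a temporary directory with made-up accounts (alice, bob) stands in for /var/cpanel/users so the snippet is self-contained:

```shell
#!/bin/sh
# Stand-in for /var/cpanel/users: on a real server you would list that
# directory instead of this temporary one.
users_dir=$(mktemp -d)
touch "$users_dir/alice" "$users_dir/bob"

# One account name per line, consumed by the same while-read loop as the
# backup script uses.
ls -A "$users_dir" > users_to_backup.txt
while IFS= read -r user; do
    echo "would back up: $user"
done < users_to_backup.txt
```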
 
  • Like
Reactions: cPanelLauren

Optimizr

It works! The total duration for a 900 GB server is 13 hours. I will try to optimize it further and make it faster. The script solved the following problems:
  1. Slow file transport.
  2. (Archive files building up during the backup and thus) needing a lot of unusable storage space (around 400 GB in my case).
  3. Now I only need extra free space equal to the largest account in my user list, regardless of total server usage (i.e., if the largest user's usage is 100 GB, I only need 100 GB of free space for the backup to run smoothly), because the jobs wait for each other: a transport only starts when its backup has completed successfully, and the next backup waits until the transfer of the previous one has finished, and so on.
  4. In other words, the steps block each other. Therefore, your backup settings must be optimized to be fast, and your transport must be fast too, to reduce the total backup duration. My transport is fast because I am buying both the server and the remote backup storage from the same company and both are in the same region, so the transfer speed was up to 350 Mbps.
  5. I use these settings to optimize the speed of the backup process. They can be tuned at WHM » Server Configuration » Tweak Settings:
    1. Number of pigz processes » 8 (since my server has 8 CPU cores)
    2. Compression Level » 9 (maximum compression, so smaller archives make the transport faster)
    3. Extra CPU » 7 (the original 8 cores + 7 extra = 15 cores, which is a safe amount to increase for my server)
  6. Furthermore, I use WHM's backup to back up the system and let it transport the system backup to the remote, so that the directory structure my script needs is created on the remote and the backup rotation settings still apply. I then disabled "Back up user accounts" and use the following script to back up the user accounts:
Bash:
#!/bin/bash

home_dir="/mnt/home2"
bk_date=$(date +%F)

# Ways to get the user list:

# list the home dir; the directory names are the same as the user names
# users=$(ls "$home_dir")

# list cPanel's user files; the file names are the user names
users=$(ls /var/cpanel/users/)

for user in $users
do
    # Use cPanel's pkgacct script to create the backup, then transport it with
    # rclone. && ensures each step runs only if the previous one succeeded.
    /usr/local/cpanel/scripts/pkgacct --skipbwdata --skiplocale --skiplogs --skipmailman "$user" "$home_dir" \
        && /usr/bin/rclone moveto "$home_dir/cpmove-$user.tar.gz" "your-remote-storage:backup/$bk_date/accounts/cpmove-$user.tar.gz" \
        && echo "[$(date "+%F %r")] cpmove-$user.tar.gz transported"
done
To be able to use something like your-remote-storage:, you must first configure a remote with rclone. Read more here » rclone config
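On the "optimize more" front, one idea is to pipeline the loop: upload user N's archive in the background while user N+1 is being packaged. The sketch below uses stub functions (backup/transfer stand in for the pkgacct and rclone commands) so the control flow is visible; note this needs free space for two archives at once instead of one:

```shell
#!/bin/sh
# Pipelined variant: the transfer of user N runs in the background while
# user N+1 is being packaged. backup/transfer are echo stand-ins for
# pkgacct and rclone moveto.
log=$(mktemp)
backup()   { echo "packed $1" >> "$log"; }
transfer() { echo "moved $1"  >> "$log"; }

xfer_pid=""
for user in alice bob; do
    backup "$user"                           # package the current account
    [ -n "$xfer_pid" ] && wait "$xfer_pid"   # let the previous upload finish
    transfer "$user" &                       # upload in the background
    xfer_pid=$!
done
wait "$xfer_pid"
```

In a real script each stand-in would be the corresponding pkgacct/rclone command. Whether this actually shortens the total duration depends on whether packaging and uploading end up contending for the same disk.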

Just sharing as I promised!
 
  • Like
Reactions: cPanelLauren

JetAppsClark

Registered
Aug 16, 2017
3
2
3
California
cPanel Access Level
Root Administrator
I didn't mean it is entirely JetBackup's fault, only partially. Since it doesn't support S3-compatible destinations, I was forced to choose one from the list that is far from my server. That's why the transport was too slow and wasn't working out for me.
Hello Optimizr,

Thank you for trying out JetBackup! We regret to hear about your difficulty finding a suitable alternative among the currently supported destinations in JetBackup. FYI, the fastest backup destination will be either local storage (provided it's on a separate device) or a remote SSH destination (provided it has high bandwidth and low latency).

With that said, I'd also like to add that we are close to releasing incremental backup support for custom and more common S3-compatible storage destinations such as Wasabi, Backblaze, and DO Spaces on the EDGE tier of JetBackup 5. Please stay tuned to our social media channels for the latest updates if you're interested in trying it out.

Best Regards,
JetApps Team
 
  • Like
Reactions: cPanelLauren