Tamer Fahmy

Member
Jan 6, 2017
Cairo, Egypt
cPanel Access Level: DataCenter Provider
Hello,

We need to find a good solution for the load that occurs when the server runs its backups.

It looks like we need a way to limit the disk I/O speed of the backup process.

We are using CentOS with CloudLinux 7.5 and cPanel v72.0.10. Most of the problem happens while the server runs the backup process; we always see load from this command:
/usr/local/cpanel/3rdparty/bin/pigz -4 --processes 1 --blocksize 4096 --rsyncable

even though we have already reduced the compression level from 6 (the default) to 4 (faster),

and we have changed the Tweak Settings option "Extra CPUs for server load" from 0 to 4 (the server has 8 cores).
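
For example, something like the following is the kind of limitation we mean (a rough, untested sketch using generic Linux tools, pgrep, renice, and ionice, rather than anything cPanel provides; the ionice class only takes effect with the CFQ/BFQ I/O schedulers):
Code:
# hedged sketch: lower the CPU and disk priority of running pigz backup workers (run as root)
for pid in $(pgrep -f '3rdparty/bin/pigz'); do
    renice -n 19 -p "$pid"        # lowest CPU priority
    ionice -c 2 -n 7 -p "$pid"    # best-effort class, lowest I/O priority
done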

I hope we can find a final solution for this.

Regards
 

cPanelLauren

Product Owner II
Staff member
Nov 14, 2017
Houston

Tamer Fahmy

Member
Jan 6, 2017
Cairo, Egypt
cPanel Access Level: DataCenter Provider
Hello Lauren,

Thank you for your reply, but the point of this discussion is to find a solution for the problem itself, not to skip files from the backup.

I know that skipping files will make the backup faster, but it still causes the same problem.

We need a way to limit certain root processes, such as the pigz command.

Also, why does a backup made by a client run as a root process? I think it should run under the client's LVE limits.
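
For example, something along these lines is what we have in mind (a rough sketch only, assuming cgroup v1 with the blkio controller; the device numbers and the 10MB/s cap are placeholders, and this is not a cPanel feature):
Code:
# hedged sketch: cap disk throughput for a "backup" cgroup (cgroup v1, blkio controller)
mkdir -p /sys/fs/cgroup/blkio/backup
# 8:0 is the major:minor of the target disk (see lsblk); 10485760 bytes/s = 10MB/s
echo "8:0 10485760" > /sys/fs/cgroup/blkio/backup/blkio.throttle.read_bps_device
echo "8:0 10485760" > /sys/fs/cgroup/blkio/backup/blkio.throttle.write_bps_device
# move a running pigz worker into the group
echo "$(pgrep -f '3rdparty/bin/pigz' | head -n1)" > /sys/fs/cgroup/blkio/backup/tasks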

Regards
 

cPanelLauren

Product Owner II
Staff member
Nov 14, 2017
Houston
We need a way to limit certain root processes, such as the pigz command.
The following threads may be helpful, though it sounds like you may have already tried something like this:
pigz / pkgacct high cpu
High server load

Also, why does a backup made by a client run as a root process? I think it should run under the client's LVE limits.
I have the following when I generate a backup through my account:
Code:
 1350 root      39  19   26584   9128    632 R  16.9  0.2   0:00.51 /usr/local/cpanel/3rdparty/bin/pigz -6 --processes 1 --blocksize 4096 --rsyncable
Code:
 1340 root      39  19  130300  31160   7724 S   4.7  0.8   0:00.14 pkgacct - myuser - av: 4 - write compressed stream
Code:
 1349 myuser    39  19  133140  26264   2800 S   1.3  0.7   0:00.04 pkgacct - myuser - av: 4 - create tar stream
The pkgacct process that creates the tar stream runs as the user, but pigz and the other compression-related processes run as root. Only the items that need to run as the user do so.
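
If you would like to verify this on your own server, a generic ps invocation (not a cPanel command) shows the owning user and nice value of the backup-related processes:
Code:
# list the backup-related processes with their owner and nice value
ps -eo user,pid,ni,args | grep -E 'pigz|pkgacct' | grep -v grep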

Thank you,
 

Tamer Fahmy

Member
Jan 6, 2017
Cairo, Egypt
cPanel Access Level: DataCenter Provider
Hello,

I have checked all of these settings again.

The new thing I am going to try is the Tweak Settings options for I/O priority.

I have set
"I/O priority level at which nightly backups are run (Minimum: 0; Maximum: 7)" to 4
and
"I/O priority level at which cPanel-generated backups are run (Minimum: 0; Maximum: 7)" to 4,

and we will watch the load on the production servers while the backups run.
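
While a backup is running we also plan to confirm what priority the processes actually get, for example with something like this (assuming the ionice utility is available and a pigz worker is currently running):
Code:
# check the I/O scheduling class/priority and the nice value of the running pigz worker
ionice -p "$(pgrep -f '3rdparty/bin/pigz' | head -n1)"
ps -o ni= -p "$(pgrep -f '3rdparty/bin/pigz' | head -n1)"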


What I need to ask about is the option
Max cPanel process memory (Minimum: 768)

If we give the cPanel process more memory, will that lower the processor load or not?

Regards
 

cPanelLauren

Product Owner II
Staff member
Nov 14, 2017
Houston
What I need to ask about is the option
Max cPanel process memory (Minimum: 768)

If we give the cPanel process more memory, will that lower the processor load or not?
Would it be possible for you to clarify this? I don't understand what you mean. As far as the setting goes, it is the maximum amount of memory a cPanel process can use before it is killed off. The setting's minimum value depends on the number of cPanel accounts on the system.

So if a cPanel-specific process exceeds the maximum memory, it will be killed.
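
If you want to see the value currently configured on disk, it should be stored with the other Tweak Settings, for example (the key name maxmem is an assumption on my part; verify it on your own server):
Code:
# hedged check: look up the "Max cPanel process memory" value (in MB) in the Tweak Settings file
grep -i '^maxmem' /var/cpanel/cpanel.config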
 

sparek-3

Well-Known Member
Aug 10, 2002
cPanel Access Level: Root Administrator
The nature of a backup process is just going to create a load.

It takes disk i/o to read all of the files that you are backing up.

It takes disk i/o to write files (copying) that you are backing up.

There's really no way around this.

You might be able to minimize the amount of disk i/o that a backup process uses, but this is going to result in longer backup times.

If you have a 30GB file to copy, if the disk i/o for that operation is running at 50MB/s, then it's going to take 10 minutes (600 seconds) to copy that file (this assumes 50MB/s throughout the entire 600 seconds, which in the real world doesn't really happen). If you limit the disk i/o to 5MB/s, that same 30GB file is going to take 100 minutes (6000 seconds).

If the maximum total disk I/O your server can sustain is 60MB/s, then copying at 50MB/s leaves 10MB/s for other tasks on the server. If you throttle the copy down to 5MB/s, you have 55MB/s to use elsewhere.
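
Put into numbers (just a quick illustration of the trade-off described above):
Code:
# copy time vs. bandwidth left over, for a 30GB file on a 60MB/s disk
size_mb=30000
for rate in 50 5; do
    echo "${rate}MB/s -> $(( size_mb / rate ))s to copy, $(( 60 - rate ))MB/s left for other tasks"
done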
 