easyswiss

Active Member
PartnerNOC
Apr 19, 2011
Error scenario (reproducible on 4 servers under CloudLinux 7.9):

RAM usage climbs toward 100% in the cPanel status display (the buffer/cache is the source), leading to swap usage with no apparent cause and high server loads.
This problem has occurred overnight on several servers since the update to 98.0.5.

echo 3 > /proc/sys/vm/drop_caches

Solves the problem as a workaround.
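For anyone seeing the same pattern, standard commands like these show whether the growth is really reclaimable cache rather than application memory. A minimal check sketch (output labels are from a stock CloudLinux/RHEL 7 system; the numbers will of course differ per server):

# Show how much of the "used" memory is actually buffer/cache
free -h

# Page cache and slab figures; dentry and inode caches live in the slab
grep -E 'Buffers|Cached|Slab|SReclaimable' /proc/meminfo

# Top slab caches sorted by size; dentry/inode caches at the top match this symptom
slabtop -o -s c | head -n 15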
 

easyswiss

Active Member
PartnerNOC
Apr 19, 2011
The issue is identical to the one described in the "Ram Usage showing error" thread.

This is a software bug; it could also be a CloudLinux bug.
The servers are new Dell R240 machines (less than 1 year old, 32 GB of memory, 25-30% used). We have 13 servers of the same type, and 4 of them are affected after the update and reboot.
 

cPRex

Jurassic Moderator
Staff member
Oct 19, 2014
cPanel Access Level: Root Administrator
Hey there! I checked a few CloudLinux machines that got the 98.0.5 update and I didn't see any odd memory behavior on them. Could you submit a ticket to our team so we can check one of the affected systems in real-time and see if we can find an issue? If you are able to submit that ticket, please post the number here so I can follow along and make sure this thread stays updated.
 

easyswiss

Active Member
PartnerNOC
Apr 19, 2011
44
1
58
cPRex said:
Hey there! I checked a few CloudLinux machines that got the 98.0.5 update and I didn't see any odd memory behavior on them. Could you submit a ticket to our team so we can check one of the affected systems in real-time and see if we can find an issue? If you are able to submit that ticket, please post the number here so I can follow along and make sure this thread stays updated.
Will do that tomorrow. :)

The server was running with a load of 2-4; memory kept growing during the night (only page cache, dentries, and inodes, not real application memory) until it was exhausted and the server load went above 100. After a reboot, the same issue occurred again.

Most of these systems had not been rebooted during the last 6 months, so it could also be a CloudLinux 7.9 kernel issue.
We now have a batch job that runs every 60 minutes with the workaround above (see the sketch below), and everything is stable.
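For reference, the batch is just the drop_caches workaround on a cron schedule. A minimal sketch, assuming a cron.d entry (the file name is only an example; echo 3 drops the page cache plus dentries and inodes):

# /etc/cron.d/drop-caches-workaround  (example file name): run the workaround hourly as root
0 * * * * root sync; echo 3 > /proc/sys/vm/drop_caches

Running sync first follows the kernel documentation's advice, so dirty pages are written out before the caches are dropped.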
 
Reactions: cPRex (Like)