I'm having a problem that I can't figure out, and I'm wondering if it's cPanel-related. If not, maybe you guys will have an idea of how to narrow it down.
Yesterday, from around 5:30am until 7am, I had a huge increase in Apache processes that was causing my server to freeze up. I normally don't have more than 50 or so processes during my peak time, but this period was hitting the ServerLimit of 100 that I had set in the Apache configuration.
By the time I saw it, though, it had ended.
Then at around 4pm, it started again. This time I was there to see it, but couldn't find any reason for it. I checked the number of connections using:
Code:
netstat -plan | grep :80 | awk '{print $5}' | cut -d : -f 1 | sort | uniq -c | sort -nr | head
but didn't see anything unexpected. I restarted Apache, then MySQL, and then rebooted the entire server, but none of that had any impact.
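(In case it helps anyone reading: that pipeline just tallies connections per remote IP. Here's a toy run against three made-up netstat lines — the IPs are invented, not from my server; real usage pipes `netstat -plan | grep :80` in instead:)

```shell
# Field 5 of netstat output is the remote addr:port; the pipeline strips
# the port, then counts and ranks how many connections each IP holds.
printf 'tcp 0 0 1.2.3.4:80 5.6.7.8:1111 ESTABLISHED\ntcp 0 0 1.2.3.4:80 5.6.7.8:2222 ESTABLISHED\ntcp 0 0 1.2.3.4:80 9.9.9.9:3333 ESTABLISHED\n' \
  | awk '{print $5}' | cut -d : -f 1 | sort | uniq -c | sort -nr | head
# prints a count per remote IP: "2 5.6.7.8" first, then "1 9.9.9.9"
```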
I was able to stop the server from freezing up by increasing ServerLimit in the Apache configuration to 256, but that's just a band-aid. My number of Apache processes has stayed between 100 and 150 all night and all day, even when netstat showed that I only had 4 or 5 connections.
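For reference, the change I made amounts to something like this in the prefork section of httpd.conf (the values are just what I used, not a recommendation; on Apache 2.2 / EasyApache 3 the directive is MaxClients, and ServerLimit caps how high it can go):

```apache
# Stopgap: raised both limits from 100 to 256 so requests stop queueing
# when the runaway processes pile up.
<IfModule prefork.c>
    ServerLimit 256
    MaxClients  256
</IfModule>
```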
It's also notable that "Individual Interrupts" and "Disk Latency" in Munin went crazy at the same time.
I'm not sure what "Individual Interrupts" means, but an orange graph that's usually near 1e+02 dropped down below 1e-04.
And under "Disk Latency", /dev/xvdb has a green graph that's usually at around 1e-02 that dropped down to 1e-04. That made me suspect hardware failure, but I messaged SoftLayer (whose service is the worst now), and they said that since it's a virtual server, I wouldn't see hardware errors like that.
So I'm not sure if the change in Interrupts and Latency is relevant, or just a symptom of another problem.
I'm running CentOS 6.10 (Xen HVM), and WHM is v76.0.20. I'm still running EasyApache 3, so WHM/cPanel hasn't updated to 78.
Any suggestions you guys can give would be greatly appreciated!! Thanks in advance!