Server loses cache and the load skyrockets

konrath

Well-Known Member
May 3, 2005
366
1
166
Brasil
Hello

The server loses its page cache and the load skyrockets. After the cache is dropped, disk I/O spikes and drives the load up.


All servers run:

REDHAT Enterprise 6.4 x86_64 standard – server
KERNEL > Linux server.xxxxxxxxx.net 2.6.32-358.23.2.el6.x86_64

Code:
SERVER 1  ( 32GB )
12:00:01 PM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit
05:30:01 PM   4516768  28333124     86.25   2332332  21902528   7062192     20.21
05:40:01 PM   4313680  28536212     86.87   2335264  22028984   7268820     20.80
05:50:01 PM  21753232  11096660     33.78   2338424   5812952   8052144     23.04
06:00:01 PM  21312116  11537776     35.12   2341196   6323584   6950512     19.89
06:10:01 PM  20238140  12611752     38.39   2343016   7331076   6832204     19.55
06:20:01 PM  19823496  13026396     39.65   2345868   7726864   9101400     26.04


SERVER 2  ( 32GB )
12:00:01 PM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit
04:30:02 PM   4148000  28701904     87.37   2808952  20962768   4521800     12.21
04:40:01 PM  21964552  10885352     33.14   2812704   4760076   4961664     13.39
04:50:01 PM  21185372  11664532     35.51   2817488   5548784   4720156     12.74
05:00:01 PM  20440752  12409152     37.78   2821324   6265004   5328260     14.38
05:10:01 PM  19470132  13379772     40.73   2828276   7177404   5035748     13.59
05:20:01 PM  19226536  13623368     41.47   2832680   7489068   4977176     13.44
05:30:01 PM  18499048  14350856     43.69   2836296   7833612   5106296     13.78
05:40:01 PM  18563768  14286136     43.49   2857020   7989900   5529140     14.93

SERVER 3 ( 48GB )
12:00:01 PM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit
02:30:01 PM  16622948  32793592     66.36   6414028  16784892  21197560     39.54
02:40:02 PM  35176108  14240432     28.82   6416860   2742012  31413780     58.60
02:50:01 PM  36063272  13353268     27.02   6418764   3888680  24428044     45.57
03:00:01 PM  35160516  14256024     28.85   6420848   4747400  17860624     33.32
03:10:01 PM  33932532  15484008     31.33   6425076   5955636  14108644     26.32
03:20:01 PM  33232596  16183944     32.75   6430800   6502032  19551184     36.47
03:30:01 PM  32684080  16732460     33.86   6434440   7055520  14585196     27.21
03:40:01 PM  32142140  17274400     34.96   6435952   7463248  15705120     29.29
Load after the cache was lost (from one server)
Code:
Wed Nov 13 14:28:01 BRST 2013 - 2.63
Wed Nov 13 14:30:01 BRST 2013 - 3.94
Wed Nov 13 14:32:01 BRST 2013 - 3.15
Wed Nov 13 14:34:01 BRST 2013 - 3.24
Wed Nov 13 14:36:01 BRST 2013 - 4.50
Wed Nov 13 14:38:05 BRST 2013 - 20.35
Wed Nov 13 14:40:01 BRST 2013 - 112.26
Wed Nov 13 14:40:01 BRST 2013 - httpd stopped
Wed Nov 13 14:41:00 BRST 2013 - after stop load=56.76
Wed Nov 13 14:41:22 BRST 2013 - after stop load=37.47
Wed Nov 13 14:42:01 BRST 2013 - httpd not running, exiting.
Wed Nov 13 14:41:43 BRST 2013 - after stop load=27.27
Wed Nov 13 14:42:05 BRST 2013 - after stop load=20.04
Wed Nov 13 14:42:26 BRST 2013 - after stop load=14.66
Wed Nov 13 14:42:48 BRST 2013 - after stop load=9.94
Wed Nov 13 14:43:11 BRST 2013 - mysql,exim,cpanel,courier restarted
Wed Nov 13 14:43:14 BRST 2013 - httpd restarted
Wed Nov 13 14:44:01 BRST 2013 - 17.02
Wed Nov 13 14:46:01 BRST 2013 - 17.59
Wed Nov 13 14:48:01 BRST 2013 - 12.35
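The log above looks like it comes from a cron watchdog. As a hypothetical sketch (assumed; the poster's actual script is not shown), such a watchdog could log the 1-minute load average and stop httpd past a threshold:

```shell
#!/bin/sh
# Hypothetical watchdog sketch, not the poster's actual script:
# log the 1-minute load average and stop httpd when it passes a threshold.

THRESHOLD=100

# First field of /proc/loadavg is the 1-minute load average.
current_load() {
    cut -d' ' -f1 /proc/loadavg
}

# Returns 0 (true) when $1 > $2; awk handles the decimal comparison.
load_exceeds() {
    awk -v a="$1" -v b="$2" 'BEGIN { exit !(a > b) }'
}

main() {
    load=$(current_load)
    echo "$(date) - $load"
    if load_exceeds "$load" "$THRESHOLD"; then
        echo "$(date) - httpd stopped"
        service httpd stop
    fi
}

main  # intended to be run from cron every two minutes
```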
Code:
root@server [/etc]# cat /proc/meminfo
MemTotal:       49416540 kB
MemFree:        24789816 kB
Buffers:         6515972 kB
Cached:         13896852 kB
SwapCached:            0 kB
Active:         13643540 kB
Inactive:        9350352 kB
Active(anon):    2583760 kB
Inactive(anon):     5856 kB
Active(file):   11059780 kB
Inactive(file):  9344496 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4194296 kB
SwapFree:        4194296 kB
Dirty:              4080 kB
Writeback:             8 kB
AnonPages:       2580228 kB
Mapped:            38924 kB
Shmem:              8568 kB
Slab:            1139932 kB
SReclaimable:    1034776 kB
SUnreclaim:       105156 kB
KernelStack:        6088 kB
PageTables:        65144 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    28902564 kB
Committed_AS:   15629652 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      437232 kB
VmallocChunk:   34333216076 kB
HardwareCorrupted:     0 kB
AnonHugePages:   1601536 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        5632 kB
DirectMap2M:     2082816 kB
DirectMap1G:    48234496 kB
root@server [/etc]#
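As a rough sketch (an assumption, not something from the original post), the cached fraction can be computed from the same meminfo fields:

```shell
#!/bin/sh
# Sketch (assumption, not from the thread): print the page cache as a
# percentage of total RAM, using the MemTotal and Cached fields of
# /proc/meminfo.

cache_pct() {
    # Reads meminfo-format text on stdin.
    awk '/^MemTotal:/ {t=$2} /^Cached:/ {c=$2} END { printf "%.1f\n", c*100/t }'
}

cache_pct < /proc/meminfo
```

With the numbers in the dump above (Cached 13896852 kB of MemTotal 49416540 kB) this prints 28.1, i.e. barely a quarter of RAM is cache right after the drop.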
Any suggestions?
Thank you
Marcelo Konrath
 

konrath

Well-Known Member
May 3, 2005
366
1
166
Brasil
Hello

Red Hat reports that a bug causing this kind of OOM issue has been fixed:

https://access.redhat.com/site/docu..._Linux/6/html/6.4_Technical_Notes/kernel.html

BZ#987261
Due to a bug in the NFS code, kernel size-192 and size-256 slab caches could leak memory. This could eventually result in an OOM issue when most of the available memory was used by the respective slab cache. A patch has been applied to fix this problem and the respective attributes in the NFS code are now freed properly.
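To check whether that slab leak is at play, the caches named in the bug can be watched directly. A sketch (assumption; cache names differ by allocator, and /proc/slabinfo is usually root-only):

```shell
#!/bin/sh
# Sketch (assumption): quick checks for the slab growth described in
# BZ#987261. On RHEL 6's SLAB allocator the caches are named size-192
# and size-256; newer SLUB kernels use kmalloc-192/kmalloc-256 instead.

slab_summary() {
    # Slab totals from /proc/meminfo: total, reclaimable, unreclaimable.
    grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
}

leak_caches() {
    # Active-object counts for the suspect caches; guarded because
    # /proc/slabinfo normally requires root and names vary by allocator.
    grep -E '^(size-192|size-256|kmalloc-192|kmalloc-256) ' /proc/slabinfo 2>/dev/null || true
}

slab_summary
leak_caches
```

Watching these counts over time (e.g. from cron) would show whether the size-192/size-256 caches keep growing before the cache drop.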

I believe the bug is not 100% fixed.

So I disabled the OOM killer to test (by turning on strict overcommit accounting):


Code:
sysctl vm.overcommit_memory=2
echo "vm.overcommit_memory=2" >> /etc/sysctl.conf
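Worth noting: vm.overcommit_memory=2 does not literally switch the OOM killer off; it enforces strict accounting, so allocations fail with ENOMEM once Committed_AS would exceed CommitLimit. A sketch of that limit, computed from /proc fields (an illustration, not from the post):

```shell
#!/bin/sh
# Sketch: with vm.overcommit_memory=2 the kernel enforces roughly
#   CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100
# (in kB; the kernel computes in pages, so results can differ by a few kB).

commit_limit_kb() {
    # $1 = overcommit_ratio in percent; stdin = meminfo-format text.
    awk -v r="$1" '/^MemTotal:/ {m=$2} /^SwapTotal:/ {s=$2} END { print s + int(m * r / 100) }'
}

commit_limit_kb "$(cat /proc/sys/vm/overcommit_ratio)" < /proc/meminfo
```

With the meminfo posted above (MemTotal 49416540 kB, SwapTotal 4194296 kB, default ratio 50) this gives 28902566 kB, matching the posted CommitLimit to within page rounding.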


Is anyone else having problems with the cache being dropped?

Thank you
Konrath
 

cPanelMichael

Administrator
Staff member
Apr 11, 2011
47,880
2,261
463
Hello :)

Did the issue continue to appear after disabling the OOM killer process? Note that you may want to post this thread on the Red Hat or CentOS forums, as you will likely get more input on this type of issue there.

Thank you.
 

konrath

Well-Known Member
May 3, 2005
366
1
166
Brasil
cPanelMichael said:
Hello :)

Did the issue continue to appear after disabling the OOM killer process? Note that you may want to post this thread on the Red Hat or CentOS forums, as you will likely get more input on this type of issue there.

Thank you.
Hello Michael. Thank you. I will post this on the Red Hat and CentOS forums too.

Yes, I put vm.overcommit_memory=2 in sysctl.conf
and then ran sysctl -p.

Unfortunately, this did not solve the problem.

I have a server with RedHat 5 that has never lost its cache:
REDHAT Enterprise 5.10 x86_64 standard – server

On two other servers with RedHat 6, the cache has not been lost since the kernel update. Perhaps it will be soon; I do not know. They have 64GB. All servers have the updated kernel.

All servers with 32 or 48GB RAM and RedHat 6 (updated kernel) are losing the cache.

-------------------------------------------------------------------
When your server is overloaded, please check with sar -r to see whether the cache was dropped:

Code:
sar -r
-------------------------------------------------------------------
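To spot such a drop automatically in sar -r output, something like this could work (a sketch assuming the column layout shown in the outputs above, where kbcached is the 7th field):

```shell
#!/bin/sh
# Sketch (assumed sar -r column layout, kbcached in field 7): print the
# samples where the cache halved since the previous interval.

cache_drops() {
    awk '$7 ~ /^[0-9]+$/ { if (prev != "" && $7 < prev / 2) print $1, $2, "kbcached dropped:", prev, "->", $7; prev = $7 }'
}

# Hypothetical usage:  sar -r | cache_drops
```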

Anyone else having the same problem?

Thank you
Marcelo konrath
 

konrath

Well-Known Member
May 3, 2005
366
1
166
Brasil
Hello

The problem is fixed.

I believe that for 90% of the people who find this forum thread, the cause is this cache flush problem.

I was finally able to get the overload caused by the cache flush under control.

Is your server overloaded? After the peak, check whether the cache was dropped with:

Code:
sar -r

Thank you
Marcelo Konrath
 