Hello,
We are facing a very strange issue and need your help with this one. We tried contacting CloudLinux first, because there seems to be a MySQL memory leak, but they told us, "As for huge MySQL memory allocation, we are not very experienced in troubleshooting these kinds of issues" (CloudLinux ticket 70971). I thought that since a CL license was bought, the OS would be supported too; please let me know if I am wrong, and whether you can shed some light on this issue.
We have CL7 with MySQL Governor installed, so that no abusive site can monopolize MySQL. Last Friday (04/Oct) the server used up all of its memory plus the swap file and froze. I rebooted the server, but Munin shows that after a few hours the memory was exhausted again and the swap file slowly started filling up again. Please see prnt.sc/pf6m0z .
FACT 1:
After communicating with CL we were advised to remove the swap file, which we did (the server has 32 GB of RAM), and to set the following:
Code:
innodb_buffer_pool_size = 5G
max_allowed_packet = 128M
...however, MySQL again uses more than 20 GB of RAM each time, and we restart it manually to keep it from using up all the memory. The one time we did not restart it (last Sunday at 23:30), it used up all the memory and made the server unresponsive.
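(In case the exact measurement matters: by MySQL's memory here we mean the resident size of the mysqld process itself, not overall server usage. A minimal sketch of that kind of check, using nothing but /proc, follows; the process name "mysqld" and the GiB conversion are the only assumptions in it.)
Code:
import os

def mysqld_rss_kib():
    # Walk /proc looking for a process whose command name is "mysqld"
    # ("mysqld" is an assumption -- adjust if the binary is named differently).
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/comm" % pid) as f:
                if f.read().strip() != "mysqld":
                    continue
            with open("/proc/%s/status" % pid) as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        return int(line.split()[1])  # the kernel reports this in kB
        except OSError:
            continue  # process exited or is not readable; keep scanning
    return None

rss = mysqld_rss_kib()
if rss is not None:
    print("mysqld resident size: %.1f GiB" % (rss / 1024.0 / 1024.0))
else:
    print("mysqld process not found")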
FACT 2:
After each MySQL restart we performed, the following gets recorded in the .err file:
Code:
2019-10-05T00:39:28.422650+02:00 0 [Note] InnoDB: FTS optimize thread exiting.
2019-10-05T00:39:28.422801+02:00 0 [Note] InnoDB: Starting shutdown...
2019-10-05T00:39:28.522999+02:00 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2019-10-05T00:39:28.523949+02:00 0 [Note] InnoDB: Buffer pool(s) dump completed at 191005 0:39:28
2019-10-05T00:39:29.023601+02:00 0 [ERROR] [FATAL] InnoDB: Page [page id: space=109603, page number=488] still fixed or dirty
2019-10-05 00:39:29 0x7f04eb259780 InnoDB: Assertion failure in thread 139659101706112 in file ut0ut.cc line 910
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
21:39:29 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
key_buffer_size=8388608
read_buffer_size=131072
max_used_connections=28
max_threads=151
thread_count=0
connection_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 68201 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x3b)[0xf0930b]
/usr/sbin/mysqld(handle_fatal_signal+0x461)[0x7bacb1]
/lib64/libpthread.so.0(+0xf5f0)[0x7f04eae3f5f0]
/lib64/libc.so.6(gsignal+0x37)[0x7f04e9828337]
/lib64/libc.so.6(abort+0x148)[0x7f04e9829a28]
/usr/sbin/mysqld[0x78abb8]
/usr/sbin/mysqld(_ZN2ib5fatalD1Ev+0xfd)[0x11940fd]
/usr/sbin/mysqld[0x11d4018]
/usr/sbin/mysqld(_Z13buf_all_freedv+0x4c)[0x11d408c]
/usr/sbin/mysqld(_Z37logs_empty_and_mark_files_at_shutdownv+0x184f)[0x105a26f]
/usr/sbin/mysqld(_Z27innobase_shutdown_for_mysqlv+0x8df)[0x113ad8f]
/usr/sbin/mysqld[0xfedb65]
/usr/sbin/mysqld(_Z22ha_finalize_handlertonP13st_plugin_int+0x2c)[0x80710c]
/usr/sbin/mysqld[0xcf5d77]
/usr/sbin/mysqld(_Z15plugin_shutdownv+0x209)[0xcf8e69]
/usr/sbin/mysqld[0x7afe18]
/usr/sbin/mysqld(_Z11mysqld_mainiPPc+0x20f2)[0x7b6692]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f04e9814505]
/usr/sbin/mysqld[0x7aa5e3]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
2019-10-04T21:39:30.686484Z 0 [Warning] Could not increase number of max_open_files to more than 50000 (request: 65697)
2019-10-04T21:39:30.686722Z 0 [Warning] Changed limits: table_open_cache: 24919 (requested 32768)
2019-10-04T21:39:30.884907Z 0 [Note] libgovernor.so found
2019-10-04T21:39:30.884943Z 0 [Note] All governors functions found too
2019-10-04T21:39:30.885005Z 0 [Note] Governor connected
2019-10-04T21:39:30.885012Z 0 [Note] All governors lve functions found too
2019-10-05T00:39:30.885718+02:00 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2019-10-05T00:39:30.885734+02:00 0 [Warning] 'NO_AUTO_CREATE_USER' sql mode was not set.
2019-10-05T00:39:30.887664+02:00 0 [Note] /usr/sbin/mysqld (mysqld 5.7.27-cll-lve) starting as process 699636 ...
FACT 3:
This is a new server. The problems started about 10 days after we migrated sites over from Plesk. It ran around 170 accounts without an issue until last Thursday, when it started using the swap partition ( prnt.sc/pgbugn ). Does this Munin graph look good? I am trying to find out whether something we migrated has an issue or whether the issue has existed since day one.
FACT 4:
The MySQL .err entries mentioned above started on September 23rd, which is actually the first time MySQL was restarted in the lifetime of this OS install; the OS was set up on September 19th. We also keep restarting MySQL every 5-6 hours so that it does not use up all the server memory.
FACT 5:
MySQLTuner reports:
Code:
[OK] Maximum reached memory usage: 5.6G (17.87% of installed RAM)
[OK] Maximum possible memory usage: 5.8G (18.63% of installed RAM)
... so there seems to be nothing wrong with our my.cnf file, which is as follows:
Code:
[mysqld]
performance-schema = 0
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
symbolic-links = 0
log-error = /var/lib/mysql/host.eshoped.gr.err
pid-file = /var/run/mysqld/mysqld.pid
innodb_buffer_pool_size = 5G
max_allowed_packet=268435456
open_files_limit=50000
default-storage-engine = MyISAM
innodb_file_per_table = 1
sql_mode="NO_ENGINE_SUBSTITUTION"
log_timestamps = SYSTEM
max_user_connections=50
#tmpdir=/tmpsql
query_cache_size=512M
query_cache_type=1
query_cache_limit=512M
join_buffer_size=1M
table_open_cache=32K
performance_schema=1
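For what it's worth, the 5.8G "maximum possible" figure looks consistent with these settings. Below is a rough back-of-the-envelope re-derivation (just a sketch: anything we have not set explicitly is assumed to be at its MySQL 5.7 default, e.g. max_connections=151), and it lands in the same ballpark, which is exactly why actual usage of more than 20 GB makes no sense to us.
Code:
# Rough re-derivation of MySQLTuner's "maximum possible memory usage" figure.
# Anything not set in our my.cnf is assumed to be at its MySQL 5.7 default
# (max_connections=151, sort_buffer_size=256K, read_rnd_buffer_size=256K,
# thread_stack=256K, innodb_log_buffer_size=16M); key_buffer_size=8M and
# read_buffer_size=128K match the values printed in the crash report above.
K, M, G = 1024, 1024 ** 2, 1024 ** 3

global_buffers = (5 * G      # innodb_buffer_pool_size
                  + 512 * M  # query_cache_size
                  + 16 * M   # innodb_log_buffer_size (assumed default)
                  + 8 * M)   # key_buffer_size (default)

per_connection = (128 * K    # read_buffer_size
                  + 256 * K  # read_rnd_buffer_size (assumed default)
                  + 256 * K  # sort_buffer_size (assumed default)
                  + 1 * M    # join_buffer_size
                  + 256 * K) # thread_stack (assumed default)

max_connections = 151        # assumed default; matches max_threads in the crash report

total = global_buffers + max_connections * per_connection
print("theoretical maximum: %.2f GiB" % (total / float(G)))   # roughly 5.8 GiB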
Do you have any advice which might help?