Hi,
Situation:
We have a VPS (CentOS 5.8 x86_64 on Hyper-V) with a 2-core Intel(R) Xeon(R) CPU E5620 @ 2.40GHz and an initial 2GB of RAM which, it seems, is configured to dynamically expand to 4GB (Hyper-V has the feature, we are told).
This VPS account was recently migrated (same provider), and since then we have had MySQL performance issues.
The database locks up every 20-30 minutes and the tables and processes stay locked for up to 600-900 seconds or more. The only way to get anything working again is to restart MySQL, which we are now doing every 20 minutes just to have a functioning site at all.
This means there could be bad data in some tables, but we clearly can't afford to just shut the site down temporarily!
I've applied every tweak I could think of and rolled back to the settings that were working perfectly on the previous server, all to no avail; there, MySQL would run for over a week without needing a single restart or even any flush commands.
No code changes have been made as far as we know.
Apache runs fine, PHP runs fine.
Symptoms:
The locking happens suddenly. I've studied the logs and no single query shows up throughout. There is one LOCK TABLE query and it is sometimes the cause, but many times it isn't even in the waiting/locked processlist.
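(For reference, the lock state I'm describing is just what the standard commands show; nothing more exotic than this:)
Code:
mysql> SHOW FULL PROCESSLIST;
mysql> SHOW OPEN TABLES WHERE In_use > 0;
mysql> SHOW STATUS LIKE 'Table_locks%';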
$ mysqladmin flush-tables doesn't help.
$ mysqladmin flush-threads doesn't help.
Often even UNLOCK TABLES from the mysql> prompt doesn't help.
Instead of manually entering those commands all day, I have resorted to setting up a FLUSH TABLES event in the event scheduler, which initially ran every 30 minutes but now only helps when run every *2* minutes. I know this sounds quite ridiculous.
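For reference, the workaround event looks roughly like this (the event name is just illustrative):
Code:
SET GLOBAL event_scheduler = ON;
CREATE EVENT flush_tables_every_2_min
    ON SCHEDULE EVERY 2 MINUTE
    DO FLUSH TABLES;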
The problem is in the code, of course: we have 2-3 tables with 100K+ rows, with sizes of 1GB, 500MB, and so on.
And those tables use MyISAM for full-text search (FTS).
Changing the code on a large scale might take weeks, as it was written by earlier devs around those FTS queries.
So we are quite badly stuck.
Here are the changes made to the variables (in an SQL init-file, so they apply every time):
This assumes, as we are told, that the RAM is 4GB. This setup worked perfectly before our internal server move:
Code:
set global key_buffer_size=536870912;
set global table_open_cache=131072;
set global tmp_table_size=134217728;
set global max_heap_table_size=134217728;
set global thread_cache_size=8192;
set global query_cache_size=1073741824;
set global query_cache_limit=134217728;
set global query_cache_min_res_unit=131072;
Here is the output of mysqltuner.pl (this is clearly not very reliable, as the uptime is only 5-10 minutes at best, given the MySQL restarts):
Code:
$ perl mysqltuner.pl
>> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering
-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.1.61-log
[OK] Operating on 64-bit architecture
-------- Storage Engine Statistics -------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 2G (Tables: 508)
[--] Data in InnoDB tables: 144K (Tables: 9)
[!!] Total fragmented tables: 23
-------- Security Recommendations -------------------------------------------
[OK] All database users have passwords assigned
-------- Performance Metrics -------------------------------------------------
[--] Up for: 13m 3s (131K q [167.752 qps], 674 conn, TX: 59M, RX: 11M)
[--] Reads / Writes: 56% / 44%
[--] Total buffers: 1.6G global + 2.7M per thread (500 max threads)
[!!] Maximum possible memory usage: 3.0G (151% of installed RAM)
[OK] Slow queries: 0% (0/131K)
[OK] Highest usage of available connections: 0% (4/500)
[OK] Key buffer size / total MyISAM indexes: 512.0M/617.5M
[OK] Key buffer hit rate: 98.8% (597K cached / 7K reads)
[OK] Query cache efficiency: 30.9% (26K cached / 85K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (5 temp sorts / 804 sorts)
[OK] Temporary tables created on disk: 0% (0 on disk / 29 total)
[OK] Thread cache hit rate: 99% (4 created / 674 connections)
[!!] Table cache hit rate: 14% (46 open / 327 opened)
[OK] Open file limit used: 3% (88/2K)
[OK] Table locks acquired immediately: 99% (103K immediate / 103K locks)
[OK] InnoDB data size / buffer pool: 144.0K/8.0M
-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
MySQL started within last 24 hours - recommendations may be inaccurate
Reduce your overall MySQL memory footprint for system stability
Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
*** MySQL's maximum memory usage is dangerously high ***
*** Add RAM before increasing MySQL buffer variables ***
table_cache (> 131072)
Now we are thinking: what if we change to MySQL 5.6.x, which has InnoDB FTS, so that we can switch engines, get the benefit of row-level locking, and make only minor or no changes to the FTS-tuned queries (which are great in number)?
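If we do go that route, my understanding is that the per-table change would be roughly this (table and column names here are just placeholders, not our real schema):
Code:
-- convert the engine, then add the InnoDB full-text index (MySQL 5.6+)
ALTER TABLE articles ENGINE=InnoDB;
ALTER TABLE articles ADD FULLTEXT INDEX ft_title_body (title, body);

-- the existing MyISAM-style queries should then keep working largely as-is
SELECT id, title
FROM articles
WHERE MATCH(title, body) AGAINST('search terms' IN NATURAL LANGUAGE MODE);
From what I've read, the stopword and minimum-word-length settings differ between the two engines, so I assume results may not be completely identical.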
I looked into setting "concurrent_insert" to 2 (currently it is 1), but I don't think it is wise in our case.
Would adding a separate physical disk for MySQL's /tmp help? (A sketch of what I have in mind follows the fstab entries below.)
The /tmp entries are as follows:
Code:
$ df -h
/usr/tmpDSK 2.0G 71M 1.8G 4% /tmp
In /etc/fstab:
Code:
/dev/VolGroup00/LogVol00 / ext3 defaults,usrquota 1 1
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
/usr/tmpDSK /tmp ext3 defaults,noauto 0 0
#/usr/tmpDSK /tmp ext3 loop,noexec,nosuid,rw 0 0
/tmp /var/tmp ext3 defaults,bind,noauto 0 0
I had resized /usr/tmpDSK to 16GB but then rolled that back to 2GB.
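If a separate disk would help, I assume the change would be along these lines in my.cnf, with the new disk mounted somewhere like /mnt/mysqltmp (the path is just an example):
Code:
[mysqld]
# point MySQL's temporary/sort files at the dedicated disk
tmpdir = /mnt/mysqltmp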
It seems to be about bad code, but it worked smoothly for a long time before the server move (without a single flush issued over a week).
---------------------------------
Is it possible to run MySQL 5.6.x with cPanel/WHM 11.30.6 (build 7)?
Any help is *hugely* appreciated.
Thanks in advance,
Dave.