The Community Forums


Unexpected poor mysql, sudden locks, mysql 5.6.x for InnoDB FTS possible?

Discussion in 'Workarounds and Optimization' started by actived, Apr 4, 2012.

  1. actived

    actived Well-Known Member
    Joined: Mar 30, 2012
    cPanel Access Level: Website Owner

    We have a VPS (CentOS 5.8 x86_64 on Hyper-V) with a 2-core Intel(R) Xeon(R) CPU E5620 @ 2.40GHz and an initial 2GB of RAM which, we are told, is configured to dynamically expand to 4GB (a Hyper-V feature).

    This VPS account was recently migrated (same provider), and since then we have had MySQL performance issues.
    The database locks up every 20-30 minutes, and the tables and processes stay locked for up to 600-900 seconds or more. The only way to get anything working again is to restart MySQL, which we are now doing every 20 minutes just to have a functioning site at all.
    This means there could be bad data in some tables, but we clearly can't afford to just shut the site down temporarily!

    I've applied all possible tweaks and rolled them back, to no avail - these are the settings that worked perfectly on the previous server, where MySQL would run for over a week without a single restart or even any flush commands.

    No code changes have been made as far as we know.
    Apache runs fine, PHP runs fine.

    The locking happens suddenly. I've studied the logs, and no single query shows up as the culprit every time. There is one LOCK TABLE query and it is sometimes the cause, but many times it isn't even in the waiting/locked processlist.
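
    For reference, this is roughly how I check what is waiting when a lock-up hits (a generic sketch, nothing specific to our schema):

    -- During a lock-up, look for rows whose State is 'Locked' and for the
    -- statement that is actually holding the lock.
    SHOW FULL PROCESSLIST;

    -- MyISAM uses table-level locks; these counters show how often a lock
    -- could not be granted immediately.
    SHOW GLOBAL STATUS LIKE 'Table_locks%';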

    $ mysqladmin flush-tables doesn't help.
    $ mysqladmin flush-threads doesn't help.
    Often even UNLOCK TABLES from the mysql> prompt doesn't help.

    Instead of manually entering those commands all day, I have resorted to setting up a FLUSH TABLES event in the event scheduler (roughly as sketched below). It was initially running every 30 minutes, but now it only helps when run every *2* minutes. I know this sounds quite ridiculous.
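
    The event looks roughly like this (the event name is made up for this example, and it assumes the event scheduler is switched on):

    -- Assumption: the event scheduler must be enabled for the event to fire.
    SET GLOBAL event_scheduler = ON;

    -- Recurring workaround event that flushes all tables every 2 minutes.
    CREATE EVENT IF NOT EXISTS flush_tables_every_2_min
        ON SCHEDULE EVERY 2 MINUTE
        DO FLUSH TABLES;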

    The root cause is probably in the code, of course: we have 2-3 tables with 100K+ rows at sizes of 1GB, 500MB, etc.
    And those tables use MyISAM for full-text search (FTS).
    Changing the code on a large scale might take weeks, as it was written by earlier devs around FTS queries.
    So we are quite badly stuck.

    Here are the changes made to the variables (in an SQL init-file, so they apply every time MySQL starts).
    This assumes, as we are told, that the RAM is 4GB:

    set global key_buffer_size=536870912;
    set global table_open_cache=131072;
    set global tmp_table_size=134217728;
    set global max_heap_table_size=134217728;
    set global thread_cache_size=8192;
    set global query_cache_size=1073741824;
    set global query_cache_limit=134217728;
    set global query_cache_min_res_unit=131072;
    This worked perfectly before our internal server move.
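
    (For reference, the same values expressed as an /etc/my.cnf [mysqld] section would look roughly like this - just an equivalent sketch of the settings above, not a copy of our actual config file:)

    [mysqld]
    # Same byte values as the SET GLOBAL statements above, with size suffixes.
    key_buffer_size          = 512M
    table_open_cache         = 131072
    tmp_table_size           = 128M
    max_heap_table_size      = 128M
    thread_cache_size        = 8192
    query_cache_size         = 1G
    query_cache_limit        = 128M
    query_cache_min_res_unit = 128K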

    Here is the output of MySQLTuner:
     >>  MySQLTuner 1.2.0 - Major Hayden <>
     >>  Bug reports, feature requests, and downloads at
     >>  Run with '--help' for additional options and output filtering
    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.1.61-log
    [OK] Operating on 64-bit architecture
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in MyISAM tables: 2G (Tables: 508)
    [--] Data in InnoDB tables: 144K (Tables: 9)
    [!!] Total fragmented tables: 23
    -------- Security Recommendations  -------------------------------------------
    [OK] All database users have passwords assigned
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 13m 3s (131K q [167.752 qps], 674 conn, TX: 59M, RX: 11M)
    [--] Reads / Writes: 56% / 44%
    [--] Total buffers: 1.6G global + 2.7M per thread (500 max threads)
    [!!] Maximum possible memory usage: 3.0G (151% of installed RAM)
    [OK] Slow queries: 0% (0/131K)
    [OK] Highest usage of available connections: 0% (4/500)
    [OK] Key buffer size / total MyISAM indexes: 512.0M/617.5M
    [OK] Key buffer hit rate: 98.8% (597K cached / 7K reads)
    [OK] Query cache efficiency: 30.9% (26K cached / 85K selects)
    [OK] Query cache prunes per day: 0
    [OK] Sorts requiring temporary tables: 0% (5 temp sorts / 804 sorts)
    [OK] Temporary tables created on disk: 0% (0 on disk / 29 total)
    [OK] Thread cache hit rate: 99% (4 created / 674 connections)
    [!!] Table cache hit rate: 14% (46 open / 327 opened)
    [OK] Open file limit used: 3% (88/2K)
    [OK] Table locks acquired immediately: 99% (103K immediate / 103K locks)
    [OK] InnoDB data size / buffer pool: 144.0K/8.0M
    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        MySQL started within last 24 hours - recommendations may be inaccurate
        Reduce your overall MySQL memory footprint for system stability
        Increase table_cache gradually to avoid file descriptor limits
    Variables to adjust:
      *** MySQL's maximum memory usage is dangerously high ***
      *** Add RAM before increasing MySQL buffer variables ***
        table_cache (> 131072)
    These numbers are clearly not very reliable, as the statistics are only 5-10 minutes old at best (because of the MySQL restarts we keep needing).

    Now we are thinking: what if we change to MySQL 5.6.x, which has InnoDB FTS? We could switch engines, get the benefit of row-level locking, and make only minor or no changes to the FTS-tuned queries (which are great in number).
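
    As I understand it, the MATCH ... AGAINST syntax stays the same once the engine supports full-text indexes, so a conversion would look roughly like this (table and column names are made up for illustration; it assumes a FULLTEXT index on the matched columns):

    -- Hypothetical table: switch the engine; under MySQL 5.6+ the full-text
    -- index is rebuilt as an InnoDB FTS index and the queries keep working.
    ALTER TABLE articles ENGINE=InnoDB;

    SELECT id, title
    FROM articles
    WHERE MATCH(title, body) AGAINST('some search terms' IN NATURAL LANGUAGE MODE);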

    I looked into setting "concurrent_insert" to 2 (currently it is 1), but I don't think it would be wise in our case.
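
    (For anyone curious, that change would simply be the following; 2 lets MyISAM append new rows concurrently even when a table has free blocks from deleted rows:)

    -- Check the current value, then (hypothetically) raise it.
    SHOW GLOBAL VARIABLES LIKE 'concurrent_insert';
    SET GLOBAL concurrent_insert = 2;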

    Would adding a separate physical disk for MySQL's /tmp help?

    The /tmp entries are as follows:
    $ df -h
    /usr/tmpDSK           2.0G   71M  1.8G   4% /tmp
    In /etc/fstab:
    /dev/VolGroup00/LogVol00 /                       ext3    defaults,usrquota        1 1
    LABEL=/boot             /boot                   ext3    defaults        1 2
    tmpfs                   /dev/shm                tmpfs   defaults        0 0
    devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
    sysfs                   /sys                    sysfs   defaults        0 0
    proc                    /proc                   proc    defaults        0 0
    /dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0
    /usr/tmpDSK             /tmp                    ext3    defaults,noauto        0 0
    #/usr/tmpDSK             /tmp                    ext3   loop,noexec,nosuid,rw  0 0
    /tmp             /var/tmp                    ext3    defaults,bind,noauto        0 0
    I had resized /usr/tmpDSK to 16GB but then rolled that back to 2GB.

    It seems to point to bad code, but the same code ran smoothly for a long time before the server move (without a single flush issued in over a week).

    Is it possible to run MySQL 5.6.x with cPanel/WHM 11.30.6 (build 7)?
    Any help is *hugely* appreciated.
    Thanks in advance,
  2. actived

    actived Well-Known Member
    Joined: Mar 30, 2012
    cPanel Access Level: Website Owner

    [solved] Re: Unexpected poor mysql, sudden locks, mysql 5.6.x for InnoDB FTS

    Well, I changed the size of the /usr/tmpDSK file to 8GB, not using /scripts/securetmp but by following a tutorial on how to increase the size of the /tmp (/usr/tmpDSK) partition on a Linux server.

    Earlier I had used a 16GB /usr/tmpDSK, but with /scripts/securetmp and the default mount as per the entry in /etc/fstab ("defaults,noauto"). That performed poorly.

    However, mounting with "loop,noexec,nosuid,rw" works well even at 8GB.
    So I now suspect the mount options used for /tmp -> /usr/tmpDSK are important (see the sketch below).
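
    (Roughly, the remount amounts to something like the following - a sketch that assumes /usr/tmpDSK is the file-backed image that /scripts/securetmp normally creates:)

    # Unmount the old /tmp, then mount the image file with the options that
    # behaved well here; 'loop' lets the plain file act as a block device.
    umount /tmp
    mount -o loop,noexec,nosuid,rw /usr/tmpDSK /tmp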

    Another critical issue is that cPanel updates itself daily by default, which is a good thing security-wise but a bad thing performance-wise on a VPS. Changing the setting from "Daily Automatic Updates" to the manual option dramatically changed the situation.
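
    (If I recall correctly, the equivalent on-disk setting on this cPanel version lives in /etc/cpupdate.conf - treat the exact key and value below as an assumption rather than gospel:)

    # Switch cPanel/WHM from daily automatic updates to manual updates.
    UPDATES=manual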

    Glad that the issue has finally been resolved for now although not as cleanly as I wished.

    Also, it seems MySQL 5.6.4, being very recent, isn't yet considered stable and common enough for use in cPanel. I eagerly await the release that includes it - because it has FTS with InnoDB!
    #2 actived, Apr 8, 2012
    Last edited: Apr 8, 2012
  3. cPanelTristan

    cPanelTristan Quality Assurance Analyst
    Staff Member
    Joined: Oct 2, 2010
    Location: somewhere over the rainbow
    cPanel Access Level: Root Administrator

    Re: Unexpected poor mysql, sudden locks, mysql 5.6.x for InnoDB FTS possible?

    When you see an issue you believe might be due to /tmp size for MySQL, check the error logs for this:

    grep -i "incorrect key" /var/lib/mysql/*.err
    If you see those entries, /tmp is too small. You could always change the tmpdir for MySQL to a different location rather than resizing /tmp itself. Searching this forum for tmpdir will yield several discussions on changing MySQL's tmpdir path.
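
    For example, in /etc/my.cnf you could point tmpdir at a larger or faster volume (the path below is just a placeholder; MySQL needs a restart afterwards, since tmpdir is not a dynamic variable):

    [mysqld]
    # Placeholder path - make sure it exists and is writable by the mysql user.
    tmpdir=/home/mysqltmp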

    Glad you got it sorted.

