The Community Forums

Interact with an entire community of cPanel & WHM users!

Wordpress / Prestashop / x Performance

Discussion in 'Workarounds and Optimization' started by TCB13, Feb 24, 2015.

  1. TCB13

    TCB13 Well-Known Member

    Joined:
    Jul 25, 2014
    Messages:
    58
    Likes Received:
    1
    Trophy Points:
    8
    cPanel Access Level:
    Root Administrator
    Hello,

    I've been trying to discover why the websites I have running on WordPress are slow and unresponsive. The usual behavior is that they take up to 5 seconds to start loading when using WordPress.

    These performance issues also apply to websites running Prestashop and other similar solutions. Custom-made simple PHP scripts and plain HTML pages load fast.

    I tried clean installations of WordPress on new cPanel accounts, all defaults, and the problem persists, so there's probably nothing really wrong with my configuration.

    Here are the VPS Specs:
    Code:
    Total processors: 3
    
    Processors #1-#3 (all identical):
    Vendor: GenuineIntel
    Name: QEMU Virtual CPU version (cpu64-rhel6)
    Speed: 2659.982 MHz
    Cache: 4096 KB
    In the Apache statistics I usually see entries like:

    Code:
    35 requests currently being processed, 35 idle workers
    9 requests currently being processed, 59 idle workers
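Figures like these can also be pulled programmatically from mod_status's machine-readable endpoint. A sketch, assuming `/server-status?auto` is enabled on the server; the sample response below is illustrative:

```python
# Parse Apache mod_status "?auto" output into a dict of counters.
# On a live server the text would come from something like:
#   urllib.request.urlopen("http://localhost/server-status?auto").read().decode()

sample = """Total Accesses: 104533
BusyWorkers: 9
IdleWorkers: 59
"""

def parse_scoreboard(text):
    stats = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    return stats

stats = parse_scoreboard(sample)
print(stats["BusyWorkers"], stats["IdleWorkers"])  # 9 59
```

Polling this periodically makes it easy to see whether the worker pool is ever actually exhausted, or (as here) mostly idle.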
    In terms of RAM:

    (screenshots of RAM usage and memory statistics omitted)

    As you can see, even the server load is low.

    In Apache Global Configuration I've been tweaking values; the configuration that gave me the most speed, especially on frequently visited websites, is:

    (screenshot of Apache Global Configuration settings omitted)

    I'm also running the latest PHP from cPanel and MariaDB, with the following config:

    Code:
    [mysqld]
    	local-infile = 0
    	max_connections = 350
    	key_buffer = 150M
    	myisam_sort_buffer_size = 64M
    	join_buffer_size = 3M
    	read_buffer_size = 3M
    	sort_buffer_size = 5M
    	max_heap_table_size = 16M
    	table_cache = 5000
    	thread_cache_size = 286
    	interactive_timeout = 25
    	wait_timeout = 7000
    	connect_timeout = 15
    	max_allowed_packet = 150M
    	max_connect_errors = 10
    	query_cache_limit = 3M
    	query_cache_size = 150M
    	query_cache_type = 1
    	tmp_table_size = 16M
    	
    	innodb_buffer_pool_size=1024M
    	key_buffer_size=300M
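One quick sanity check on a config like this is MySQLTuner's worst-case memory formula: global buffers plus per-thread buffers multiplied by max_connections. A rough sketch using the values above (per-thread extras such as read_rnd_buffer_size are left at defaults here, so this is approximate):

```python
MB = 1024 * 1024

# Global buffers from the my.cnf above (approximate).
global_buffers = (300 * MB      # key_buffer_size
                  + 1024 * MB   # innodb_buffer_pool_size
                  + 150 * MB)   # query_cache_size

# Per-connection buffers: each of the allowed threads may allocate these.
per_thread = (3 * MB    # join_buffer_size
              + 3 * MB  # read_buffer_size
              + 5 * MB) # sort_buffer_size

max_connections = 350
worst_case = global_buffers + per_thread * max_connections
print(round(worst_case / MB / 1024, 1), "GiB")  # roughly 5.2 GiB worst case
```

On a small VPS that leaves very little headroom if connections ever actually approach the limit, which is worth keeping in mind when raising per-thread buffer sizes.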
    But with all this, the performance of the WordPress / Prestashop installations is still low. Is there anything wrong with my configuration? Is there anything special elsewhere I need to take a closer look at?

    Thank you ;)
     
  2. cPanelMichael

    cPanelMichael Forums Analyst
    Staff Member

    Joined:
    Apr 11, 2011
    Messages:
    30,814
    Likes Received:
    672
    Trophy Points:
    113
    cPanel Access Level:
    Root Administrator
  3. AdamDresch

    AdamDresch Well-Known Member

    Joined:
    Jun 22, 2006
    Messages:
    80
    Likes Received:
    0
    Trophy Points:
    6
    Same on my own cPanel server, the site gets very few visits, yet it takes upwards of 5 seconds for a page to appear
    Server is powerful, very low load and all the latest stuff, it's weird
    Have you tried an opcode cache? I need to check whether mine is using xcache.
    Could also try using PHP 5.5, as it comes with Zend Opcache.
     
  4. TCB13

    TCB13 Well-Known Member

    Joined:
    Jul 25, 2014
    Messages:
    58
    Likes Received:
    1
    Trophy Points:
    8
    cPanel Access Level:
    Root Administrator
    From my point of view, our problem is not related to PHP. I'm not using any kind of caching right now.

    The thing is, if websites with high traffic load really fast and websites without traffic load slowly, the bottleneck must be the DB and not PHP.

    I say that because if PHP were the issue, the websites with high traffic would also be slow, and actually much slower, since I'm not running any caching... (to avoid other issues).

    Side Note: Do you have enough available workers on your Apache to deal with the requests?
     
    #4 TCB13, Feb 25, 2015
    Last edited: Feb 25, 2015
  5. TCB13

    TCB13 Well-Known Member

    Joined:
    Jul 25, 2014
    Messages:
    58
    Likes Received:
    1
    Trophy Points:
    8
    cPanel Access Level:
    Root Administrator
    I got the following results with MySQLTuner:

    Code:
     >>  MySQLTuner 1.4.0 - Major Hayden <major@mhtx.net>
     >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/ (MySQLTuner-perl by major)
     >>  Run with '--help' for additional options and output filtering
    [!!] Currently running unsupported MySQL version 10.0.16-MariaDB
    [OK] Operating on 64-bit architecture
    
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: +ARCHIVE +Aria +BLACKHOLE +CSV +FEDERATED +InnoDB +MRG_MyISAM 
    [--] Data in MyISAM tables: 594M (Tables: 890)
    [--] Data in InnoDB tables: 801M (Tables: 5849)
    [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
    [--] Data in MEMORY tables: 7M (Tables: 96)
    [!!] Total fragmented tables: 19
    
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 1d 0h 59m 36s (2M q [26.223 qps], 74K conn, TX: 29B, RX: 737M)
    [--] Reads / Writes: 91% / 9%
    [--] Total buffers: 1.5G global + 11.5M per thread (350 max threads)
    [!!] Maximum possible memory usage: 5.4G (94% of installed RAM)
    [OK] Slow queries: 0% (9/2M)
    [OK] Highest usage of available connections: 12% (45/350)
    [OK] Key buffer size / total MyISAM indexes: 300.0M/95.9M
    [OK] Key buffer hit rate: 97.3% (5M cached / 143K reads)
    [OK] Query cache efficiency: 41.4% (1M cached / 3M selects)
    [!!] Query cache prunes per day: 22101
    [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 118K sorts)
    [!!] Joins performed without indexes: 3454
    [OK] Temporary tables created on disk: 20% (29K on disk / 141K total)
    [OK] Thread cache hit rate: 99% (45 created / 74K connections)
    [!!] Table cache hit rate: 3% (5K open / 138K opened)
    [OK] Open file limit used: 1% (191/15K)
    [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks)
    [OK] InnoDB buffer pool / data size: 1.0G/801.5M
    [OK] InnoDB log waits: 0
    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Reduce your overall MySQL memory footprint for system stability
        Increasing the query_cache size over 128M may reduce performance
        Adjust your join queries to always utilize indexes
        Increase table_open_cache gradually to avoid file descriptor limits
        Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C (table_cache negative scalability - MySQL Performance Blog)
    Variables to adjust:
      *** MySQL's maximum memory usage is dangerously high ***
      *** Add RAM before increasing MySQL buffer variables ***
        query_cache_size (> 150M) [see warning above]
        join_buffer_size (> 3.0M, or always use indexes with joins)
        table_open_cache (> 5000)
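The OPTIMIZE TABLE recommendation above can be scripted over the fragmented tables. A minimal sketch; the table names here are placeholders, and on a real server the list would come from information_schema:

```python
# Build OPTIMIZE TABLE statements for a list of fragmented tables.
# Names below are hypothetical; a real run would query
# information_schema.TABLES for tables with DATA_FREE > 0.

fragmented = ["blog.wp_options", "shop.ps_connections"]

def optimize_statements(tables):
    # Backtick-quote both parts of each schema.table name.
    return ["OPTIMIZE TABLE `%s`.`%s`;" % tuple(t.split(".", 1))
            for t in tables]

for stmt in optimize_statements(fragmented):
    print(stmt)
# OPTIMIZE TABLE `blog`.`wp_options`;
# OPTIMIZE TABLE `shop`.`ps_connections`;
```

The same effect can be had with `mysqlcheck --optimize`, but generating explicit statements makes it easy to limit the run to the 19 tables the tuner flagged.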
    Is the query cache size really that big? I ran the following query to get the percentage of query cache usage:

    Code:
    SELECT ((( @@GLOBAL.query_cache_size - 
    (SELECT VARIABLE_VALUE 
    FROM information_schema.SESSION_STATUS 
    WHERE VARIABLE_NAME LIKE 'Qcache_free_memory')
    ) / @@GLOBAL.query_cache_size )*100) as query_cache_usage_percentage;
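The same arithmetic as that query, with illustrative numbers (the free-memory figure is made up; on a live server it comes from the Qcache_free_memory status variable):

```python
MB = 1024 * 1024

query_cache_size = 150 * MB   # from the my.cnf above
qcache_free_memory = 30 * MB  # hypothetical SHOW STATUS value

# Usage percentage = (size - free) / size * 100, as in the SQL above.
used = query_cache_size - qcache_free_memory
usage_pct = used / query_cache_size * 100
print(round(usage_pct))  # 80
```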
    And sometimes I get values like 70 to 80% used... So I guess it's not that big of a value.

    What do you think?
     
    #5 TCB13, Feb 25, 2015
    Last edited by a moderator: Mar 1, 2015
  6. TCB13

    TCB13 Well-Known Member

    Joined:
    Jul 25, 2014
    Messages:
    58
    Likes Received:
    1
    Trophy Points:
    8
    cPanel Access Level:
    Root Administrator
    After reading the recommended posts and some more testing, I changed my.cnf values to:

    Code:
    [mysqld]
    	local-infile = 0
    	innodb_file_per_table = 1
    
    	tmp_table_size = 50M
    	max_heap_table_size = 50M
    
    	query_cache_type = 1
    	query_cache_limit=20M
    	query_cache_size=20M
    
    	innodb_buffer_pool_size = 100M
    	key_buffer_size = 300M
    
    	max_connections = 350
    	key_buffer = 500M
    	myisam_sort_buffer_size = 64M
    	join_buffer_size = 3M
    	read_buffer_size = 3M
    	sort_buffer_size = 5M
    	read_rnd_buffer_size = 4M
    
    	table_cache = 5000
    	thread_cache_size = 286
    
    	max_allowed_packet = 150M
    	max_connect_errors = 10
    
    	connect_timeout = 2
    	interactive_timeout = 25
    	wait_timeout = 7000
    	delayed_insert_timeout = 40
    
    	collation_server = utf8mb4_unicode_ci
    	character_set_server = utf8mb4
    
    	query_cache_strip_comments = 1
    	open_files_limit = 55000
    I'm not sure this configuration will help. However, I'll report back in a few days; I didn't see any performance changes right away.
     
  7. TCB13

    TCB13 Well-Known Member

    Joined:
    Jul 25, 2014
    Messages:
    58
    Likes Received:
    1
    Trophy Points:
    8
    cPanel Access Level:
    Root Administrator
    I'm just replying to this for future reference, for anyone looking into similar issues.

    In my tests, my initial configuration and the latest one didn't make much difference. However, I can say the query cache hit rate with my original configuration was 5-10% better than with the new one.

    My last configuration was largely based on the other post, referenced above, about MariaDB optimization. At this point I'm not entirely sure if I should keep it or revert to my original configuration. In any case, I'm keeping these settings from the new one:

    This makes the DBs more manageable and can potentially make websites using default SQL settings like Wordpress safer.

    I managed to find out that my biggest performance issue was actually disk I/O, something that should have been working fine.
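A crude way to spot-check sequential write throughput on a server is sketched below; it writes 64 MB to a temp file and fsyncs. Proper I/O benchmarking is better done with tools like dd (with oflag=direct), fio, or ioping, and page-cache effects make this a rough number at best:

```python
import os
import time
import tempfile

# Write 64 MB in 1 MB chunks, fsync, and report rough throughput.
chunk = b"\0" * (1024 * 1024)
n_chunks = 64

fd, path = tempfile.mkstemp()
try:
    start = time.perf_counter()
    for _ in range(n_chunks):
        os.write(fd, chunk)
    os.fsync(fd)  # force data to disk so the timing is not pure page cache
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)
    os.remove(path)

mb_per_s = n_chunks / elapsed
print("%.0f MB/s" % mb_per_s)
```

Single-digit MB/s on an otherwise idle VPS would point to exactly the kind of storage bottleneck described here.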

    Another thing impacting my server's performance was using suPHP instead of FastCGI as the PHP handler. I changed this and noticed some performance improvements right away.

    I also installed Zend OPcache and rebuilt Apache with it, and now everything is much faster... and cached in RAM. My OPcache installation procedure was:

    Please note:

    1. You should list the contents of the folder /usr/local/lib/php/extensions/ to find the correct path of your opcache.so.

    2. Apart from the phpinfo() function, you should install an OPcache management interface on some cPanel account and check whether the start time of OPcache is always equal to the current request time. If it is, it means OPcache is being restarted on every request and not working properly. Try changing your PHP handler from suPHP to FCGI, restart Apache, and test again until it works fine.

    3. The provided configuration is compatible with frameworks.
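Note 1 above can be scripted as a quick check. A sketch; the glob pattern is an assumption based on the directory named in note 1 (cPanel builds place extensions in per-build subdirectories there):

```python
import glob

# Search each PHP build's extension directory for opcache.so; the base
# path is the one mentioned in note 1 (cPanel's usual layout).
candidates = glob.glob("/usr/local/lib/php/extensions/*/opcache.so")

if candidates:
    # This is the line that would go into php.ini.
    print("zend_extension=%s" % candidates[0])
else:
    print("opcache.so not found; is the extension compiled/installed?")
```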

    Thank you. ;)
     
    #7 TCB13, Mar 1, 2015
    Last edited: Mar 1, 2015