The Community Forums


High exim disk utilization

Discussion in 'General Discussion' started by fasdush, May 26, 2008.

  1. fasdush (Member; Joined: Oct 29, 2005; Messages: 13; Likes Received: 0; Trophy Points: 1)

    We tried to populate our new server (SRCSAS18e RAID 5 with 7 SAS drives, write cache enabled, battery OK) with ~300 user accounts, and we got stuck with a high load average caused by disk I/O:
    ------------------------------------
    # iostat -dx 2 2
    Linux 2.6.18-8.1.15.el5 () 05/26/2008

    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 23.81 373.10 77.67 73.48 3673.89 3572.25 47.94 14.38 95.11 5.24 79.13

    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 170.15 41.29 91.54 461.69 2145.27 19.63 7.17 54.63 7.49 99.55
    ------------------------------------

    I found that stopping exim brings disk utilization back to 10-20%:
    ------------------------------------
    # iostat -dx 2
    Linux 2.6.18-8.1.15.el5 () 05/26/2008

    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 23.75 372.91 77.61 73.70 3660.00 3572.47 47.80 14.39 95.09 5.24 79.27

    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 8.50 46.00 26.00 77.00 704.00 984.00 16.39 9.95 96.62 4.45 45.85

    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 6.00 0.00 23.50 0.00 300.00 0.00 12.77 0.11 4.70 3.81 8.95
    ------------------------------------

    Our queue is always less than 200 messages, and the /var/spool/exim folder is ~370 MB.
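
    For reference, both numbers are easy to check from the shell (exim -bpc prints the number of messages currently in the queue):
    # exim -bpc
    # du -sh /var/spool/exim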

    I've increased the read-ahead cache for the drive:
    # blockdev --getra /dev/sda
    2048
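
    (Set with the matching --setra option, which takes the size in 512-byte sectors and does not persist across reboots on its own, e.g.:)
    # blockdev --setra 2048 /dev/sda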

    noatime is enabled on /var (which holds /var/spool) too:
    # mount|grep sda
    /dev/sda2 on / type ext3 (rw,noatime,usrquota)
    /dev/sda1 on /boot type ext3 (rw)
    /dev/sda5 on /home type ext3 (rw,noatime,usrquota)
    /dev/sda6 on /var type ext3 (rw,noatime,usrquota)
    /dev/sda7 on /var/lib/mysql type ext3 (rw,noatime,usrquota)
    /dev/sda8 on /home2 type ext3 (rw,noatime,usrquota)
    /dev/sda9 on /backup type ext3 (rw,noatime)
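
    To keep that across reboots, the noatime option sits in the corresponding /etc/fstab lines; an illustrative entry for /var (device and options assumed from the mount output above):
    /dev/sda6  /var  ext3  defaults,noatime,usrquota  1 2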

    I've also tried disabling SpamAssassin, clamd, the system filter for attachment extensions, the quota check at SMTP time, and various logging options -- with no decrease in disk I/O.

    There were no custom modifications to exim.conf except upping the connection limit.

    We have a few other cPanel-based hosts (with different I/O subsystems, though), and disk utilization is always below 20% on those machines.

    Why is Exim so disk-hungry? Could anyone help with this issue? Thanks.
     
  2. fasdush (Member; Joined: Oct 29, 2005; Messages: 13; Likes Received: 0; Trophy Points: 1)

    I found that most of the reads/writes are going to /var/spool/exim/db, where the DBs used for callout/ratelimit/domain-keys caching reside (Berkeley DB4, I believe). Since these are simply caches, and the only requirement is that they stay consistent (since they are DBs), I moved them onto tmpfs:
    ----------------------------
    cp -ax /var/spool/exim/db /var/spool/exim/db.tmp
    mount -t tmpfs -o size=128m tmpfs /var/spool/exim/db
    mv /var/spool/exim/db.tmp/* /var/spool/exim/db/
    rm -rf /var/spool/exim/db.tmp/
    ----------------------------
    Now all reads/writes go to memory, and I still get a consistent state after a crash (because the tmpfs was mounted on top of a working set of the DBs, so the original on-disk copies remain underneath the mount).

    This reduces disk load a lot (from 80-100% to 20-40%).
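
    One follow-up note: the hint databases now live in a size-limited (128m) tmpfs, so it may be worth trimming them periodically with Exim's own exim_tidydb utility, e.g. from a daily cron job (the database names below are the standard Exim hint DBs; adjust the path to exim_tidydb if it lives elsewhere on your install):
    ----------------------------
    # Purge stale records so the cache DBs stay well under the 128m tmpfs limit
    for db in retry wait-remote_smtp callout ratelimit; do
        /usr/sbin/exim_tidydb /var/spool/exim $db
    done
    ----------------------------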
     
  3. ahmed.awaad (Member; Joined: Jul 2, 2008; Messages: 5; Likes Received: 0; Trophy Points: 1)

    Dude... that really helped me big time... but it should be added to a script that runs on startup, because exim is now reading and writing to RAM, which is wiped on reboot.
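
    For example, something along these lines could go into /etc/rc.d/rc.local (an untested sketch; the path and 128m size are taken from the post above, and exim recreates missing hint DBs on its own):
    ----------------------------
    # Recreate the tmpfs for Exim's hint databases at boot if it is not already mounted
    grep -q ' /var/spool/exim/db tmpfs ' /proc/mounts || \
        mount -t tmpfs -o size=128m tmpfs /var/spool/exim/db
    # Match the ownership of the parent spool directory so exim can write to it
    chown --reference=/var/spool/exim /var/spool/exim/db
    ----------------------------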
     