The Community Forums


Migrating from VPS to Physical Server - question about RAID and Volumes

Discussion in 'General Discussion' started by stardotstar, Sep 4, 2012.

  1. stardotstar

    stardotstar Well-Known Member

    Joined:
    Sep 14, 2009
    Messages:
    68
    Likes Received:
    0
    Trophy Points:
    6
    Hi gurus,

    I have been running a VPS with about 90G of storage. My current usage is around 60G, and the space left for backup temp files, expansion of home directories, and new clients is now very limited - to the point where I have at times run out of space while prepping the backups at 3-4am...

    I am also suffering degraded performance compared to my last physical LAMP/cPanel instance (CentOS 6 x64).

    So, I have the following hardware to use in my ProLiant DL360 G5 (6 x 2.5" bays):

    6 x 72G 10K SAS disks
    4 x 700G SATA disks
    2 x 90G SSD disks

    Now my thinking is that I need to make every attempt to maximise performance with manageable redundancy/risk mitigation, so there are space, performance, and redundancy considerations to balance.

    As the machine can take 6 disks, I used a RAID calculator to work out what I think sounds like a good spread:

    Bay 1 : 72G SAS
    Bay 2 : 72G SAS
    Bay 3 : 72G SAS
    Bay 4 : 72G SAS

    Bay 5 : 700G SATA
    Bay 6 : 700G SATA

    The first four 72G SAS drives would be configured as RAID 1+0, providing single-disk-failure redundancy and 144G of total available space - an increase of about 50% on the current / - as well as the performance of 10K SAS drives in RAID 10 (not the best possible, but the best practical from what I can see).

    The second pair, the 700G SATA drives, would be a mirror pair - RAID 1 - and therefore yield single-disk-failure redundancy and a large spare volume to put the nightly backups on, with an offsite move done independently of the cPanel processes according to a schedule.
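    (For the offsite move I have in mind something like the following cron entry - the host, paths and timing are just placeholders, timed to run after the cPanel backup window:)

        # /etc/cron.d/offsite-backup - push the night's backups offsite over SSH
        30 5 * * * root rsync -a --delete /backup/ backup@offsite.example.com:/srv/backups/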

    By putting the server on the 144G RAID 10 I would get the best performance possible with *some* redundancy, while leaving me with 2 x 72G SAS and 2 x 700G SATA as spare drives at the DC for hot swapping.
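    For the record, if this were built with Linux software RAID rather than the DL360's Smart Array controller, the layout above would look something like this in mdadm (device names hypothetical; just a sketch):

        # RAID 10 across the four 72G SAS drives (bays 1-4) -> ~144G for /
        mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
        # RAID 1 mirror across the two 700G SATA drives (bays 5-6) -> backup volume
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde /dev/sdf
        mkfs.ext4 /dev/md1 && mount /dev/md1 /backup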

    The use of the 90G SSDs is problematic: although they are likely to be faster, I could only implement them as JBOD or RAID 0, which offers no robust risk mitigation. As they also tend to fail sooner, I am not confident that, without say at least four of them and a spare, they would be a good solution.

    What do you guys think?

    Part of the mix is to ensure that the databases all reside on the fastest disks, so keeping the main / on the 72G 10K SAS disks probably gives me better performance than any other option.

    I could go with 6 x 72G 10K SAS and have no spares, but RAID 10 doesn't gain much in an array that large.
    Or I could go with 4 x 700G SATA and accept the lower speed in exchange for 1.4T of storage for the main system, and put the databases volume on a mirror pair of 72G SAS - but a mirror would not be as beneficial performance-wise as RAID 0, so the best use of the faster SAS drives is RAID 10.
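    (If I went the SAS-mirror-for-databases route, relocating the MySQL data directory onto that volume would be something like this sketch - the mount point is hypothetical:)

        service mysql stop                           # cPanel's MySQL init script
        rsync -a /var/lib/mysql/ /mnt/sasdb/mysql/
        # then in /etc/my.cnf, under [mysqld]:
        #   datadir=/mnt/sasdb/mysql
        service mysql start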

    Is my thinking straight on my limited options here guys?

    Best regards,
    Will
     
  2. stardotstar

    stardotstar Well-Known Member

    Joined:
    Sep 14, 2009
    Messages:
    68
    Likes Received:
    0
    Trophy Points:
    6
    The problem has changed slightly.

    I have acquired 4 x 90G Kingston SSDs and am now thinking of putting / on a mirrored pair of 90G SSDs (I figure I need at least two spares to safely use SSDs), then filling the other four bays with the 72G SAS drives in RAID 5, yielding 216G for /home.

    This would put the whole OS, including /var/lib/mysql, on SSD, which even as a mirror pair should still be much higher performance, and /home on the SAS - leveraging redundancy and space against speed (i.e. RAID 5 vs RAID 10 for that volume, RAID 10 providing only 144G).
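    In other words, the target layout would be roughly this (software-RAID device names used purely for illustration):

        # /dev/md0: RAID 1, 2 x 90G SSD  -> /      (OS incl. /var/lib/mysql), ~90G usable
        # /dev/md1: RAID 5, 4 x 72G SAS  -> /home  (n-1 disks of capacity),  ~216G usable
        mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[c-f]
        mkfs.ext4 /dev/md1 && mount /dev/md1 /home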

    Will
     
  3. cPanelTristan

    cPanelTristan Quality Assurance Analyst
    Staff Member

    Joined:
    Oct 2, 2010
    Messages:
    7,623
    Likes Received:
    21
    Trophy Points:
    38
    Location:
    somewhere over the rainbow
    cPanel Access Level:
    Root Administrator
    Hello Will,

    You may want to test out a few configurations before moving to the machine, to be sure of the performance. There are Apache and MySQL tests you can perform on sample or test data: Apache's tool is ab, and MySQL's is called mysqlslap.

    I did a forum thread on using Apache ab not that long ago:

    http://forums.cpanel.net/f402/using-apache-ab-benchmarking-gnuplot-graphing-275542.html
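    For a quick start, the invocations look something like this (the URL, credentials and numbers are just placeholders to adapt):

        # 1000 requests, 10 at a time, against a representative page
        ab -n 1000 -c 10 http://www.example.com/index.php
        # hammer MySQL with 50 parallel clients using auto-generated queries
        mysqlslap --user=root -p --concurrency=50 --iterations=10 --auto-generate-sql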

    I'd actually suggest setting up a MySQL server on a machine separate from the main one, if you have the equipment to do so. That way you can configure the remote server with the best options for MySQL and take it out of the equation when tuning the webserver itself.
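    The MySQL side of that is mostly just letting it listen on the LAN and granting the web server access - a rough sketch, with hypothetical IPs, database and user names:

        # on the database box, in /etc/my.cnf under [mysqld]:
        #   bind-address=10.0.0.2
        # then allow the web server (10.0.0.1) to connect:
        mysql -u root -p -e "GRANT ALL ON appdb.* TO 'appuser'@'10.0.0.1' IDENTIFIED BY 'secret';"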

    Thanks!
     
  4. stardotstar

    stardotstar Well-Known Member

    Joined:
    Sep 14, 2009
    Messages:
    68
    Likes Received:
    0
    Trophy Points:
    6
    Thank you Tristan. I wish I could host a remote MySQL server, but I'm a pretty lean and small operation, and my decisions really come down to a one-size-fits-all LAMP server with a bias towards db-intensive hosting - hence my attraction to the SSDs. Of course the answer to any performance optimisation question is always "it depends", but based on my experience the SSD RAID 1 plus SAS RAID 5 plan is looking sound to me. I'm just wondering if I have missed anything glaring or failed to consider something. Your suggestions are great and I am going to try to get the MySQL benchmark going now - I couldn't use sysbench because my Perl is not compiled with threads or something.
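    (For what it's worth, the quick check for whether Perl was built with thread support is:)

        # prints usethreads='define' if Perl was built with ithreads, usethreads='undef' if not
        perl -V:usethreads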

    Best regards
    Will
     