The Community Forums

Interact with an entire community of cPanel & WHM users!

48 Gig of Ram and SSD Launch on CentOS 6.x

Discussion in 'Workarounds and Optimization' started by gunmuse, Nov 28, 2012.

  1. gunmuse

    gunmuse Well-Known Member

    Joined:
    Jul 3, 2003
    Messages:
    98
    Likes Received:
    0
    Trophy Points:
    6
    Location:
    New Mexico
    I am waiting on a pair of Xeons that seem to be lost in the mail, but I thought I would start this post and get some feedback (debate, probably) going on upscaling a 2009 server into a cost-effective, high-speed server.

    What I've got:
    IBM x3650
    2 x X5450 quad-core Xeons with 12 MB cache
    48 GB of fully buffered RAM (not sure of the benefit of fully buffered RAM)
    6 x 120 GB SSDs, less than $100 each, with nice 3.5" to 2.5" converter cases
    CentOS 6.x
    cPanel

    This has been a fairly cost-effective build. These servers are being traded out of large farms and businesses, so you can pick up one of these $5,000 machines, stripped, for a few hundred dollars.
    Maxing out the server's capabilities cost just under $2k. The goal is to keep it running and limit SSD replacement to two a year. One SSD is equal to eight 10k RPM SCSIs in speed, so I am going to RAID 1+0 these drives, but in 3 pairs instead of 2 (on the old SCSI machine I striped across 3 drives for speed in the RAID array). So my redundancy should be 3 arrays of 240 GB each instead of 2 at 360 GB, as I don't believe I would see any speed benefit from 3 SSDs striped; the RAID card technology isn't really up to SSD speeds. I started small so that when I have to swap out a drive I can hot-swap a bigger one without messing up the arrays, and add gigs as the cost-effectiveness of SSDs continues to improve.

    In the past it seems I set up CentOS wrong, according to cPanel support. I have since found a how-to for setting up CentOS; I will have to relocate it, and maybe we could fold it into this how-to.

    What I think needs to be different.
    SWAP.... I have murdered many an SSD because swap is a nightmare on a server, so I have overkilled the memory. We currently hardly use 12 GB of memory. But this isn't a rotary-drive system: caching and swap have benefits, but they should be fairly limited, and all that cache should be MEMORY ONLY. So how we do that during setup is important. I remember that in 2007 we tried to use SSDs when they first became available for servers, and if you tried to turn off swap, Red Hat would break; it wasn't really capable of it.
    What I do now is use the swapoff -a command over SSH to force everything in swap back into memory, then turn swap back on if we get a Google crawl that is out of control (quick sketch below).
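    For reference, here is a minimal sketch of that routine, assuming a stock CentOS 6 box (the swappiness value is just a common starting point, not gospel):

        # Force everything currently in swap back into RAM (run as root).
        swapoff -a

        # Discourage the kernel from swapping while swap is enabled.
        # CentOS 6 defaults to vm.swappiness=60; 1-10 is typical for a RAM-heavy box.
        sysctl -w vm.swappiness=1
        echo "vm.swappiness = 1" >> /etc/sysctl.conf

        # Re-enable swap temporarily if a runaway crawl threatens to exhaust memory.
        swapon -a
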
    Here is a snapshot of AWStats showing what an out-of-control crawl looks like, on a site with one page. COUNT THEM, ONE.
    GoogleGoneWild.jpg

    Sharefeed.com is the site; our "search" was launched in 2003, Google went public in 2004, and we have been harassed by them ever since. THUS our passion for fast page-load speed: we do have sites on the server with actual pages, and Google itself will open all million virtual pages every week if it has time. That's why a hiccup or error in PHP, or Apache dumping errors because of a header response, can pile up 60 GB of logs in just a few days.

    HUGE MySQL databases. During setup the default /var always seems to cause a problem, and with so many virtual sites (WP, forums, etc.) MySQL should reside somewhere it can grow, since it IS the website itself these days, not the /home page loads. So how do we put MySQL somewhere like the /home directory and not have cPanel break when it does updates? (Hint: this should already be a CPANEL OPTION.)
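    One approach I have seen for this is to move the data directory and leave a symlink behind, so anything hard-coded to /var/lib/mysql still resolves. I am not claiming this is the cPanel-blessed method, just a sketch of the usual datadir move, assuming the stock /var/lib/mysql location:

        # Move the MySQL data directory onto the /home filesystem and leave
        # a symlink so paths hard-coded to /var/lib/mysql keep working.
        service mysql stop        # the init script may be named mysql or mysqld
        mv /var/lib/mysql /home/mysql
        ln -s /home/mysql /var/lib/mysql
        chown -R mysql:mysql /home/mysql

        # Also point the server at the new location in /etc/my.cnf:
        #   [mysqld]
        #   datadir=/home/mysql
        service mysql start
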

    LOGGING. This is a nightmare of epic proportions, and it is a constant-write situation.
    What is missing from the industry is "real-time" or "buffered" logs. I have thrown tons of memory at this thing; logs that write to the drive telling you WHY it locked up are GREAT, but informational PHP errors and folder errors could be stored in memory and moved to disk hourly, daily, whatever. That would prevent hardware abuse. As it stands, my only option seems to be turning logging to CRITICAL or OFF, to keep the permanent development/troubleshooting feature called logging from causing the very failures we would have to troubleshoot.
    Also, let's note that we had cPanel updates that sent error_log nuts and filled 68 GB of drives in just 48 hours. Filling my SSDs like that would be a full server death.
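    Roughly what I am describing can be faked today with a RAM-backed tmpfs for the hottest log directory, flushed to disk on a schedule so the SSDs only see a few big sequential writes. A rough sketch (the paths and size are just examples, and tmpfs is volatile, so only put logs there that you can afford to lose on a reboot):

        # RAM-backed filesystem for the busiest logs.
        mkdir -p /var/log/ramlog
        mount -t tmpfs -o size=2g tmpfs /var/log/ramlog

        # Make it permanent with an /etc/fstab entry:
        #   tmpfs  /var/log/ramlog  tmpfs  defaults,size=2g  0 0

        # Hourly cron job that syncs the in-memory logs down to the SSD in one pass.
        mkdir -p /var/log/archive
        echo '0 * * * * root rsync -a /var/log/ramlog/ /var/log/archive/' > /etc/cron.d/flush-ramlog
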

    But mainly, while I have 2 GB databases, our servers tend to handle connections, and that is my main concern. I want to start as many Apache processes as possible and serve 3,000 MySQL connections if I can. My philosophy has been that I would rather have an IDLE Apache process than a spawning one. So as part of my configuration I want to maximize the number of visitors; our pages are light, but there are millions of them. Our old box, a RAID 1+0 SCSI setup with 2 dual-cores and 16 GB of RAM, effectively delivers 200k+ per hour with the server load under 2 most of the time.
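    To put rough numbers on that philosophy, this is the sort of thing I mean. The values are illustrative, not a recommendation, and on a cPanel box the Apache directives usually belong in an include file rather than httpd.conf itself, since EasyApache rewrites that:

        # Illustrative Apache prefork settings: plenty of idle spares so Apache
        # is not constantly forking under load (tune to RAM and per-process size).
        #   <IfModule prefork.c>
        #       StartServers         50
        #       MinSpareServers      50
        #       MaxSpareServers     100
        #       ServerLimit        1024
        #       MaxClients         1024
        #       MaxRequestsPerChild 10000
        #   </IfModule>
        #
        # Matching MySQL connection ceiling in /etc/my.cnf:
        #   [mysqld]
        #   max_connections = 3000
        #
        # Sanity-check the average memory per Apache child before raising MaxClients:
        ps -ylC httpd --sort=rss | awk '$8 ~ /^[0-9]+$/ {sum+=$8; n++} END {print sum/n/1024, "MB avg per httpd child"}'
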

    So setting up the kernel is important, and a mystery to most of us. I copy and paste suggestions, as I only do kernel adjustments once every half a decade.
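    Since I copy and paste these anyway, here is the kind of block I mean for a connection-heavy CentOS 6 box. Treat the values as common starting points rather than gospel, and persist whatever you keep in /etc/sysctl.conf:

        # Widen the local port range and connection backlogs for lots of
        # short-lived HTTP and MySQL connections.
        sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        sysctl -w net.core.somaxconn=4096
        sysctl -w net.ipv4.tcp_max_syn_backlog=8192
        sysctl -w net.ipv4.tcp_fin_timeout=15
        sysctl -w net.ipv4.tcp_tw_reuse=1

        # Keep dirty pages from piling up into one huge write burst on the SSDs.
        sysctl -w vm.dirty_background_ratio=5
        sysctl -w vm.dirty_ratio=15
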

    RAMDRIVE. As part of the setup, a 5 GB RAM drive would be nice for session IDs and on-boot storage of flat-file DBs (my IP-to-lat/long database is a 50 MB .bin file, and I load it into RAM so that searching it is as fast as possible). Session IDs in RAM have been a huge savings even against my SCSI-drive thrashing. But it has always been weird tinkering, and the how-tos I read in 2005 were written in 1998, so I am assuming there may just be a better way of avoiding this write.
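    The tmpfs way of doing this looks roughly like the sketch below. The 5 GB size, the mount point, and the ip2latlong.bin filename are just examples, and the php.ini location varies by build:

        # RAM-backed mount for PHP session files and the flat-file databases.
        mkdir -p /ramdisk
        mount -t tmpfs -o size=5g tmpfs /ramdisk
        mkdir -p /ramdisk/sessions
        chmod 1733 /ramdisk/sessions

        # /etc/fstab entry so it comes back on boot:
        #   tmpfs  /ramdisk  tmpfs  defaults,size=5g  0 0

        # Point PHP at it in php.ini:
        #   session.save_path = /ramdisk/sessions

        # Copy the flat-file databases in at boot, e.g. from /etc/rc.local:
        #   cp /home/data/ip2latlong.bin /ramdisk/
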

    As a point about my business: to get the most out of any computer, I AVOID any conversation with the hard drives if possible. It has been my experience that we can get 2-4 times the expected throughput with this approach; reads are fast, writes put everything on hold. To that point:

    SSDs should be able to read and write at damn near the same time. With 3 arrays, shouldn't one array be able to write it down and the others "catch up" after they finish their reads, or does the write have to be buffered to all 3 arrays at the same time? It would seem that we would want, say, array 1 to have write priority and arrays 2 and 3 read priority. So if a write comes in, array 1 jumps that write to the front of the line while 2 and 3 finish the MySQL or page requests, and the write is "queued" for them. If a crash happens, all arrays would sync any changed data from array 1. I don't even know if this is possible; maybe some RAID engineer will see this and have a cool idea for his boss on Monday :)

    For safety, since my arrays are only 2 drives each: if array 1 had a drive failure, array 2 would become write priority until array 1 is repaired.

    So my ultimate goal with this thread is to build a how-to for a CentOS 6.x hosting platform using SSDs, without the webhost losing money by providing speed. I understand you would need to be picky about the type of client, but having one box in a fleet of servers where you put good customers who just serve pages, so they can get better search ranks because of your tremendous page-response speed, is surely something a niche of clients would pay extra for.
     
    #1 gunmuse, Nov 28, 2012
    Last edited: Nov 28, 2012
  2. electric

    electric Well-Known Member

    Joined:
    Nov 5, 2001
    Messages:
    697
    Likes Received:
    1
    Trophy Points:
    18
    I am looking forward to seeing some responses to this. We are moving towards an "all SSD" server config, since the cost is pretty much the same as a spinning SAS array with SSD cache. Our latest server build has 4 x 256 GB SSDs in a RAID 10 config.

    I'm not a technical person, so I can't provide any thoughts on the issues and questions you mentioned.

    I do know that you can put /var anywhere, though. I've thought that maybe an idea might be to put multiple RAID configs into the server using different-quality drives. So RAID 10 config 1 might have the cheaper drives, used for the /home folders. Then RAID 10 config 2, with better-quality drives, would be used for /var (MySQL) and the logs, which are constantly writing... (rough idea sketched below)
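    As I understand it, the layout would look something like this; the device names are placeholders, since I have not actually built it:

        # Two hardware RAID 10 volumes exposed to the OS as separate block devices:
        #   /dev/sda -> RAID 10 of cheaper SSDs  -> /home
        #   /dev/sdb -> RAID 10 of better SSDs   -> /var (MySQL + logs)
        #
        # /etc/fstab entries after partitioning and formatting:
        #   /dev/sda1  /home  ext4  defaults,noatime  1 2
        #   /dev/sdb1  /var   ext4  defaults,noatime  1 2

        # Quick check of what is mounted where.
        df -hT /home /var
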

    But then... you would need a better RAID card, and possibly a bigger server chassis.. which means more $$, so why not just get better SSD cards to begin with?
     
  3. gunmuse

    gunmuse Well-Known Member

    Joined:
    Jul 3, 2003
    Messages:
    98
    Likes Received:
    0
    Trophy Points:
    6
    Location:
    New Mexico
    I finally have the hardware together on this box. There are tons of little things when you take a box to max performance, which is why you don't see it done a lot.

    Putting in dual Xeons means you have to add a power modulator; that may have been common knowledge in 2009, and it would have been obvious if I had read the instruction manual for the server.

    Also, you can't go to max RAM until you put both CPUs in the server, but I am now at a reported 49 GB of RAM. Fully buffered 4 GB sticks can run up to $150 apiece, so be warned about the up-front cost of a setup like this. It is better to look for a DDR3 machine that costs a little more than to put max DDR2 into an older server.

    I used 3.5" to 2.5" ICY DOCK adapters to put my 6 SSDs in the machine. I went with 120 GB SSDs; they are cost effective, and they leave me room to grow the RAID array upward by replacing a failed drive with a bigger one while in production.

    I intend on making 3 arrays of 2 drives each, versus 2 arrays of 3 drives, in a RAID 1+0 layout. Since we already have plenty of speed in a single drive, the increased failover is the appeal. A software-RAID illustration of the layout is below.
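    For anyone following along with software RAID instead of a hardware controller, the same layout in mdadm terms would be three independent mirrored pairs (device names are placeholders):

        # Three independent RAID 1 mirrors, each built from two SSDs.
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde /dev/sdf

        # Watch resync progress and check array health.
        cat /proc/mdstat
        mdadm --detail /dev/md0
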
     