RAID 10 for 4 SSDs, complete overkill?

Deathmage Banned Posts: 2,496
Hey guys,

So I'm wanting to upgrade my PC at home, which has a single Samsung 840 EVO 250GB SSD in it, and I'd like to purchase three more of them and make a RAID 10. Curious if anyone here has done a RAID 10 with super-fast SSDs like the EVOs, and I'm wondering whether I'll hit a bottleneck on the SATA III interface.

I recently upgraded my mobo (an ASUS ROG Rampage LGA 2011), my CPU, and my memory. The CPU as well as the memory have water blocks on them: a 16 GB DIMM in each slot for 64 GB total, and an overclocked Intel i7 @ 4.6 GHz cooled with a Corsair water cooling system.

My two Titans are still good cards, so no need to upgrade them yet. But as mentioned, I really want to dabble in RAID 10 for redundancy and speed, though SSDs may be so fast that the bottleneck ends up on the mobo.
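Here's the rough math I've been doing so far, just a back-of-the-envelope sketch in Python; the per-drive speeds and the shared controller ceiling are assumed numbers for illustration, not benchmarks:

```python
# Back-of-the-envelope: will 4 SSDs in RAID 10 hit a SATA/chipset wall?
# All numbers are assumptions for illustration, not benchmarks.
PER_DRIVE_READ_MBS = 500    # assumed 840 EVO sequential read
PER_DRIVE_WRITE_MBS = 450   # assumed 840 EVO sequential write
CONTROLLER_CAP_MBS = 1000   # assumed shared ceiling on the chipset SATA controller

def raid10_estimate(n_drives: int) -> tuple[float, float]:
    """Best-case sequential MB/s for an n-drive RAID 10 (n even).

    Reads can be served from every drive; writes land on both halves
    of each mirror, so only n/2 drives' worth of write bandwidth helps.
    """
    raw_read = n_drives * PER_DRIVE_READ_MBS
    raw_write = (n_drives // 2) * PER_DRIVE_WRITE_MBS
    return min(raw_read, CONTROLLER_CAP_MBS), min(raw_write, CONTROLLER_CAP_MBS)

read_mbs, write_mbs = raid10_estimate(4)
print(f"4-drive RAID 10: ~{read_mbs} MB/s read, ~{write_mbs} MB/s write")
# Reads hit the assumed controller ceiling long before the drives run out.
```

If that shared ceiling is anywhere near real, the stripes mostly buy write speed and redundancy rather than 2 GB/s reads, which is exactly what I'm trying to confirm.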

Comments

  • broli720 Member Posts: 394 ■■■■□□□□□□
    Just curious why you need a setup like that at all.
  • Mike7 Member Posts: 1,107 ■■■■□□□□□□
    TRIM support may be an issue with SSD RAID, so your SSDs may get slower over time.
    I am only aware of TRIM support for RAID 0. Not sure though.
  • Deathmage Banned Posts: 2,496
    broli720 wrote: »
    Just curious why you need a setup like that at all.

    Everyone has hobbies. I haven't built a new rig in 5 years. My last rig was an i7-2600K with three 6970s in CrossFire. It's lasted a while; this is a refresh.

    But I also want redundancy.
  • OctalDump Member Posts: 1,722
    Admin magazine recently had a whole article on SSD and RAID. It looks like RAID 10 performance is similar to a single SSD in terms of latency, which is interesting in itself.
    They also recommend if you are using hardware RAID, that you disable read ahead and write cache, and ensure that the hardware RAID is good for SSDs.
    Based on their testing, they recommend RAID-5 for read-heavy loads and RAID-10 for more balanced read/write loads; RAID-1 also improves read performance by about 40%.

    The only way you will know for sure is to try it with what you have. There are too many variables, both in hardware, and in the workloads, to make good recommendations otherwise.
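    As a rough illustration of those tradeoffs, here's a toy Python sketch of the idealized scaling rules of thumb; these are textbook multipliers, not the article's measurements (real gains are smaller, e.g. the ~40% RAID-1 read figure above):

    ```python
    # Idealized throughput scaling for n identical drives, relative to
    # one drive. Real controllers and workloads land well below these.
    def raid_scaling(level: str, n: int) -> tuple[float, float, float]:
        """Return (read_x, write_x, usable_capacity_fraction)."""
        if level == "RAID0":
            return n, n, 1.0
        if level == "RAID1":   # n-way mirror
            return n, 1.0, 1.0 / n
        if level == "RAID5":   # distributed parity; ~4 I/Os per small write
            return n - 1, (n - 1) / 4, (n - 1) / n
        if level == "RAID10":  # stripe across n/2 mirrored pairs
            return n, n / 2, 0.5
        raise ValueError(f"unknown level: {level}")

    for level in ("RAID0", "RAID1", "RAID5", "RAID10"):
        r, w, cap = raid_scaling(level, 4)
        print(f"{level:<7} reads ~{r:.1f}x  writes ~{w:.1f}x  usable {cap:.0%}")
    ```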
    2017 Goals - Something Cisco, Something Linux, Agile PM
  • Nafe92014 Member Posts: 279 ■■■□□□□□□□
    Lol, just came across this as my instructor walked by. I asked him and he laughed, "Because he can".
    Certification Goals 2020: CCNA, Security+

    "You have enemies? Good, that means you've stood up for something, sometime in your life." ~Winston S. Churchill
  • Ahriakin Member Posts: 1,799 ■■■■■■■■□□
    Because of the interface you'll max out at ~1 GB/s (benchmarked), just short of two drives combined. Real world, as mentioned above, you're looking at maybe a 30-40% speed improvement for average access. Games etc. won't see a huge advantage over a single drive, maybe 10-20%, because of the other tasks they get bogged down with besides pure IO (even on level loading).

    I have two different arrays in the same machine: RAID 0 for the OS/games (I image the OS and back up criticals elsewhere, so I'm not worried about redundancy) and RAID 5 for my audio software and tools, since they literally take days to reinstall. Sequential write on the R5 array is about half that of the R0, write access time is about 8x slower, and sequential read throughput and access time are close to parity.
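    If you want to sanity-check your own arrays, a crude timer like the sketch below gives ballpark sequential numbers; the path and sizes are placeholders, and a real benchmark (fio, CrystalDiskMark) is what you'd actually trust:

    ```python
    # Crude sequential write/read timer. Rough numbers only.
    import os
    import time

    PATH = "testfile.bin"    # put this on the array you want to test
    BLOCK = 4 * 1024 * 1024  # 4 MiB per write
    COUNT = 256              # ~1 GiB total

    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(PATH, "wb", buffering=0) as f:
        for _ in range(COUNT):
            f.write(buf)
        os.fsync(f.fileno())  # push data to the drives, not just the cache
    elapsed = time.perf_counter() - start
    print(f"write: {BLOCK * COUNT / elapsed / 1e6:.0f} MB/s")

    # Caveat: the read-back below may be served from the OS page cache;
    # use a file larger than RAM (or drop caches) for honest read numbers.
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    elapsed = time.perf_counter() - start
    print(f"read:  {BLOCK * COUNT / elapsed / 1e6:.0f} MB/s")
    os.remove(PATH)
    ```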

    My aim at some point is to be able to load the Future :)
    We responded to the Year 2000 issue with "Y2K" solutions...isn't this the kind of thinking that got us into trouble in the first place?
  • Deathmage Banned Posts: 2,496
    Basically RAID 10 is the best of both worlds.

    RAID 0 would be fast but has no redundancy.
    RAID 1 would be fast on reads and slow on writes, but has redundancy.
    RAID 5 would be fast on reads, but with distributed parity the writes take a hit; it tolerates one drive failure.

    RAID 1+0, or 10, mirrors pairs of drives for the redundancy, then stripes across the mirrored pairs RAID 0-style for the write performance. If I had 8 SATA III ports I'd do a 6-drive RAID 10 (three mirrored pairs in one stripe), but I need two SATA III ports for a RAID 1 WD Red 4 TB bulk storage array.
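    To make that concrete, here's a toy sketch of how a 4-drive RAID 10 places blocks; it's illustrative only, since real controllers pick their own layouts:

    ```python
    # Toy RAID 10 placement: stripe across mirrored pairs, so every
    # logical block lives on exactly two drives.
    def raid10_layout(n_blocks: int, n_drives: int = 4) -> dict[int, list[int]]:
        """Map each logical block to the two drives holding its copies."""
        assert n_drives % 2 == 0, "RAID 10 needs an even drive count"
        pairs = n_drives // 2
        layout = {}
        for block in range(n_blocks):
            pair = block % pairs  # striping picks the mirrored pair
            layout[block] = [2 * pair, 2 * pair + 1]
        return layout

    for block, drives in raid10_layout(6).items():
        print(f"block {block} -> drives {drives}")
    # Any one drive can die and every block keeps a live copy; losing
    # both drives of the same pair loses the array.
    ```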
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Actually my Synology Slim has 4x 480GB SSDs in RAID 10 ... simply because it is my main VMware storage and 1TB is good enough as it is; may as well go with more oomph :)
    My own knowledge base made public: http://open902.com :p
  • tpatt100 Member Posts: 2,991 ■■■■■■■■■□
    Just seems like it's adding additional points of failure for troubleshooting for gains that won't be noticed.
  • Deathmage Banned Posts: 2,496
    Honestly, I want a RAID 10 for World of Warcraft; it's now pushing a 67 GB installation, and I never want to re-download or reinstall it again. Plus I hate those load screens :P

    I wonder how many people will be like ...wha..da..F*ck ;)
  • techfiend Member Posts: 1,481 ■■■■□□□□□□
    It's not a bad setup but might be bad timing with M.2 about to explode.
    2018 AWS Solutions Architect - Associate (Apr) 2017 VCAP6-DCV Deploy (Oct) 2016 Storage+ (Jan)
    2015 Start WGU (Feb) Net+ (Feb) Sec+ (Mar) Project+ (Apr) Other WGU (Jun) CCENT (Jul) CCNA (Aug) CCNA Security (Aug) MCP 2012 (Sep) MCSA 2012 (Oct) Linux+ (Nov) Capstone/BS (Nov) VCP6-DCV (Dec) ITILF (Dec)