SSD vs RAIDed SATA

raskali Member Posts: 16 ■□□□□□□□□□
Revamping my home lab and need some advice. Currently running an "All in One" with Workstation. Need advice on the best and most affordable route to go for storage. Currently considering either RAIDed SATA 6 Gb/s disks or a single SSD. My concern with the SSD is that it's still a single spindle, so I will still have some bottlenecks, but I will gain greater IOPS compared to the RAIDed route, which would give me more spindles and more space but far fewer IOPS, though space is not a concern at the moment. I don't plan on running more than 4 VMs at a time. Any ideas or recommendations will really be appreciated.

Bless

Comments

  • slinuxuzer Member Posts: 665 ■■■■□□□□□□
    In a lab where you're not running actual workloads, I'd go with whatever gives you the most space. I just purchased a ProLiant ML150 G6 as a lab server and am going to use 2 x 250 GB SATA drives. I considered going with an SSD, but the cost is so high; if SATA can't do what I need, I'll go with SSD.

    I bought the server and a second processor from Newegg, plus 48 GB of PC3-10600 UDIMMs ($700 for the memory); total cost was $1,500.


    I'll try to post back when I get everything going, like a week from now, and let you know what performance is like.
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Single SSD vs. Raided SATA is a bit of a weird comparison tbh.

    Raid vs. Non-Raid or SSD vs. SATA makes a bit more sense.

    But if you intend to run VMs, then I would not go the SSD route, as writes are what kill SSDs, unless you want to buy "proper" SSDs, which are so expensive that it isn't worth it unless you need the IOPS...
    My own knowledge base made public: http://open902.com :p
  • Everyone Member Posts: 1,661
    For my all-in-one I have my VMs on a single 1 TB 7200 RPM SATA 3 Gb/s drive. Firewall and web server VMs run 24/7, and I'll start up between 2 and 6 lab VMs on top of that every now and then. Never noticed any disk performance issues.

    Storage is expensive right now no matter what way you go.
  • Tackle Member Posts: 534
    SSDs are $1+ per GB ($2+ per GB once you get past 120 GB drives). Even with the price hike on HDDs, you're still paying quite a bit less per GB with them ($0.50 or less).

    Go with RAID. No one ever complains about having too much storage. And you may be able to achieve read/write speeds comparable to a single SSD.
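
    Running the per-GB numbers quoted above (early-2012, post-flood pricing; the drive sizes are just examples):

        # $/GB at the prices quoted above: SSDs at $1+/GB, HDDs at
        # $0.50/GB or less even after the flood-driven price hike.
        ssd_per_gb = 1.00  # low end quoted for SSDs
        hdd_per_gb = 0.50  # high end quoted for HDDs

        for size_gb in (120, 250, 500, 1000):
            print(f"{size_gb:>4} GB: SSD ~${size_gb * ssd_per_gb:.0f}, "
                  f"HDD ~${size_gb * hdd_per_gb:.0f}")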
  • blargoe Member Posts: 4,174 ■■■■■■■■■□
    I agree with the above; unless I needed an SSD for some other application at home, I wouldn't bother considering it for labbing.
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • ptilsen Member Posts: 2,835 ■■■■■■■■■■
    raskali wrote: »
    SSD is that it's still a single spindle
    It's not a spindle at all.

    In terms of raw throughput, enough RAIDed SATA disks will outperform any SSD. In terms of random access, the SSD will outperform almost any number of hard disks under the kind of load 4 VMs will put on it.

    The VMs will boot and shut down quicker on the SSD, and you will notice less impact from performing multiple concurrent actions.

    But honestly, for labbing, you probably aren't doing a lot of concurrent actions. You probably aren't going to reboot more than one or two VMs at a time. A single SATA disk is sufficient, disks in RAID 1 or RAID 10 are better, and an SSD will probably have the best overall performance for such a small lab. It's really a question of how much you want to spend on something just to use as a lab.
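
    To put rough numbers on the random-access point, here's a back-of-the-envelope sketch (the seek time and IOPS figures are typical assumptions, not measurements from this thread):

        # Each random I/O on a spinning disk pays an average seek plus half
        # a rotation of latency (assumed, era-typical figures).
        avg_seek_ms = 8.5                       # typical 7200 RPM average seek
        half_rotation_ms = 0.5 * 60_000 / 7200  # ~4.17 ms rotational latency

        hdd_iops = 1000 / (avg_seek_ms + half_rotation_ms)  # ~79 IOPS
        ssd_iops = 20_000  # conservative figure for a consumer SSD

        print(f"~{hdd_iops:.0f} random IOPS per spindle vs ~{ssd_iops} for an SSD")
        print(f"spindles needed to match the SSD: ~{ssd_iops / hdd_iops:.0f}")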
    Working B.S., Computer Science
    Complete: 55/120 credits SPAN 201, LIT 100, ETHS 200, AP Lang, MATH 120, WRIT 231, ICS 140, MATH 215, ECON 202, ECON 201, ICS 141, MATH 210, LING 111, ICS 240
    In progress: CLEP US GOV,
    Next up: MATH 211, ECON 352, ICS 340
  • kriscamaro68 Member Posts: 1,186 ■■■■■■■□□□
    Having used SSDs for some of my VMs in my home lab, I would continue to use them. I had 3 WD Blacks in RAID 0 and they were great for VMs as well, but it took 3 of them to equal 1 SSD. Yes, I got more storage, but for storage I just use a NAS anyway. If you can afford it, the SSDs are nice because they are quicker to respond, and you can get 120 GB for $130, which would allow for 4 VMs at about 27 GB apiece if used in a Windows environment. With the price of HDDs these days, I would rather just get SSDs.

    Also, like ptilsen said, there is no spindle in an SSD. Just think of it as a ton of USB flash drives in RAID 0; that is basically what an SSD is, but you're using a SATA connection instead of USB.
  • raskali Member Posts: 16 ■□□□□□□□□□
    I guess they both have their pros and cons, and it comes down to each his own. Like some of you mentioned, the price of storage has gone up, and for my mileage, space is not a concern. My main concern is bogging down performance when running concurrent tasks and hitting the disk hard. I would rather lean more towards performance than space. At the moment on Newegg, for around the same price, I could either go the SSD route or get 2 RAIDed SATA drives for approximately the same amount of GB, assuming I went the cheaper MLC SSD route. That's my dilemma. For those of you that have tried and tested or are currently using both, is there that much of a noticeable difference with either?

    @kriscamaro68 - looks like you've been there, done that. Is there much of a difference that is worth dishing out the extra $$$ for SSDs?

    Bless
  • MentholMoose Member Posts: 1,525 ■■■■■■■■□□
    With only four VMs, the disk performance probably won't matter. Get whatever meets your requirements. For example, if you don't need a lot of disk space and want better performance, get an SSD. If you want to use the disk for labs as well as storing movies, backups, or other stuff, get a regular disk. While you can build a RAID array with regular disks that beats the sequential I/O performance of an SSD, you can't realistically come close to matching the random I/O of an SSD, which is what really matters when running VMs. Just take a look at this comparison between the Agility 3 (a budget-oriented consumer SSD) and the VelociRaptor... for random I/O the comparison is a joke; the Agility is literally 50 to 100 times faster:
    AnandTech - Bench - SSD

    For labbing in general, when specifically comparing an SSD to a RAID array of regular disks, I would recommend the SSD unless you have a special use case (e.g. the lab VMs need particularly large disks for some reason) or want to use the array for something else. Cost-wise, a single SSD suitable for labbing can be the same or even cheaper than multiple regular disks and a RAID controller. Regular disks will give you more space, but for labbing purposes, you do not usually benefit from a lot of space. For example, while you could store hundreds of large virtual disks with two 2 TB disks in RAID 0 or 1, in most cases using those disks you could run barely 10 VMs simultaneously (maybe a few more or a few less, depending on the VM workload, and your patience :)).
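
    A crude way to see where that VM ceiling comes from (the per-VM demand and device IOPS below are assumed, illustrative figures):

        # Divide a device's random-IOPS budget by an assumed steady-state
        # demand per light lab VM (both numbers are rough assumptions).
        iops_per_vm = 15  # assumed average demand of a light lab VM

        devices = {
            "two 2 TB disks in RAID 0/1": 150,  # ~2 x 75 random IOPS
            "budget consumer SSD": 20_000,      # Agility 3 class
        }

        for name, iops in devices.items():
            print(f"{name}: ~{iops // iops_per_vm} concurrent VMs")

        # The SSD figure (~1333) just means the disk is no longer the
        # bottleneck; RAM and CPU run out long before the IOPS do.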

    I still use large RAID arrays along with large regular disks since they are great for storing large amounts of static data, like backups, movies, ISOs, etc., but I put VMs on SSDs in most cases. Since switching to SSDs, I simply don't have to worry about disk performance anymore. Booting an entire lab environment (maybe a dozen or more VMs in some cases) at once is no problem; the VMs still boot quickly. I can run AV software and do Windows Update however I want with no impact on whatever else I'm doing.

    The biggest benefit I've found with using SSDs for labbing, which I didn't expect, is that I actually lab much more than I used to. For example, if I'm reading something (in a book, blog, or forum post for that matter) and come across something I don't quite understand or just find interesting and want to try, I can lab it quickly and easily. Before, I would think twice about trying it since deploying or even just booting VMs could take a (relatively) long time. I've built a variety of sysprep'd templates and I can boot 10 of those at once if necessary and have a huge lab up and running in two minutes. I have significant time constraints these days and SSDs make a big difference for me. I will not be going back to regular disks for running VMs if at all possible.
    MentholMoose
    MCSA 2003, LFCS, LFCE (expired), VCP6-DCV
  • raskali Member Posts: 16 ■□□□□□□□□□
    Thanks MentholMoose, you hit the nail right on the head for me. That's what I was thinking as well, but based on various comparisons I read on other forums, I was a bit up in the air and torn. You confirmed exactly what I was concerned about; I hate having to wait for VMs to load when reading on a topic and needing to lab it/visually test what I am reading at the time. I guess my mind is made up. SSD it is. Thanks everyone for your insight.

    JahBless & 1Luv
  • SteveLord Member Posts: 1,717
    SSD benchmarks plastered all over the web do not necessarily translate into real-world performance unless you're doing something that taps into it. Anyone who's used various brands extensively should know this. I recommend reading up on SSDs first (since you assumed they used spindles... wow) before doing anything.

    I've run VMs on both HDDs and SSDs, and SSDs are not worth the extra money out of your pocket just for labs. The money you save by not using them can be put toward memory, a better CPU, books, tests, etc. Speaking of which, a RAID controller? Another waste... so don't even factor that in with HDDs. The boot times on SSDs are quicker, but nothing dramatic like you're shaving years off your life by using them.
    WGU B.S.IT - 9/1/2015 >>> ???
  • MentholMoose Member Posts: 1,525 ■■■■■■■■□□
    SteveLord wrote: »
    SSD benchmarks plastered all over the web do not necessarily translate into real-world performance unless you're doing something that taps into it.
    Synthetic benchmarks do not always reflect real world performance, but in my experience VM workloads do benefit from the superior random I/O performance the benchmarks indicate. I don't particularly care about the time a single VM takes to boot (hence I did not mention the huge sequential I/O numbers in the benchmarks), rather I care about concurrent tasks, such as booting multiple VMs simultaneously (e.g. to rapidly get a lab up and running) and doing intensive tasks in multiple VMs, which depend on random I/O and where I see a dramatic difference.
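
    If you want to sanity-check the random I/O numbers on your own hardware instead of trusting published benchmarks, a quick Python timing loop like this works (a minimal sketch, not a rigorous benchmark: the file name is hypothetical, os.pread is POSIX-only, and the file must be much larger than RAM or the page cache will serve the reads):

        import os, random, time

        PATH = "testfile.bin"  # hypothetical: any large file on the disk under test
        BLOCK = 4096           # 4 KiB, the usual random-I/O block size
        READS = 2000

        size = os.path.getsize(PATH)
        fd = os.open(PATH, os.O_RDONLY)

        start = time.perf_counter()
        for _ in range(READS):
            # pick a random block-aligned offset and read 4 KiB from it
            offset = random.randrange(0, size - BLOCK)
            os.pread(fd, BLOCK, offset - offset % BLOCK)
        elapsed = time.perf_counter() - start
        os.close(fd)

        print(f"{READS / elapsed:.0f} random reads/s "
              f"({elapsed / READS * 1000:.2f} ms average)")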

    Even some specific tasks work much better with an SSD. For example, to learn App-V well you should sequence a variety of programs. In an interview or on the job, nobody will care that you can sequence calc.exe... that may suffice for the MCITP: VA, but employers will care about your experience with complex software suites that can be multiple gigabytes with many files. With a regular disk, it can take an hour to build a large package (versus minutes with an SSD), and the build takes a heavy toll on the disk, preventing you from doing much else with it (versus no collateral damage with an SSD). You might then find that the package doesn't even work, forcing you to do it over (or, more likely, give up and hope the interview doesn't go into too much depth), so an SSD is very beneficial when learning it.
    SteveLord wrote: »
    The money you save by not using them can be put toward memory, better CPU, books, tests, etc.
    Fair enough for books and tests, however I couldn't disagree more about spending more on RAM and CPU instead. What is the point of buying more RAM and faster CPUs when VMs will just be bottlenecked by the disk? I think that is the real waste of money. If you're using a regular disk for labs you might as well spend less on CPU and RAM since they won't do you much good after a certain point. Spending, say, $200 versus $150 on a CPU probably won't let you run even a single extra VM, whereas using that $50 to get an SSD instead of a regular disk can outright double VM density. RAM can possibly make more of a difference than CPU when it comes to VM density on a lab machine, but the 16GB DDR3 RAM kits going for about $50 nowadays are already enough for most lab machines.

    Anyway if standard disks meet your labbing needs by all means continue using them. I don't work for or have any financial interest in any SSD manufacturer so it doesn't matter to me what anyone buys. :D I've just found them quite useful and tried to explain why.
    MentholMoose
    MCSA 2003, LFCS, LFCE (expired), VCP6-DCV
  • SteveLord Member Posts: 1,717
    Great points indeed. They are useful. But compared to RAIDing HDDs (especially at their normal prices), and just for labs, it really doesn't provide a substantial return on investment. I use Intel, Crucial, and OCZ drives at work and play and love them all, but I eat that space up quickly... VMs or not. The speed is nice, but I would certainly survive if I didn't have them. ;)
    WGU B.S.IT - 9/1/2015 >>> ???
  • NISMO1968 Member Posts: 12 ■□□□□□□□□□
    raskali wrote: »
    Revamping my home lab and need some advice. Currently running an "All in One" with Workstation. Need advice on the best and most affordable route to go for storage. Currently considering either RAIDed SATA 6 Gb/s disks or a single SSD. My concern with the SSD is that it's still a single spindle, so I will still have some bottlenecks, but I will gain greater IOPS compared to the RAIDed route, which would give me more spindles and more space but far fewer IOPS, though space is not a concern at the moment. I don't plan on running more than 4 VMs at a time. Any ideas or recommendations will really be appreciated.

    Bless

    Don't calculate $/MB; calculate $/IOPS. On that basis, an SSD-based solution is the definite winner. Just make sure you have a good backup plan...
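
    For instance, a minimal sketch of that calculation (the prices and IOPS figures below are assumed, era-typical numbers, not vendor quotes):

        # $/GB vs $/IOPS for two assumed, era-typical drives.
        drives = {
            # name: (price_usd, capacity_gb, random_iops)
            "1 TB 7200 RPM SATA": (120, 1000, 80),
            "120 GB MLC SSD": (130, 120, 20_000),
        }

        for name, (price, gb, iops) in drives.items():
            print(f"{name}: ${price / gb:.2f}/GB, ${price / iops:.4f}/IOPS")

        # The HDD wins $/GB by ~9x; the SSD wins $/IOPS by ~230x.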