Anyone here use or trialed PureStorage/XtremIO/SolidFire?

Essendon Member Posts: 4,546 ■■■■■■■■■■
I've had the good fortune to work on and migrate (this is a work in progress) a large number of users to XtremIO all-flash arrays. Loved the simple interface (though I loathed the initial Java issues with it), the ease of administration and the top support from VCE for the product. But man, the gear's expensive; cost the company a shipload of cash.

Checking out PureStorage for another part of the company: cheaper, with a simpler interface, and just as easy to manage. Top product; we had it delivered and set up in under a week. Great support too.

They're all good and do the same thing if you ask me; it's the price and interoperability that usually tip the scales. For example, VCE would never allow a Pure AFA to be attached to a Vblock. With VCE, it has to be an XtremIO Xbrick only.

Anyone else use(d) or trial(ed) these products or SolidFire or any other all flash arrays out there?
NSX, NSX, more NSX..

Blog >> http://virtual10.com

Comments

  • Deathmage Banned Posts: 2,496
    Looks interesting!
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    All SSD seems like a waste to me. If you need the IOPS, server side cache would be cheaper.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Not really. With the price of flash coming closer to that of spinning disk, things are more affordable than one might think. Server-side cache is okay as long as your workloads aren't large; we have over 6,000 desktops, and flash is beginning to ease things for us (agreed that some things should've been done better initially). I totally realise there are blades these days with more than 1 TB of RAM, but when you don't have that kind of gear, flash is a viable option.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
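A rough sizing sketch of why 6,000 desktops outgrow server-side caching. The per-desktop IOPS figures and multipliers below are generic rules of thumb, assumed for illustration only, not measurements from the environment discussed above:

```python
# Back-of-the-envelope VDI storage sizing. All constants are common
# rule-of-thumb assumptions, not figures from the thread's environment.
STEADY_STATE_IOPS_PER_DESKTOP = 10   # typical knowledge-worker estimate
BOOT_STORM_MULTIPLIER = 3            # boot/login storms spike well above steady state
WRITE_RATIO = 0.7                    # VDI workloads are heavily write-skewed

def required_iops(desktops: int) -> dict:
    """Estimate steady-state and peak IOPS for a desktop pool."""
    steady = desktops * STEADY_STATE_IOPS_PER_DESKTOP
    peak = steady * BOOT_STORM_MULTIPLIER
    return {
        "steady_state": steady,
        "peak": peak,
        "peak_writes": round(peak * WRITE_RATIO),
    }

demand = required_iops(6000)
print(demand)

# A 15k SAS spindle delivers roughly 180 IOPS; ignoring RAID write
# penalties, even steady state needs hundreds of spindles, which is
# where an all-flash (or flash-cached) tier starts to pay for itself.
spindles = demand["steady_state"] / 180
print(f"~{spindles:.0f} x 15k spindles just for steady state")
```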
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Oh, and you can't go past PernixData's FVP software if you are flash-averse; it's a great piece of code. For people low on cash, or with blades/rackmounts with no SSD, it's great.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • kj0 Member Posts: 767
    Essendon wrote: »
    Checking out PureStorage for another part of the company, cheaper, simpler interface and just as easy to manage. Top product, we had it delivered and setup in under a week. Great support too.
    Talk to Craig if you want to know more.

    We were looking at Solidfire for our 5000 desktop roll out. - Haven't had a chance to play with it yet.
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Yeah, Craig was here a couple of weeks ago. Phil Nass was here too; both very knowledgeable guys. Cannot fault the product, and the price point just makes them impossible to overlook.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • kj0 Member Posts: 767
    Essendon wrote: »
    Yeah, Craig was here a couple of weeks ago. Phil Nass was here too; both very knowledgeable guys. Cannot fault the product, and the price point just makes them impossible to overlook.
    Was Craig in at your workplace or in Melbourne? I've seen him flying all over the place recently.

    What is their support like? 24x7 with 4-hour response?
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Yeah, he came over to our office the other day. Pure are expanding rapidly, which is why you see him all over the APAC region.

    The support is something like that, yes; I guess there must be various levels.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • bertieb Member Posts: 1,031 ■■■■■■□□□□
    Just wanted to chime in that I've set up and used the EMC XtremIO devices, Ess. I found them very easy to use, though the Java problems and the fact that you need a separate management VM per appliance were a pain. A potential issue for early adopters (like the ones I deployed) running 2.4 or earlier and wanting to update is that the process isn't exactly simple; check out the various well-documented articles on the internet if you want to know more, mate.
    Another issue I experienced was a customer with a single Xbrick wanting to add more space (another Xbrick). Oops, that's a full-on XIO cluster rebuild required then, certainly on the earlier operating systems. I've not used them with the latest and greatest operating system though. But yes, very expensive!
    The trouble with quotes on the internet is that you can never tell if they are genuine - Abraham Lincoln
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Essendon wrote: »
    Checking out PureStorage for another part of the company, cheaper, simpler interface and just as easy to manage. Top product, we had it delivered and setup in under a week. Great support too.

    A week? Are we talking about the same PureStorage? Should be done by lunch on day #1 :p

    Pure is great - almost too simple lol ...
    My own knowledge base made public: http://open902.com :p
  • Deathmage Banned Posts: 2,496
    Essendon wrote: »
    Not really, with the price of flash coming closer to that of spinning disk, things are more affordable than one may think. Server side cache is okay as long as your workloads arent large, we have over 6000 desktops, flash is beginning to ease things for us (agreed that there are things that should've been done better initially). I totally realize there are blades these days with more than 1TB RAM, but when you dont have that kind of gear, flash is a viable option.

    I just recommended 4 Dell R720s with 2 TB of RAM each for a current 16-server physical cluster, with a 256 GB SSD in each server. They have databases on them, so keeping the page files on SSDs seems far more logical to me than keeping them on the proposed EqualLogic. :)

    And I got them to get vCOPS, so that will be useful for sure for IOPS control. The only thing I don't like is that you need the cluster to run for a few days for vCOPS to generate feedback.

    Hopefully iSCSI will suffice for the IOPS demands of these databases inside ESXi; if not, I suppose Fibre Channel is in order. Waiting for the Dell DPACK to give me IOPS results. :)
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Deathmage wrote: »
    I just recommended 4 Dell R720s with 2 TB of RAM each for a current 16-server physical cluster, with a 256 GB SSD in each server. They have databases on them, so keeping the page files on SSDs seems far more logical to me than keeping them on the proposed EqualLogic. :)

    Reserving SSD for the pagefile is a bad idea. It basically means the VM wasn't allocated enough RAM, so it has to page (i.e. the VM wasn't sized properly). SSD should generally be used as read/write cache.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
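To put numbers on dave330i's point, here's a quick latency comparison. The figures are typical published order-of-magnitude latencies (assumptions for illustration, not benchmarks of any hardware in this thread):

```python
# Why a "fast pagefile" is still a losing trade: order-of-magnitude
# access latencies. Values are commonly quoted ballpark figures.
LATENCY_NS = {
    "dram": 100,            # ~100 ns
    "sata_ssd": 100_000,    # ~100 us
    "15k_hdd": 5_000_000,   # ~5 ms
}

def slowdown_vs_ram(medium: str) -> float:
    """How many times slower a page hit on `medium` is than a RAM access."""
    return LATENCY_NS[medium] / LATENCY_NS["dram"]

# Paging to SSD is ~50x faster than paging to spinning disk...
print(LATENCY_NS["15k_hdd"] / LATENCY_NS["sata_ssd"])   # 50.0
# ...but still ~1000x slower than simply giving the VM enough RAM,
# which is why right-sizing beats a faster pagefile every time.
print(slowdown_vs_ram("sata_ssd"))                      # 1000.0
```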
  • Deathmage Banned Posts: 2,496
    dave330i wrote: »
    Reserving SSD for the pagefile is a bad idea. It basically means the VM wasn't allocated enough RAM, so it has to page (i.e. the VM wasn't sized properly). SSD should generally be used as read/write cache.

    So according to the articles I found on the VMware forum, putting the page file on local SSDs (for a database server), rather than having it reside in the VM's datastore on an external array (i.e. a SAN), would actually cause degraded performance?

    But if that's the case, read/write IOPS caching does make sense. I suppose if the storage fabric is at 10 Gbit, then the bottleneck, if there ever was one, would be the southbridge.
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Deathmage wrote: »
    So according to the articles I found on the VMware forum, putting the page file on local SSDs (for a database server), rather than having it reside in the VM's datastore on an external array (i.e. a SAN), would actually cause degraded performance?

    The local SSD solution will perform better if the VM accesses its pagefile, but do you really want your VM to be accessing its pagefile at all?
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Deathmage wrote: »
    I just recommended 4 Dell R720s with 2 TB of RAM each for a current 16-server physical cluster, with a 256 GB SSD in each server. They have databases on them, so keeping the page files on SSDs seems far more logical to me than keeping them on the proposed EqualLogic. :)

    And I got them to get vCOPS, so that will be useful for sure for IOPS control. The only thing I don't like is that you need the cluster to run for a few days for vCOPS to generate feedback.

    Hopefully iSCSI will suffice for the IOPS demands of these databases inside ESXi; if not, I suppose Fibre Channel is in order. Waiting for the Dell DPACK to give me IOPS results. :)

    Yeah, you want your VMs to perform optimally all the time; hand out those SSDs as cache, not as fast paging-file storage. Paging happens when something's out of resources, so why not give it more to begin with?

    As for iSCSI being worse off than FC, I don't think so. If done well, it's just as good.

    As for vROps (it's now called that, not vCOps!), it absolutely needs a while to run before it can provide feedback. It has to see past trending data to make any recommendations; how else could it provide feedback?
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • kj0 Member Posts: 767
    Deathmage wrote: »
    Hopefully iSCSI will suffice for the IOPS demands of these databases inside ESXi; if not, I suppose Fibre Channel is in order. Waiting for the Dell DPACK to give me IOPS results. :)
    Fibre Channel is old. Nothing wrong with iSCSI; I've never come across an issue with it, and it's much easier to manage, with a lot less mess.
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Essendon wrote: »
    As for iSCSI being worse off than FC, I don't think so. If done well, it's just as good.

    iSCSI can't overcome the limitations of the TCP/IP stack (loss and protocol overhead). FC doesn't have the same problems.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
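The TCP/IP overhead is easy to quantify at the frame level. A simplified sketch (it ignores the iSCSI PDU header, which is amortised across many frames, plus TCP options and the Ethernet preamble/inter-frame gap):

```python
# Rough per-frame efficiency of iSCSI data transfer over Ethernet,
# to put numbers on the "TCP/IP overhead" point. Header sizes are
# the standard fixed-header values.
ETH_HEADER = 14   # destination/source MAC + EtherType
ETH_FCS = 4       # frame check sequence
IP_HEADER = 20    # IPv4, no options
TCP_HEADER = 20   # TCP, no options

def efficiency(mtu: int) -> float:
    """Fraction of on-the-wire bytes that carry actual SCSI payload."""
    payload = mtu - IP_HEADER - TCP_HEADER
    wire_bytes = mtu + ETH_HEADER + ETH_FCS
    return payload / wire_bytes

print(f"MTU 1500: {efficiency(1500):.1%}")  # ~96%
print(f"MTU 9000: {efficiency(9000):.1%}")  # ~99% - why jumbo frames help
```

The gap narrows with jumbo frames, which is part of why a dedicated 10 Gb iSCSI network, tuned properly, can get close to FC in practice even though the protocol overhead never fully disappears.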
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Agreed there, Dave. Ways to mitigate that would be to use a 10 Gb switch with flow control, a true iSCSI HBA, and no other traffic on the network. Smaller companies are likelier to use iSCSI, their gear possibly being less expensive. Let's talk about AFAs here, shall we?!
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • Deathmage Banned Posts: 2,496
    Sorry, I'm horrible at staying on topic!

    But I do see your logic: if I design the cluster correctly, it shouldn't ever need to page.

    Sometimes I over-think things.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    bertieb wrote: »
    Just wanted to chime in that I've set up and used the EMC XtremIO devices, Ess. I found them very easy to use, though the Java problems and the fact that you need a separate management VM per appliance were a pain. A potential issue for early adopters (like the ones I deployed) running 2.4 or earlier and wanting to update is that the process isn't exactly simple; check out the various well-documented articles on the internet if you want to know more, mate.
    Another issue I experienced was a customer with a single Xbrick wanting to add more space (another Xbrick). Oops, that's a full-on XIO cluster rebuild required then, certainly on the earlier operating systems. I've not used them with the latest and greatest operating system though. But yes, very expensive!

    Thanks for dropping by, mate. Yeah, a small XMS server is needed for them; a pain, yes, but a minor one for the return you get. Adding certs to the XMS servers (Xbrick Management Servers) can only be done by Support. Not that I wanted to do it anyway, but they should let customers do it themselves. Tell me about the firmware upgrade process though: there was a huge ruckus when management learned the upgrade to 3.0 was destructive. Fortunately we hadn't moved any data yet. EMC promised to install a second array for the migration and do it all for us (this saved them!). We upgraded the firmware to 3.0 first, then began to move VMs over.

    Didn't know about the destructive process for adding another Xbrick! That's going to be a problem; eager to find out more!

    As for the price, yes, they are really expensive. Apparently Pure's array of the same specs is a fraction of the price!
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    jibbajabba wrote: »
    A week? Are we talking about the same PureStorage? Should be done by lunch on day #1 :p

    Pure is great - almost too simple lol ...

    LOL! Yeah, one week from them coming over to discuss, sorting typical things out with management, to delivery and install. They reckon they can install within a day or so for any orders once they are 'in'.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Deathmage wrote: »
    Sorry, I'm horrible at staying on topic!

    But I do see your logic: if I design the cluster correctly, it shouldn't ever need to page.

    Sometimes I over-think things.

    I see where you are coming from, but it's best to take a step back and think about these things. The thing is, with the way stuff's tightly integrated these days, one bad design decision suddenly snowballs into something larger and more ominous. At one of my jobs, someone bumped up the number of vCPUs on an SQL machine from 8 to 16 because some queries were taking so long. The moment he restarted the SQL VM, bam, everything slowed down. The auditors happened to be there too, doing their yearly thing, and I can tell you things didn't turn out too well. It's just an example, mate, of how things can go real bad double-quick!
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • Deathmage Banned Posts: 2,496
    Essendon wrote: »
    I see where you are coming from, but it's best to take a step back and think about these things. The thing is, with the way stuff's tightly integrated these days, one bad design decision suddenly snowballs into something larger and more ominous. At one of my jobs, someone bumped up the number of vCPUs on an SQL machine from 8 to 16 because some queries were taking so long. The moment he restarted the SQL VM, bam, everything slowed down. The auditors happened to be there too, doing their yearly thing, and I can tell you things didn't turn out too well. It's just an example, mate, of how things can go real bad double-quick!

    This is why I LOVE posing questions on here; I find out the correct answer. Thanks, Manny. :)
    Essendon wrote: »
    Thanks for dropping by, mate. Yeah, a small XMS server is needed for them; a pain, yes, but a minor one for the return you get. Adding certs to the XMS servers (Xbrick Management Servers) can only be done by Support. Not that I wanted to do it anyway, but they should let customers do it themselves. Tell me about the firmware upgrade process though: there was a huge ruckus when management learned the upgrade to 3.0 was destructive. Fortunately we hadn't moved any data yet. EMC promised to install a second array for the migration and do it all for us (this saved them!). We upgraded the firmware to 3.0 first, then began to move VMs over.

    Didn't know about the destructive process for adding another Xbrick! That's going to be a problem; eager to find out more!

    As for the price, yes, they are really expensive. Apparently Pure's array of the same specs is a fraction of the price!

    Dell did something similar for me at my last job. We had a very old EqualLogic SAN (6+ years old), and the firmware was so outdated that we couldn't update it through the browser, because that firmware version didn't support the available version of Java, so it turned into a real pickle.

    So we had Dell lend us a SAN with updated firmware so we could make a 1-to-1 copy. Once the data was copied over, we flashed the Dell-provided SAN and then copied the data over to the new EqualLogic 6500 (bear in mind it took 96 hours, since this 20 TB SAN was literally jammed to the max, with a very outdated controller; it was painful). Had we not had that extra layer of security, we would have been SCREWED if the SAN **** the brick. It was kind of scary, since my predecessor never flashed the SAN when he got it; he just deployed it. To make matters worse, he removed the reserved space from the SAN, so we couldn't manage it correctly. I learned pretty much all I know about EqualLogic that day from those two Dell storage engineers; the rest I've acquired on my own.
  • kj0 Member Posts: 767
    Essendon wrote: »
    At one of my jobs, someone bumped up the number of vCPUs on an SQL machine from 8 to 16 because some queries were taking so long. The moment he restarted the SQL VM, bam, everything slowed down.
    I hope he had his access revoked! Rookie move.
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • bertieb Member Posts: 1,031 ■■■■■■□□□□
    Essendon wrote: »
    Didn't know about the destructive process for adding another Xbrick! That's going to be a problem; eager to find out more!

    I think it's because there's no InfiniBand switch in a single-Xbrick deployment (the controllers are directly connected in this model), and to expand with other Xbricks you need one. I'm not sure of the exact process, but expect downtime at least; it was something pointed out to me by the EMC engineer whilst on-site.
    The trouble with quotes on the internet is that you can never tell if they are genuine - Abraham Lincoln
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    So what type of RAID do you set up with an AFA? I assume some flavour of 5 or 6?
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    You don't set up any RAID; with AFAs you just hand out LUNs and that's it. As for the type of RAID the device uses, each vendor handles data protection internally with its own scheme, so it's not something you configure yourself.

    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
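To give a flavour of the "just hand out LUNs" point, provisioning on a Pure FlashArray is only a couple of CLI calls (or the equivalent GUI clicks). The commands below follow Pure's FlashArray CLI as I recall it, but the volume/host names, size and WWNs are made-up placeholders; check the array's own CLI reference before running anything:

```shell
# Create a 2 TiB volume (name and size are example placeholders)
purevol create --size 2T sql-datastore-01

# Register an ESXi host by its FC WWNs (the WWN here is a placeholder)
purehost create --wwnlist 21:00:00:24:ff:xx:xx:xx esx-host-01

# Present the volume to the host. Note there are no RAID groups,
# pools or aggregates to configure; protection is internal to the array.
purevol connect --host esx-host-01 sql-datastore-01
```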
  • azjag Member Posts: 579 ■■■■■■■□□□
    jibbajabba wrote: »
    A week ? Are we talking about the same PureStorage ? Should be done by lunch day #1

    Pure is great - almost too simple lol ...

    I'm sure there is a reason it took a week.

    Consulting
    Currently Studying:
    VMware Certified Advanced Professional 5 – Data Center Administration (VCAP5-DCA) (Passed)
    VMware Certified Advanced Professional 5 – Data Center Design (VCAP5-DCD)
  • SimonD. Member Posts: 111
    As part of our vCD deployment last year, we were utilising NetApps and really struggling with performance (we inherited the NetApp before knowing the IOPS requirements).

    We got a Pure FA420 installed as a demo unit. It's that good that we didn't give it back; we purchased the demo unit and one other unit for our 2nd DC.
    iSCSI all the way for us; it's just a shame that NFS still isn't ready, as we still have some NFS requirements.

    As a side note, we used to have HDS for our very heavy Oracle environment (we do more TPS than all the European stock exchanges; not bad for an e-gaming company), but we moved to Pure for those as well. We did go down the FC route for those two arrays, though.

    I just love how easy the Pure is to use. There's no need for dedicated storage engineers any more (something we need for the 170+ NetApp filers we own), and it allows us to provision LUNs when we need them rather than waiting for a ticket to be actioned :)
    My Blog - http://www.everything-virtual.com
    vExpert 2012\2013\2014\2015
  • tstrip007 Member Posts: 308 ■■■■□□□□□□
    I want one of those Pures.