Use Case: vSAN vs Array-based SAN for Production

Deathmage Banned Posts: 2,496
Hey all,

So at home last night I was testing the piss out of the home lab.

Some facts about the home lab as of late; I've been busy. :)

1. I've set up the vSAN as the Tier 1 storage profile. The QNAP 6-bay NAS in RAID 10, with two 120 GB SSDs for cache, sits behind a dual 10G FCoE fabric as the Tier 2 storage profile; it also has a WD My Cloud Mirror 2 TB NAS attached for Veeam backups of the home lab. The pre-existing QNAP 4-bay NAS in RAID 5 is the Tier 3 storage profile.
2. vSAN and FCoE traffic share the same 10G converged connection into each of the (3) R610s, just on different VLANs.

3. Each of the R610s has 48 GB of RAM from 6 x 8 GB PC3-12800 DIMMs, as well as a riser quad 1G NIC and an Intel dual 10G FCoE converged NIC.

4. A Raspberry Pi 2 has been set up with syslog, NetFlow, SMTP, MySQL, and a DC for my home-lab domain, all on Ubuntu Server 14.04.

5. I've been making use of the Cisco 2960Gs for a redesigned 6-VLAN network: a) home network, b) wireless, c) ESXi servers and primary domain, d) Horizon VDI, e) domain 2, f) domain 3. Each R610 also has a quad-port EtherChannel with IP-hash load balancing for the non-storage networks (home network, servers, and vMotion).

6. Two Wyse C10LE thin clients with Samsung 19-inch widescreens; the punchdowns for their stations are redirected to VLANs on an as-needed basis for VDI testing.
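On point 5, it's worth seeing how IP-hash EtherChannel actually spreads traffic. As documented for ESXi's "Route Based on IP Hash" policy, the uplink is chosen by XORing the last octets of the source and destination IPs, mod the number of active uplinks. A minimal Python sketch (the formula matches VMware's documented behavior; the sample IPs are just made-up examples):

```python
def ip_hash_uplink(src_ip: str, dst_ip: str, uplinks: int) -> int:
    """Pick an uplink the way ESXi's 'Route Based on IP Hash' policy does:
    XOR the last octets of source and destination IP, then take the
    result mod the number of active uplinks in the EtherChannel."""
    src_last = int(src_ip.split(".")[-1])   # last octet of source IP
    dst_last = int(dst_ip.split(".")[-1])   # last octet of destination IP
    return (src_last ^ dst_last) % uplinks

# One VM talking to two different peers can land on different uplinks:
print(ip_hash_uplink("192.168.10.25", "192.168.10.40", 4))  # uplink 1
print(ip_hash_uplink("192.168.10.25", "192.168.10.41", 4))  # uplink 0
```

The takeaway: a single flow never exceeds one 1G link, but many flows to many peers spread across the quad-port channel.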


So with all this set up, I've been playing around with vSAN, and I have to say SDRS really does favor vSAN a lot. If I intentionally move a VM to Tier 2 storage, I have to really mess with the I/O and latency tolerances for it to stay on the QNAP. The throughput of the vSAN so far is insane, and with the flash cache I plan on adding in a few weeks once the 3 SSDs are ordered, I have a feeling Tier 2 and Tier 3 will never be used unless I put SDRS in manual mode.
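The "VM won't stay on the QNAP unless I loosen the tolerances" behavior comes down to SDRS comparing observed latency against its configured threshold. Here's a toy model of that decision, not VMware's actual algorithm (the function name and the minimum-improvement parameter are my own invention; 15 ms is SDRS's familiar default I/O latency threshold):

```python
def should_migrate(observed_latency_ms: float,
                   threshold_ms: float,
                   imbalance_pct: float,
                   min_imbalance_pct: float = 5.0) -> bool:
    """Toy SDRS-style check: recommend a move only when the datastore's
    observed I/O latency exceeds the configured threshold AND the
    expected improvement clears a minimum imbalance hurdle."""
    return observed_latency_ms > threshold_ms and imbalance_pct >= min_imbalance_pct

# Tier 2 at 20 ms vs. the default-ish 15 ms threshold: VM gets moved back
print(should_migrate(20, 15, 30))   # True
# Loosen the tolerance to 30 ms and the VM stays on the QNAP
print(should_migrate(20, 30, 30))   # False
```

That's exactly the lever being pulled when raising the latency tolerance: the threshold moves above the Tier 2 array's observed latency, so no recommendation fires.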

I actually like it, because I can put my database server on the vSAN, my DC on Tier 2, and keep all my files on Tier 3. But I had no idea vSAN would have this much throughput, or that SDRS would actually prefer it over an equally fast Tier 2 array that could in reality be a Tier 1 array.

I know some of you are thinking: well, you spent all this money only for a virtual SAN to destroy your arrays. Perhaps. But I have plans for a 30-VM home lab with 3 Windows domains. Last year I paid for 30 Windows Server 2012 Standard licenses (which can be downgraded to 2008 R2), so that's my VM cap: 30! I wonder what I could do with that... I wonder...

With that being said, has anyone started using vSAN in production environments instead of a traditional SAN?

If so, could vSAN long-term make companies question whether a SAN array is even needed? I mean, can going over glass really compete with a local server's backplane? In my tests at home, even the internal backplanes of aging Dell R610s seem to be way faster than a 10G fiber connection, so it begs the question: could vSAN be a game-changer?
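The back-of-the-envelope math supports that gut feeling. A 10G link tops out at 1.25 GB/s before any protocol overhead, while a local backplane aggregates several SAS lanes in parallel. A rough comparison of bus ceilings (the link and lane speeds are standard figures; the six-bay count matches an R610 chassis, but real per-disk throughput will be well below the lane ceiling):

```python
# Rough theoretical ceilings, ignoring protocol overhead and real
# per-disk limits (illustrative arithmetic, not measured numbers).
GBIT_TO_GBYTE = 1 / 8

link_10g = 10 * GBIT_TO_GBYTE        # single 10G FCoE link: 1.25 GB/s
sas_lane_6g = 6 * GBIT_TO_GBYTE      # one 6 Gb/s SAS lane: 0.75 GB/s
backplane_6_bays = 6 * sas_lane_6g   # six bays in parallel: 4.50 GB/s

print(f"10G link ceiling:        {link_10g:.2f} GB/s")
print(f"6-bay backplane ceiling: {backplane_6_bays:.2f} GB/s")
```

Even granting generous overhead on both sides, the local backplane has several times the headroom of a single 10G link, which is exactly why local-disk architectures like vSAN look so fast in these tests.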


  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    The biggest limitation with vSAN at the moment is recovery without vCenter. It can be done, but it's a PITA.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • OctalDump Member Posts: 1,722
    That's interesting stuff. I've heard from a few engineers that vSAN isn't really production ready.

    I think a high-end SAN might still have a few tricks up its sleeve. I'm thinking of things like site-to-site asynchronous replication for DR/BC, possibly easier backup, and transparent multi-level tiering. It also makes me nervous that vCenter is increasingly becoming a single point of failure.
    2017 Goals - Something Cisco, Something Linux, Agile PM