vSAN: Network Requirements

Deathmage Banned Posts: 2,496
Hey guys,

So I know from reading, since I've never done a vSAN before, that VMware calls for a 10G connection per host connecting to an OOB storage fabric. What I'm curious about is whether, in a home lab, a 10G connection is truly needed, or whether a quad 1G bonded link per host would suffice versus a 10G link to an FC or FCoE infrastructure.

The only thing I guess I'm concerned about is cache size on the OOB switches, since that's just a **** ton of traffic....

I could do it with a 10G iSCSI-capable OOB fabric, but if I'm going with 10G in the home lab, at that point I might as well just make it an FCoE or FC fabric. But I'm curious if (4) 1 Gbit connections in an EtherChannel would suffice, because then I could just find a 2960G and bump up its memory for a larger cache.

This is purely for a home lab; in production I would go for a dual 10G FC or FCoE configuration. ;)

Thoughts?

Comments

  • joelsfood Member Posts: 1,027 ■■■■■■□□□□
    10G isn't required, it's just recommended. 1G is fine (in theory), though it will depend on the rate of change of your data.

    For the record, FC doesn't come in 10G (it's 2/4/8/16/32 Gb), and you don't really update the cache on a switch; in general, data won't leave the ASIC/SoC, so you just have the cache built into that (though different QoS policies can change the assignment of the "buckets").
  • DPG Member Posts: 780 ■■■■■□□□□□
    vSAN utilizes local storage and Ethernet connections between the hosts. You can't use FC or FCoE.
  • Deathmage Banned Posts: 2,496
    joelsfood wrote: »
    10G isn't required, it's just recommended. 1G is fine (in theory), though it will depend on the rate of change of your data.

    For the record, FC doesn't come in 10G (it's 2/4/8/16/32 Gb), and you don't really update the cache on a switch; in general, data won't leave the ASIC/SoC, so you just have the cache built into that (though different QoS policies can change the assignment of the "buckets").

    Thanks for the information. I was aiming for FCoE at 10G, but it's good to know the correct speeds for FC; I get the terminology confused sometimes. :D
    DPG wrote: »
    vSAN utilizes local storage and Ethernet connections between the hosts. You can't use FC or FCoE.

    vSAN requires a minimum of 3 hosts to make up a cluster; wouldn't they need a storage backplane over an OOB fabric to communicate with each other? ;)
  • DPG Member Posts: 780 ■■■■■□□□□□
    Deathmage wrote: »
    vSAN requires a minimum of 3 hosts to make up a cluster; wouldn't they need a storage backplane over an OOB fabric to communicate with each other? ;)

    The vSAN network runs over IP.
  • joelsfood Member Posts: 1,027 ■■■■■■□□□□
    VSAN uses the IP network, as DPG mentioned, and essentially runs replication over Ethernet. Technically it doesn't even need 3 hosts; it only needs 2 and a witness appliance.

    If you want to just check it out and get a feel for it, you don't even have to dedicate physical hosts/disks. William Lam has an appliance for download that has a prebuilt 3-node VSAN setup. I expect he'll be updating it for VSAN 6.2 when the new version is officially out.
  • DPG Member Posts: 780 ■■■■■□□□□□
    Also, don't bother with link aggregation for the vSAN network. You won't see much of a difference in performance.
  • Deathmage Banned Posts: 2,496
    I guess the next thought is: do you see a benefit in using hardware RAID, like (4) 600 GB 10k Raptors in a RAID 10 on each host, or should I just use vSAN's built-in distributed RAID? I'm curious if you could use them both.
  • DPG Member Posts: 780 ■■■■■□□□□□
    No benefit since vSAN is optimized for single spindles.
  • joelsfood Member Posts: 1,027 ■■■■■■□□□□
    Utilizing passthrough controllers (giving VSAN individual access to each disk) is generally just as fast as using the controller in RAID 0 mode, plus it lets VSAN handle hot plug of failed drives, etc. As DPG mentioned, VSAN is designed around direct access to the disks.
  • Deathmage Banned Posts: 2,496
    Well I'm seeing that it doesn't even support bonding, so that's kind of poo-poo.

    But it does support jumbo frames, though with TSO and LRO the benefits would be marginal; I'll probably still enable it anyway.

    Kind of curious about making a TCP/IP stack just for vSAN in ESXi 6.0 (since now you can) and applying it to the vDS vSAN VMkernel port for that vSAN port group. I'm curious about these custom network stacks and want to see if they help with throughput in VMware (rough sketch at the end of this post). I have a Bigfoot 2100 gaming NIC in my PC at home that does the exact same thing by bypassing the Windows stack, and the throughput is amazing...
    joelsfood wrote: »
    Utilizing passthrough controllers (giving VSAN individual access to each disk) is generally just as fast as using the controller in RAID 0 mode, plus it lets VSAN handle hot plug of failed drives, etc. As DPG mentioned, VSAN is designed around direct access to the disks.
    DPG wrote: »
    No benefit since vSAN is optimized for single spindles.

    Ahhh, so essentially, if I did hardware RAID it could actually mess with the vSAN configuration.
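
    Re the custom TCP/IP stack idea above, here's a rough pyVmomi sketch of how you could inspect the stacks and VMkernel adapters on a host and bump the assumed vSAN vmk to jumbo frames. The vCenter address, credentials, host name, and which vmk carries vSAN traffic ("vmk2") are all made-up placeholders, and whether a separate netstack actually helps (or is even supported) for vSAN traffic is something to verify before relying on it.

    ```python
    # Sketch: list TCP/IP stacks and VMkernel NICs on a host, then enable jumbo
    # frames on the vmk assumed to carry vSAN traffic. Names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.lab.local")  # assumed

    # Show which TCP/IP stacks exist on the host (default, vMotion, custom, ...)
    for stack in host.config.network.netStackInstance:
        print("netstack:", stack.key)

    # Show each VMkernel adapter, the stack it is bound to, and its current MTU
    for vnic in host.config.network.vnic:
        print(vnic.device, vnic.spec.netStackInstanceKey, vnic.spec.mtu)

    # Bump the assumed vSAN vmk to MTU 9000 (jumbo); the vSwitch and physical
    # switch ports must be configured for jumbo frames as well.
    netsys = host.configManager.networkSystem
    vsan_vmk = next(v for v in host.config.network.vnic if v.device == "vmk2")
    spec = vsan_vmk.spec
    spec.mtu = 9000
    netsys.UpdateVirtualNic("vmk2", spec)

    Disconnect(si)
    ```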
  • TheProf Users Awaiting Email Confirmation Posts: 331 ■■■■□□□□□□
    You're not supposed to do any RAID configuration for VSAN. All of the striping and replication occurs through storage-based policies that you create initially within vCenter.
  • Deathmage Banned Posts: 2,496
    This is sweet, I don't need to sacrifice space for redundancy. :)
  • Konflikt Member Posts: 43 ■■■□□□□□□□
    With an all-flash configuration, 10GbE is required (1 gig is not supported).
    With a hybrid configuration, 1 gig Ethernet can be used, but 10 gig is recommended.
  • Deathmage Banned Posts: 2,496
    Konflikt wrote: »
    With an all-flash configuration, 10GbE is required (1 gig is not supported).
    With a hybrid configuration, 1 gig Ethernet can be used, but 10 gig is recommended.


    Well, I have just one 120 GB SSD per host, so I guess 1G will suffice with jumbo frames, TSO, and LRO.
  • Deathmage Banned Posts: 2,496
    vSAN disk groups: is it at least one SSD per host or one SSD per hard drive?

    Like, if an R610 has 6 open slots, could I do 1 + 5 or 3 + 3?
  • joelsfood Member Posts: 1,027 ■■■■■■□□□□
    At least one SSD per host. SSD capacity per host should be at least 10% of the spinning disk capacity, e.g. 2x 600 GB SAS spinning disks to go with your 120 GB SSD.
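
    As a quick back-of-the-napkin check of that 10% rule against the drive sizes mentioned in this thread (purely illustrative numbers):

    ```python
    # Rough check of the "cache SSD >= ~10% of capacity tier" rule of thumb.
    def min_cache_gb(capacity_disks_gb):
        return 0.10 * sum(capacity_disks_gb)

    # 3x 300 GB Raptors behind a 120 GB SSD (the setup mentioned earlier)
    print(min_cache_gb([300, 300, 300]))          # 90.0 GB of cache needed
    print(120 >= min_cache_gb([300, 300, 300]))   # True -> 120 GB SSD is enough

    # 2x 600 GB SAS disks with a 120 GB SSD (the example above)
    print(120 >= min_cache_gb([600, 600]))        # True (exactly 10%)
    ```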
  • Deathmage Banned Posts: 2,496
    I'd be interested to know if this Black2 drive is supported...

    http://www.amazon.com/dp/B00GSJ9X4Q
  • TheProf Users Awaiting Email Confirmation Posts: 331 ■■■■□□□□□□
    If you're using SSD for caching, it's one SSD per disk group, and you can have multiple disk groups in a host, which means technically speaking you can have multiple SSDs in one host. The caching tier should be at least 10% of the total storage of your disk group.
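
    Here's a tiny sketch of how those rules stack up per host; the device sizes and group layout below are made up just to illustrate:

    ```python
    # Sketch: model a host's vSAN disk groups and sanity-check the rules from
    # this thread: one cache SSD per disk group, cache >= ~10% of that group's
    # capacity tier, and (per the hybrid limits mentioned below) 1-7 capacity disks.
    disk_groups = [
        {"cache_ssd_gb": 120, "capacity_gb": [300, 300, 300]},  # hypothetical DG 1
        {"cache_ssd_gb": 200, "capacity_gb": [600, 600, 600]},  # hypothetical DG 2
    ]

    for i, dg in enumerate(disk_groups, 1):
        cap = sum(dg["capacity_gb"])
        ratio_ok = dg["cache_ssd_gb"] >= 0.10 * cap
        count_ok = 1 <= len(dg["capacity_gb"]) <= 7
        print(f"disk group {i}: {cap} GB capacity tier, "
              f"cache ratio ok={ratio_ok}, disk count ok={count_ok}")
    ```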
  • am3rig0 Member Posts: 11 ■□□□□□□□□□
    I run the ROBO version of vSAN in my home lab and it definitely doesn't need more than 1 Gbps. ROBO requires only 2 hosts, but you need a third witness server. Cormac Hogan is your man when it comes to vSAN, so check his website.

    For a non-ROBO setup, if you're planning to use this for production (even home-lab production), you want 4 hosts (even though it will work with only 3). See here for more.

    In a hybrid scenario, a single disk group can have only 1 SSD and up to 7 magnetic disks. The rule of thumb, as mentioned before, is that you should have around 10% of SSD cache backing your spinning-disk storage. So keep it under 1.5 TB in spinning disks and you should be fine; it all depends on how your cache is utilised, really. Use SexiGraf for vSAN monitoring, it's awesome (once the new version of vSAN hits GA you shouldn't need it anymore, as monitoring is improved).

    If your server's controller doesn't do pass-through, you will need to present each of your disks manually as a single-disk RAID 0 array.

    You can check hardware compatibility here - http://partnerweb.vmware.com/service/vsan/all.json. This is the definitive source for what is or isn't compatible with your hardware platform on the version of vSphere you're using. If something isn't supported it might still work, but you can hit all sorts of issues with performance, etc.; have a read here to get an idea. The queue depth on your RAID controller needs to be adequately large; 600 is a good start, and the more the better.

    With all the recent developments, vSAN is growing to be a great product, so definitely worth checking out!
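
    If you want to script that compatibility check, something like the sketch below pulls the JSON and searches it for a device name. I'm not assuming anything about the file's schema, so it just does a dumb case-insensitive text search; the controller name is a placeholder.

    ```python
    # Sketch: download the vSAN HCL JSON linked above and look for a device name.
    # The schema isn't documented here, so this only does a raw text search.
    import json
    import urllib.request

    HCL_URL = "http://partnerweb.vmware.com/service/vsan/all.json"
    SEARCH = "PERC H710"  # placeholder -- put your controller/SSD model here

    with urllib.request.urlopen(HCL_URL) as resp:
        raw = resp.read().decode("utf-8", errors="replace")

    data = json.loads(raw)  # confirms it parses as JSON at all
    if isinstance(data, dict):
        print("top-level keys:", sorted(data.keys()))

    print(f"{SEARCH!r} mentioned in HCL JSON:", SEARCH.lower() in raw.lower())
    ```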
  • Deathmage Banned Posts: 2,496
    am3rig0 wrote: »
    I run the ROBO version of vSAN in my home lab and it definitely doesn't need more than 1 Gbps. ROBO requires only 2 hosts, but you need a third witness server. Cormac Hogan is your man when it comes to vSAN, so check his website.

    For a non-ROBO setup, if you're planning to use this for production (even home-lab production), you want 4 hosts (even though it will work with only 3). See here for more.

    In a hybrid scenario, a single disk group can have only 1 SSD and up to 7 magnetic disks. The rule of thumb, as mentioned before, is that you should have around 10% of SSD cache backing your spinning-disk storage. So keep it under 1.5 TB in spinning disks and you should be fine; it all depends on how your cache is utilised, really. Use SexiGraf for vSAN monitoring, it's awesome (once the new version of vSAN hits GA you shouldn't need it anymore, as monitoring is improved).

    If your server's controller doesn't do pass-through, you will need to present each of your disks manually as a single-disk RAID 0 array.

    You can check hardware compatibility here - http://partnerweb.vmware.com/service/vsan/all.json. This is the definitive source for what is or isn't compatible with your hardware platform on the version of vSphere you're using. If something isn't supported it might still work, but you can hit all sorts of issues with performance, etc.; have a read here to get an idea. The queue depth on your RAID controller needs to be adequately large; 600 is a good start, and the more the better.

    With all the recent developments, vSAN is growing to be a great product, so definitely worth checking out!


    Sweet, thanks for the feedback. :)
  • Deathmage Banned Posts: 2,496
    Pondering: I've got room for one extra drive in my R610s. Another 300 GB WD Enterprise 10k Raptor, or a 64/120 GB SSD for Flash Cache?

    Already have a Samsung Evo 120 GB SSD for the (3) 300 GB Raptors.

    I'd like to use Flash Cache on top of vSAN if it's supported, but it might be counterintuitive.
  • am3rig0 Member Posts: 11 ■□□□□□□□□□
    I don't think you will be able to enable vflash on a VM stored on vsan.

    But if you have a lot of RAM, you could give PernixData FVP Freedom a go; sub-0.1 ms latency is cool, but most likely overkill. Not sure if it's supported with vSAN, so you'd need to check their documentation or reach out to them.
  • jdancer Member Posts: 482 ■■■■□□□□□□
    TheProf wrote: »
    You're not supposed to do any RAID configuration for VSAN. All of the striping and replication occurs through storage-based policies that you create initially within vCenter.

    So, you set up your physical disks with NO configuration at all, correct? Not even JBOD?
  • jdancer Member Posts: 482 ■■■■□□□□□□
    Deathmage wrote: »
    Pondering: I've got room for one extra drive in my R610s. Another 300 GB WD Enterprise 10k Raptor, or a 64/120 GB SSD for Flash Cache?

    Already have a Samsung Evo 120 GB SSD for the (3) 300 GB Raptors.

    I'd like to use Flash Cache on top of vSAN if it's supported, but it might be counterintuitive.

    I think it's also best practice to have a single disk for ESXi scratch partition as well.
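
    If you want to check where scratch currently points from a script rather than the UI, here's a rough pyVmomi sketch; the vCenter address, credentials, and host name are placeholders.

    ```python
    # Sketch: read the ScratchConfig.ConfiguredScratchLocation advanced setting
    # on a host. Connection details and host name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.lab.local")  # assumed

    opts = host.configManager.advancedOption
    for opt in opts.QueryOptions("ScratchConfig.ConfiguredScratchLocation"):
        print(opt.key, "=", opt.value)

    Disconnect(si)
    ```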
  • Deathmage Banned Posts: 2,496
    jdancer wrote: »
    I think it's also best practice to have a single disk for ESXi scratch partition as well.

    Indeed I have a solo 146 GB 10k drive for ESXi.
  • Deathmage Banned Posts: 2,496
    Whelp, I ordered (3) 64 GB SanDisk SSDs today; I'll use them as Flash Cache, and they arrive Wednesday. Thursday the 500 ft spool of Cat6 arrives for the storage 1U 16-port punchdown and the networking 1U 16-port punchdown. Going to make the lab wiring all neat and tidy versus the regular patch cords I've got going on right now.