Network Design Question

kj0 Member Posts: 767
Hi All.

Just setting up our new Virtual Infrastructure today (testing everything first before putting it into production), and I was after an opinion on how best to set up the network redundancy. I saw a design for this somewhere before and now can't find it.

The infrastructure components are the same in both kits, except of course for the models and internals. So both have 3 hosts, 2 SANs and 2 switches for iSCSI.

The current kit has 4 onboard gigabit pNIC ports and 4 gigabit ports on a PCI card, per host:

The 4 onboard ports all go to the core switches for the site network, and the 4 on the PCI card are for iSCSI.


The new kit has 2 x 10Gb SFP and 2 x 10Gb copper ports onboard, and 2 x 10Gb SFP on a PCI card (x1).

There are two design options we were talking about today; the one I'm wanting to go with is Option B.

A) Both onboard SFPs go to the site network, and both SFPs on the PCI card go to iSCSI.

B) Port 1 on both the onboard and the PCI card goes to iSCSI, and port 2 on both goes to the site network.



I see Option B keeping both networks available if either the PCI card or the onboard NICs die. With Option A, for example, the PCI card dying would kill the iSCSI connection for that host.


The opposition to Option B was something to do with not being able to team ports from two different cards together when those cards are also providing a NIC to another network.
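A quick way to see the difference is to model which physical card backs each uplink and check whether any single card failure takes a whole network down. The sketch below is plain Python with made-up card names (nothing here comes from the vSphere API); it just expresses the single-point-of-failure argument:

```python
# Minimal sketch: for each cabling option, map every network's uplinks to the
# physical card that carries them, then flag networks whose uplinks all sit
# on the same card (i.e. one card failure kills that network).

OPTION_A = {
    "site":  ["onboard", "onboard"],   # both onboard SFP ports
    "iscsi": ["pci", "pci"],           # both PCI SFP ports
}

OPTION_B = {
    "site":  ["onboard", "pci"],       # port 2 of each card
    "iscsi": ["onboard", "pci"],       # port 1 of each card
}

def single_points_of_failure(layout):
    """Return the networks that lose every uplink if one physical card dies."""
    return [network for network, cards in layout.items()
            if len(set(cards)) == 1]

for name, layout in (("Option A", OPTION_A), ("Option B", OPTION_B)):
    spofs = single_points_of_failure(layout)
    print(f"{name}: single points of failure -> {spofs or 'none'}")

# Option A: single points of failure -> ['site', 'iscsi']
# Option B: single points of failure -> none
```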



Would be glad to hear your opinions.

Thanks
2017 Goals: VCP6-DCV | VCIX
Blog: https://readysetvirtual.wordpress.com

Comments

  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    I'd strongly go with Option B too; you need to eliminate any single points of failure (as you've already pointed out). Whoever opposed Option B, were they able to back it up with any evidence? You can and should go with Option B: NICs can be shared, and we have this setup in production - works a treat. Just remember to have matching uplinks on all hosts; it maintains your sanity when troubleshooting an issue at 2am!
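    A rough sketch of the "matching uplinks" point, if you want to sanity-check it: dump each host's vmnic-to-vSwitch mapping (the dictionaries below are hand-typed examples, not pulled from any API) and flag hosts that differ from the first one.

```python
# Hypothetical uplink maps for three hosts; host names, vmnic numbers and
# vSwitch labels are all made up for illustration.
hosts = {
    "esx01": {"vmnic0": "iSCSI", "vmnic1": "Site", "vmnic4": "iSCSI", "vmnic5": "Site"},
    "esx02": {"vmnic0": "iSCSI", "vmnic1": "Site", "vmnic4": "iSCSI", "vmnic5": "Site"},
    "esx03": {"vmnic0": "Site",  "vmnic1": "iSCSI", "vmnic4": "iSCSI", "vmnic5": "Site"},
}

# Treat the first host as the reference layout and report any deviation.
reference_name, reference = next(iter(hosts.items()))
for host, uplinks in hosts.items():
    mismatches = {nic: sw for nic, sw in uplinks.items() if reference.get(nic) != sw}
    if mismatches:
        print(f"{host} does not match {reference_name}: {mismatches}")

# -> esx03 does not match esx01: {'vmnic0': 'Site', 'vmnic1': 'iSCSI'}
```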
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • kj0 Member Posts: 767
    3 hosts shouldn't be a problem for keeping the uplinks matching, and luckily, being a school, we run 7 - 4, so any issues can be fixed the next day.

    I"m fairly certain it was a Trainsignal vSphere networking series I saw it in. Oh well.


    Thanks bud, will probably keep this layout.
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • SimonD. Member Posts: 111
    I would have recommended Option B as well. We have 2-port 10Gb cards and split across them, even though the two ports are supposed to be two different NICs per physical card.
    My Blog - http://www.everything-virtual.com
    vExpert 2012\2013\2014\2015
  • JBrown Member Posts: 308
    I have 16 x 1Gbps NICs (yes, sixteen) on each host in our VMware infrastructure: 4 onboard and 3 x 4-port 1Gbps Broadcom cards. The first riser failed on one of the hosts (they're Cisco UCS 240 servers) and took down 2 of the 4-port cards with it. The only reason we kept going was the redundancy I have in place: the second riser with one 4-port 1Gbps card and the 4 onboard NICs handled all the traffic with no hiccup whatsoever.

    If you want good redundancy, Option B is the only way to go. It's not just a recommendation.
  • kj0 Member Posts: 767
    Yeah, I'm all for Option B, hence why I'll set it up that way when I plug it all in, but the question was raised because of a thought that ports from different cards couldn't be teamed like that (maybe an ESX 4.0 thing - we are still running that and about to cut it over to 5.5).

    Thanks guys, been very helpful as always.
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • tomtom1 Member Posts: 375
    One tip, though, perhaps not directly relevant: enable NIOC on those 10 gigabit NICs if you have Enterprise Plus licensing and a distributed vSwitch in place. By default, a vMotion can take up to 8 gigabit of traffic on a 10 gigabit network, so you wouldn't want vMotion operations draining resources from everything else.
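    To make the NIOC point concrete, here's a back-of-the-envelope sketch of how shares divide a saturated 10Gb uplink. The share values are made-up examples, not defaults from any particular vSphere release, and this is plain Python rather than anything driving the vSphere API:

```python
# Toy model of NIOC shares: under contention, each traffic type gets a slice
# of the uplink proportional to its share value (numbers below are examples).
UPLINK_GBPS = 10

shares = {
    "virtual machine": 100,
    "vmotion": 50,
    "management": 20,
}

total = sum(shares.values())
for traffic, share in shares.items():
    floor_gbps = UPLINK_GBPS * share / total
    print(f"{traffic:>15}: ~{floor_gbps:.1f} Gbps guaranteed when the link is saturated")

# Without shares or a limit, a vMotion stream is free to grab most of the
# 10Gb link by itself; shares like the above keep VM traffic from being starved.
```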