
iSCSI and switch high availability

PiotrIr Member Posts: 236
Hi

Could you advise me on whether this configuration will work?
I want to use 4x DL380 G5 servers (as Hyper-V hosts in a failover cluster), each with 2 NICs working as a team for the iSCSI network. The servers are connected to two Dell PowerConnect 2724 switches, with each NIC of the team plugged into a different switch. I'm going to use a Dell MD3000i with two controllers (4 ports) connected to the same switches. All of this is to eliminate single points of failure (NICs and switches).
I wonder if these switches will work correctly. I found examples of a similar configuration on the Dell website, but with the PowerConnect 5000, which is 10 times more expensive. I don't want to spend money on something that isn't needed.
Could you tell me if it will work? Maybe some additional advice about this configuration? Is there a better solution?
And just in case (I know we can't use teaming for Hyper-V), has anyone found a solution for high availability on the public Hyper-V network, similar to the one for the iSCSI network?
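
To make the wiring concrete, here is a rough sketch of the paths I have in mind and a quick check that losing either switch still leaves every host a route to the storage. It's only an illustration in Python; all the NIC, switch and controller-port names are placeholders, not our real configuration.

from itertools import product

# Proposed cabling: each host NIC goes to a different switch, and each
# MD3000i controller has one port on each switch.
host_nics = {"nic1": "switch1", "nic2": "switch2"}
san_ports = {
    "ctrl0-port0": "switch1", "ctrl0-port1": "switch2",
    "ctrl1-port0": "switch1", "ctrl1-port1": "switch2",
}

def usable_paths(failed_switch=None):
    """Return (host NIC, SAN port) pairs that still share a working switch."""
    return [
        (nic, port)
        for (nic, nic_sw), (port, port_sw) in product(host_nics.items(), san_ports.items())
        if nic_sw == port_sw and nic_sw != failed_switch
    ]

for failure in (None, "switch1", "switch2"):
    print(f"failed: {failure or 'nothing':8} -> {len(usable_paths(failure))} path(s) left")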

Best Regards

Comments

  • Mishra Member Posts: 2,468 ■■■■□□□□□□
    iSCSI doesn't care what switch it is on.

    What do you think isn't going to work? Why are you using Dell switches? What is the standard switch brand in your company?
    My blog http://www.calegp.com

    You may learn something!
  • PiotrIr Member Posts: 236
    The problem isn't the switch as far as iSCSI is concerned, but whether the switches can provide failover. As far as I know, when you use teaming you should connect both NICs to the same switch. However, some switches provide functions that allow a configuration like that (a team split across two switches), and because I have never tested it I wonder if it will work.
    Why Dell? Very good price and hopefully good quality (I would like to know if that's true :) )
    Kind Regards
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    How would you have fault tolerance if you connected both NICs to the same switch?
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    Use the teaming software in the NIC driver to do adapter failover (or whatever the NIC vendor calls it); that way switch support is irrelevant. You will not be able to do any load balancing between the NICs, as that usually requires expensive switches that are chained together and act as one switch (shared tables, etc.).

    As for dynamik's question, an adapter or switch port is more likely to fail than an entire switch, so it's better than nothing.
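
    In case it helps to picture it, this is roughly the behaviour adapter failover mode gives you. It's just a conceptual sketch in Python, not the vendor teaming software, and all of the names in it are made up: the team presents a single address, only one NIC carries traffic at a time, and the standby takes over when the active link drops.

    class FailoverTeam:
        def __init__(self, nics):
            self.nics = nics                      # e.g. ["nic1->switch1", "nic2->switch2"]
            self.link_up = {n: True for n in nics}
            self.active = nics[0]                 # no load balancing: one active NIC at a time

        def link_changed(self, nic, up):
            self.link_up[nic] = up
            if not self.link_up[self.active]:
                # fail over to any surviving member of the team
                survivors = [n for n in self.nics if self.link_up[n]]
                self.active = survivors[0] if survivors else None

        def send(self, frame):
            if self.active is None:
                raise RuntimeError("no working NIC left in the team")
            return f"{frame} sent via {self.active}"

    team = FailoverTeam(["nic1->switch1", "nic2->switch2"])
    print(team.send("frame A"))                   # goes out nic1
    team.link_changed("nic1->switch1", False)     # NIC, port or whole switch fails
    print(team.send("frame B"))                   # transparently moves to nic2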
  • PiotrIr Member Posts: 236
    Unfortunately, NIC teaming is not certified for Windows Server 2008 failover clustering and Hyper-V. MPIO will provide iSCSI network redundancy. The heartbeat network is also not supported with NIC teaming, but for that I can use two NICs in different subnets instead. But what about Hyper-V? How can I provide failover for guests? Any ideas?
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    PiotrIr wrote:
    Unfortunately, NIC teaming is not certified for Windows Server 2008 failover clustering and Hyper-V. MPIO will provide iSCSI network redundancy. The heartbeat network is also not supported with NIC teaming, but for that I can use two NICs in different subnets instead. But what about Hyper-V? How can I provide failover for guests? Any ideas?
    I have implemented probably close to 50 MSCS clusters over the years and every single one of them used teaming. No client would ever tolerate a single point of failure like that. While Microsoft's "official" support policy is that teams are not supported, the reality is that all they ask you to do is break the team if they think it might be related, and then continue working on the problem with them.

    I've used both Broadcom's and Intel's adapter teaming software in the past without incident. Heck, I've even been forced to combine different vendors' NICs into teams (Broadcom internal NICs and Intel quad ports), and that works fine too.
  • HeroPsycho Inactive Imported Users Posts: 1,940
    astorrs wrote:
    Use the teaming software in the NIC driver to do adapter failover (or whatever the NIC vendor calls it); that way switch support is irrelevant. You will not be able to do any load balancing between the NICs, as that usually requires expensive switches that are chained together and act as one switch (shared tables, etc.).

    As for dynamik's question, an adapter or switch port is more likely to fail than an entire switch, so it's better than nothing.

    Forgive me, as I haven't done this with Hyper-V, but if it works the same way as VMware, you shouldn't team.

    MPIO doesn't work via unicast or multicast like a traditional NLB solution or NIC teaming does. For iSCSI, you shouldn't use NIC teaming; instead, use each NIC individually with a multipathing client to handle load balancing and failover. In such a scenario, each NIC gets its own IP; therefore, you don't do NIC teaming, and you can plug your NICs into alternating switches, as you would with the alternate SAN controller NIC pairs.

    iSCSI traffic shouldn't be sharing NICs with regular traffic anyway. Make sure you have two physical NICs dedicated to iSCSI traffic; VM traffic should go on other NICs.
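
    To illustrate the difference from teaming, here's a rough conceptual sketch in Python (not any real initiator or DSM, and the addresses are made up): each iSCSI NIC keeps its own IP and its own session to the array, and the multipath layer simply spreads I/O across whichever sessions are still alive.

    from itertools import cycle

    class MultipathDisk:
        def __init__(self, sessions):
            # one session per (local NIC IP, array controller) pair
            self.sessions = dict(sessions)        # path name -> is it up?

        def working_paths(self):
            return [p for p, up in self.sessions.items() if up]

        def submit(self, ios):
            """Round-robin I/O over the surviving paths (load balancing + failover)."""
            paths = self.working_paths()
            if not paths:
                raise RuntimeError("all paths down")
            for io, path in zip(ios, cycle(paths)):
                print(f"{io} -> {path}")

    disk = MultipathDisk({"10.0.1.11->ctrl0": True, "10.0.2.11->ctrl1": True})
    disk.submit(["read 1", "write 2", "read 3", "write 4"])

    disk.sessions["10.0.1.11->ctrl0"] = False     # a NIC, switch or controller port fails
    disk.submit(["read 5", "write 6"])            # I/O carries on over the other path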
    Good luck to all!
  • HeroPsycho Inactive Imported Users Posts: 1,940
    astorrs wrote:
    I have implemented probably close to 50 MSCS clusters over the years and every single one of them used teaming. No client would ever tolerate a single point of failure like that. While Microsoft's "official" support policy is that teams are not supported, the reality is that all they ask you to do is break the team if they think it might be related, and then continue working on the problem with them.

    I've used both Broadcom's and Intel's adapter teaming software in the past without incident. Heck, I've even been forced to combine different vendors' NICs into teams (Broadcom internal NICs and Intel quad ports), and that works fine too.

    I worked for Microsoft Product Support, and I've built many a cluster. NIC teaming is supported on everything but the heartbeat NIC in an MSCS cluster. iSCSI isn't supported with NIC teaming, but that's irrelevant to MSCS, as it's not supported with non-clustered iSCSI clients, either. Use MPIO instead.

    The only "cluster" scenario where NIC teaming isn't supported is NLB arrays for public traffic.
    Good luck to all!
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    HeroPsycho wrote:
    astorrs wrote:
    Use the teaming software in the NIC driver to do adapter failover (or whatever the NIC vendor calls it); that way switch support is irrelevant. You will not be able to do any load balancing between the NICs, as that usually requires expensive switches that are chained together and act as one switch (shared tables, etc.).

    As for dynamik's question, an adapter or switch port is more likely to fail than an entire switch, so it's better than nothing.

    Forgive me, as I haven't done this with Hyper-V, but if it works the same way as VMware, you shouldn't team.

    MPIO doesn't work via unicast or multicast like a traditional NLB solution or NIC teaming does. For iSCSI, you shouldn't use NIC teaming; instead, use each NIC individually with a multipathing client to handle load balancing and failover. In such a scenario, each NIC gets its own IP; therefore, you don't do NIC teaming, and you can plug your NICs into alternating switches, as you would with the alternate SAN controller NIC pairs.

    iSCSI traffic shouldn't be sharing NICs with regular traffic anyway. Make sure you have two physical NICs dedicated to iSCSI traffic; VM traffic should go on other NICs.
    Sorry, I guess I wasn't clear. I wasn't suggesting teaming the iSCSI NICs, just the LAN NICs.

    As for comparisons to ESX, they're (unfortunately) totally different; you can think of Hyper-V and the way it does clusters as an extension of standard Windows with MSCS. You can't just add multiple pNICs to the vSwitch as you would with ESX. With Hyper-V you have to bind the VMs to a specific pNIC (hence the desire for teaming).

    Here is Microsoft's semi-official position:

    "NIC Teaming is a capability provided by our hardware partners such Intel and Broadcom. Microsoft supports our partners who provide this capability. This is true whether the customer is running Windows, Exchange, SQL Hyper-V, etc. We'll have a detailed KB article about this coming out soon."

    http://virtualizationreview.com/blogs/weblog.aspx?blog=2296
  • HeroPsycho Inactive Imported Users Posts: 1,940
    You can, actually, but you must use the NIC bonding agent on your server, not do it within Hyper-V.
    Good luck to all!
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    HeroPsycho wrote:
    astorrs wrote:
    I have implemented probably close to 50 MSCS clusters over the years and every single one of them used teaming. No client would ever tolerate a single point of failure like that. While Microsoft's "official" support policy is that teams are not supported, the reality is that all they ask you to do is break the team if they think it might be related, and then continue working on the problem with them.

    I've used both Broadcom's and Intel's adapter teaming software in the past without incident. Heck, I've even been forced to combine different vendors' NICs into teams (Broadcom internal NICs and Intel quad ports), and that works fine too.

    I worked for Microsoft Product Support, and I've built many a cluster. NIC teaming is supported on everything but the heartbeat NIC in an MSCS cluster. iSCSI isn't supported with NIC teaming, but that's irrelevant to MSCS, as it's not supported with non-clustered iSCSI clients, either. Use MPIO instead.

    The only "cluster" scenario where NIC teaming isn't supported is NLB arrays for public traffic.

    This is all I meant:
    Using teaming on the public or client networks is acceptable. However, if problems or issues seem to be related to teaming, Microsoft Product Support Services will require that teaming be disabled. If this resolves the problem or issue, you must seek assistance from the hardware manufacturer.

    http://support.microsoft.com/kb/254101/
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    HeroPsycho wrote:
    You can, actually, but you must use the NIC bonding agent on your server, not do it within Hyper-V.
    I know, isn't that what I've been saying and why this whole exchange started? ;)
  • HeroPsycho Inactive Imported Users Posts: 1,940
    You may very well have been. I've probably been reading a bit too quickly. :D
    Good luck to all!
  • PiotrIr Member Posts: 236
    Many thanks for your answers. I have clarity now. :)
    Have a good day and Best Regards