PiotrIr wrote: Unfortunately, NIC teaming is not certified for Windows Server 2008 clustering and Hyper-V. MPIO will provide iSCSI network redundancy. The heartbeat network is also not supported with NIC teaming, but in that case I can use two NICs in different subnets. But what about Hyper-V? How can I provide failover for the guests? Any ideas?
astorrs wrote: Use the teaming software in the NIC driver to do adapter failover (or whatever the NIC vendor calls it); that way switch support is irrelevant. You won't be able to do any load balancing between the NICs, as that usually requires expensive switches that are chained together and act as a single switch (sharing tables, etc.). As for dynamik's question, an adapter or switch port is more likely to fail than an entire switch, so it's better than nothing.
astorrs wrote: I have implemented probably close to 50 MSCS clusters over the years, and every single one of them used teaming. No client would ever tolerate a single point of failure like that. While Microsoft's "official" position is that teams are not supported, the reality is that all they ask you to do is break the team if they think it might be related, and then they continue working on the problem with you. I've used both Broadcom's and Intel's adapter teaming software in the past without incident. Heck, I've even been forced to combine vendors' NICs into teams (Broadcom internal NICs and Intel quad ports), and that works fine too.
HeroPsycho wrote: astorrs wrote: Use the teaming software in the NIC driver to do adapter failover (or whatever the NIC vendor calls it); that way switch support is irrelevant. You won't be able to do any load balancing between the NICs, as that usually requires expensive switches that are chained together and act as a single switch (sharing tables, etc.). As for dynamik's question, an adapter or switch port is more likely to fail than an entire switch, so it's better than nothing. Forgive me, as I haven't done this with Hyper-V, but if it works the same way as VMware, you shouldn't team. MPIO doesn't work via unicast or multicast like a traditional NLB solution or NIC teaming. For iSCSI, you shouldn't use NIC teaming; instead, use each NIC individually and let a multipathing client handle load balancing and failover. In that scenario each NIC gets its own IP, so there is no NIC teaming, and you can plug the NICs into alternating switches, just as you would with the alternate SAN controller NIC pairs. iSCSI traffic shouldn't be sharing NICs with regular traffic anyway. Make sure you have two physical NICs dedicated to iSCSI traffic, and put VM traffic on other NICs.
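For reference, on Server 2008 the Microsoft side of that setup looks roughly like the following. This is only a sketch, assuming the in-box Microsoft DSM and the Microsoft iSCSI initiator; the interface names and addresses are examples, not anything from this thread:

    rem Give each dedicated iSCSI NIC its own address (one per iSCSI subnet)
    netsh interface ipv4 set address name="iSCSI-1" static 192.168.50.11 255.255.255.0
    netsh interface ipv4 set address name="iSCSI-2" static 192.168.51.11 255.255.255.0

    rem Install the Multipath I/O feature, then let the Microsoft DSM claim all iSCSI devices (forces a reboot)
    ServerManagerCmd -install Multipath-IO
    mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

    rem After the reboot, confirm the iSCSI LUNs are now managed by MPIO
    mpclaim -s -d

Then log on to each target once per iSCSI NIC with the initiator's "Enable multi-path" option, so MPIO has two sessions to load balance and fail over between.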
HeroPsycho wrote: astorrs wrote: I have implemented probably close to 50 MSCS clusters over the years, and every single one of them used teaming. No client would ever tolerate a single point of failure like that. While Microsoft's "official" position is that teams are not supported, the reality is that all they ask you to do is break the team if they think it might be related, and then they continue working on the problem with you. I've used both Broadcom's and Intel's adapter teaming software in the past without incident. Heck, I've even been forced to combine vendors' NICs into teams (Broadcom internal NICs and Intel quad ports), and that works fine too. I worked for Microsoft Product Support, and I've built many a cluster. NIC teaming is supported on everything but the heartbeat NIC in an MSCS cluster. iSCSI isn't supported with NIC teaming, but that's irrelevant to MSCS, since it isn't supported with non-clustered iSCSI clients either; use MPIO instead. The only type of "cluster" that NIC teaming isn't supported on is an NLB array for public traffic.
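On the heartbeat point from the original question: the usual approach on 2008 is two un-teamed NICs, each with a static address in its own private subnet and no default gateway. Failover Clustering treats each subnet as a separate cluster network, so you get redundant internal paths without teaming. A minimal sketch (the connection names and subnets are just examples):

    rem Two stand-alone heartbeat NICs, each in its own private subnet, no default gateway
    netsh interface ipv4 set address name="Heartbeat-1" static 10.10.1.1 255.255.255.0
    netsh interface ipv4 set address name="Heartbeat-2" static 10.10.2.1 255.255.255.0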
Using teaming on the public or client networks is acceptable. However, if problems or issues seem to be related to teaming, Microsoft Product Support Services will require that teaming be disabled. If this resolves the problem or issue, you must seek assistance from the hardware manufacturer. (http://support.microsoft.com/kb/254101/)
HeroPsycho wrote: You can, actually, but you must use the NIC bonding agent that comes with your server, not do it within Hyper-V.
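In practice that means building the team in the vendor's own tool on the parent partition (BACS for Broadcom, PROSet for Intel), confirming that Windows now sees the team as one logical adapter, and then binding the Hyper-V external virtual network to that team adapter in Virtual Network Manager. A quick, purely illustrative way to check which adapters the parent partition sees (the team's name depends on the vendor tool):

    rem The team should appear here as one additional logical NIC alongside the physical ports
    wmic nic where "NetEnabled=true" get NetConnectionID,Name,Speed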