lsud00d wrote: » For anything mission critical you definitely want to team/interlace across physical NICs AND switches. I've done exactly what you described (R720s with 3 quad-port 1Gb NICs), and as long as you team across them (I LACP'd between server and switch) you reduce your SPOFs (single points of failure) at yet another step along the way. Given it was a Hyper-V infrastructure, the network segmentation was for Management, Storage, and Live Migration.
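To make the SPOF point concrete, here is a minimal sketch (illustrative only, with a hypothetical port layout; the network names mirror the Hyper-V segmentation above) that flags any logical network whose uplinks all sit on one physical NIC or one switch:

```python
from collections import defaultdict

# Hypothetical uplink layout: (physical NIC port, upstream switch) pairs
# backing each logical network. The LiveMigration entry is "wrong" on
# purpose, to show what the check catches.
team_plan = {
    "Management":    [("nic0-port0", "switch-a"), ("nic1-port0", "switch-b")],
    "Storage":       [("nic0-port1", "switch-a"), ("nic1-port1", "switch-b")],
    "LiveMigration": [("nic2-port0", "switch-a"), ("nic2-port1", "switch-a")],
}

def single_points_of_failure(plan):
    """Return networks whose uplinks share a single NIC or a single switch."""
    spofs = defaultdict(list)
    for network, uplinks in plan.items():
        nics = {port.split("-")[0] for port, _ in uplinks}  # e.g. "nic0"
        switches = {switch for _, switch in uplinks}
        if len(nics) < 2:
            spofs[network].append("all uplinks on one physical NIC")
        if len(switches) < 2:
            spofs[network].append("all uplinks on one switch")
    return dict(spofs)

for network, problems in single_points_of_failure(team_plan).items():
    print(f"{network}: {'; '.join(problems)}")
# -> LiveMigration: all uplinks on one physical NIC; all uplinks on one switch
```

The same idea applies one level down: teaming two ports on the same quad-port card protects against a port failure but not a card failure, which is why spreading each team across separate NICs and separate switches matters.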
darkerosxx wrote: » 10Gig links and call it a day.
dave330i wrote: » Most enterprises are using 10Gb.
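One reason the 10Gb answers keep coming up: an LACP team adds aggregate capacity, but the hash pins each flow to a single member link, so one storage stream or one live migration never runs faster than a single member. A rough back-of-the-envelope comparison (the ~94% usable-rate factor is an assumed protocol-overhead allowance, not a measured number):

```python
GBIT = 1_000_000_000  # bits per second
EFFICIENCY = 0.94     # assumed fraction of line rate left after Ethernet/IP/TCP overhead

def usable_mb_per_s(link_gbit, links=1):
    """Approximate usable throughput in megabytes per second."""
    return link_gbit * links * GBIT * EFFICIENCY / 8 / 1_000_000

print(f"4 x 1Gb team, aggregate:    ~{usable_mb_per_s(1, 4):.0f} MB/s")
print(f"4 x 1Gb team, single flow:  ~{usable_mb_per_s(1, 1):.0f} MB/s")  # capped at one member link
print(f"1 x 10Gb link, single flow: ~{usable_mb_per_s(10, 1):.0f} MB/s")
```

So a wide 1Gb team and a 10Gb link can look similar on paper in aggregate, but any single transfer is roughly ten times faster on the 10Gb link.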
kj0 wrote: » Unless you're pushing a lot of data, 4 x 10Gb would be fine:
2 x 10Gb onboard = 1 x storage, 1 x all other traffic
2 x 10Gb PCI card = failover (as above)
From your question, a single 10Gb would be more than enough. However, you should always separate your storage and also have failover set up - no single point of failure.
joelsfood wrote: » Minimum setup is 2 x 1Gb for iSCSI, 2 x 1Gb for management/VM networks. Separate onboard and PCIe if possible. You're definitely on the right track. Of course, I tend to go with the same 10Gb VIC 1240 or M81KR on all of mine, as that's what the blades come with.
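Reusing the single_points_of_failure() sketch from earlier in the thread, the minimum layout above might be modeled like this (hypothetical names again), with each pair split across the onboard NIC and a PCIe card so a failed adapter can't take down a whole network:

```python
# Each logical network gets one onboard port and one PCIe port,
# landing on different switches.
minimum_plan = {
    "iSCSI":   [("onboard-port0", "switch-a"), ("pcie-port0", "switch-b")],
    "Mgmt-VM": [("onboard-port1", "switch-a"), ("pcie-port1", "switch-b")],
}
print(single_points_of_failure(minimum_plan))  # -> {} (no SPOFs found)
```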
lsud00d wrote: » This is a very broad stroke in the context and direction of the thread--can you expound?
dave330i wrote: » All of my enterprise-level customers are using 10Gb. Many of my mid-size customers are using 10Gb. Places I see 1Gb are branch offices or small shops.
joelsfood wrote: » I'm about 50/50 on my small/medium clients being 10Gb. Small clients that colo tend to be 10Gb now. Small clients that are internal-only are often still 1Gb.
Deathmage wrote: » ...For clarification, are we talking 10Gb for the storage fabric (that makes sense to me), or does that also apply to the normal LAN fabric, aka 'Production' traffic?