Design Question: 3 or 4 NICs per service, or more?

Deathmage Banned Posts: 2,496
Hey guys,

So I'm starting to think about a database host design, and I'm leaning towards a 3-uplink design per service, say svMotion, vMotion, and Production (Production being the critical one in this design), while Management would be fine with 2 uplinks. The idea is that a host running a database gets a load-balanced uplink pair, which is why I'm leaning towards 3 per service, with one acting as a spare. I'm also thinking of 3 hosts, with one sitting unused as an HA host in case server 1 or 2 drops.

But my question for those of you who use VMware in larger-scale operations with mission-critical databases: do you use more than 3 NICs, say 4 or 5? My thought is to take, say, Dell R720s as an example, add two 4-port gigabit riser cards, and spread the uplinks across all three cards (the third being the factory 4 ports). That gives more redundancy in the event of a whole riser dying, and from a performance standpoint the throughput across 3 cards would be way better than having them all on one NIC (which would be a dumb layout anyway, even on an enterprise-grade server).
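For reference, here's the sort of thing I have in mind on the vSphere side. This is only a rough pyVmomi sketch with made-up vmnic numbering (assuming vmnic0-3 are the onboard ports and vmnic4-7 / vmnic8-11 are the two riser cards), not a finished script:

```python
# Rough sketch, not production code: assumes pyVmomi and made-up vmnic numbers
# (vmnic0-3 onboard, vmnic4-7 on riser card 1, vmnic8-11 on riser card 2).
from pyVmomi import vim

def build_production_vswitch(network_system):
    """Create a 'Production' vSwitch whose uplinks each sit on a different card."""
    spec = vim.host.VirtualSwitch.Specification()
    spec.numPorts = 128
    # One uplink per physical card, so losing a whole riser can't kill the service.
    spec.bridge = vim.host.VirtualSwitch.BondBridge(
        nicDevice=['vmnic0', 'vmnic4', 'vmnic8'])
    spec.policy = vim.host.NetworkPolicy(
        nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
            policy='loadbalance_srcid',              # default port-ID balancing
            nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                activeNic=['vmnic0', 'vmnic4'],      # two active uplinks
                standbyNic=['vmnic8'])))             # third card kept as the spare
    network_system.AddVirtualSwitch(vswitchName='vSwitch-Production', spec=spec)

# e.g. build_production_vswitch(host.configManager.networkSystem)
```

Same idea would repeat for the vMotion/svMotion switches, just with different uplink picks.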

Just thought I'd check in with you guys and see what you do.

Comments

  • lsud00d Member Posts: 1,571
    For anything mission critical you definitely want to team/interlace across physical NICs AND switches.

    I've done exactly what you described (R720s w/ 3 quad-port Gb NICs), and as long as you team across them (I used LACP between server and switch) you reduce your SPOFs (single points of failure) at yet another step along the way. Since it was a Hyper-V infrastructure, the network segmentation was Management, Storage, and Live Migration.
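    One VMware-specific caveat: LACP proper is only supported on the Distributed Switch, so on a standard vSwitch the closest equivalent is IP-hash teaming paired with a static (non-LACP) port channel on the physical switch. A rough pyVmomi sketch of that variant (the vSwitch name is made up, and it assumes a connected HostSystem object called `host`):

    ```python
    # Sketch only: flip an existing standard vSwitch to IP-hash teaming, which
    # is what a static port channel on the physical switch expects.
    # Assumes pyVmomi and a connected HostSystem object called `host`.
    net_sys = host.configManager.networkSystem

    vswitch = next(vs for vs in host.config.network.vswitch
                   if vs.name == 'vSwitch-Production')     # name is an assumption
    spec = vswitch.spec                                    # start from the current spec
    spec.policy.nicTeaming.policy = 'loadbalance_ip'       # route based on IP hash
    spec.policy.nicTeaming.notifySwitches = True
    net_sys.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=spec)
    ```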
  • darkerosxx Banned Posts: 1,343
    10Gig links and call it a day.
  • Deathmage Banned Posts: 2,496
    lsud00d wrote: »
    For anything mission critical you definitely want to team/interlace across physical NICs AND switches.

    I've done exactly what you described (R720s w/ 3 quad-port Gb NICs), and as long as you team across them (I used LACP between server and switch) you reduce your SPOFs (single points of failure) at yet another step along the way. Since it was a Hyper-V infrastructure, the network segmentation was Management, Storage, and Live Migration.

    Kool beans. I'm also thinking of 6 uplinks now for production and iSCSI; I totally forgot that I'd be running a redundant switch array. I could do 3 uplinks if storage were on a 10G fabric.

    This is a rough plan for the upcoming build-out. They use all Dell servers, so I'm really going to need to run DPACK and see what the IOPS are on those database servers to better pin down the design.

    I think from a production LAN standpoint gigabit will be fine as a bonded connection with a standby spare; it's really storage where I'd build out a two-switch design in a heartbeat config. I love EqualLogics, and those SANs have 4 ports per controller, so two per switch is what I'm leaning towards, with a 10G backplane (though that would imply I'm using it as a load-balanced switching array, which has another benefit too; so many choices, lol).
    darkerosxx wrote: »
    10Gig links and call it a day.

    The thought crossed my mind...
  • lsud00d Member Posts: 1,571
    darkerosxx wrote: »
    10Gig links and call it a day.

    darkero$$$x* ;)
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Most enterprises are using 10Gb.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • kj0 Member Posts: 767
    Unless you're pushing a lot of data, 4 x 10Gb would be fine.

    2 x 10Gb onboard: 1 x storage, 1 x everything else
    2 x 10Gb PCIe card: failover (as above)

    From your question, a single 10Gb would be more than enough. However, you should always separate your storage and also have failover set up - no single point of failure.
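    If the storage side is iSCSI, the usual way to get that separation plus failover is one VMkernel port per storage uplink, bound to the software iSCSI adapter so vSphere multipathing handles path failover. A rough pyVmomi sketch (the vmk names and the rest are assumptions, not a drop-in script):

    ```python
    # Sketch only: bind two dedicated iSCSI VMkernel ports to the software
    # iSCSI adapter so vSphere multipathing fails over between storage NICs.
    # Assumes pyVmomi, a connected HostSystem `host`, the software initiator
    # already enabled, and vmk1/vmk2 each living on its own storage uplink.
    from pyVmomi import vim

    iscsi_mgr = host.configManager.iscsiManager
    storage = host.configManager.storageSystem

    # Find the (first) iSCSI HBA; here that's assumed to be the software initiator.
    sw_iscsi = next(hba for hba in storage.storageDeviceInfo.hostBusAdapter
                    if isinstance(hba, vim.host.InternetScsiHba))

    for vmk in ('vmk1', 'vmk2'):       # one VMkernel port per storage NIC
        iscsi_mgr.BindVnic(iScsiHbaName=sw_iscsi.device, vnicDevice=vmk)
    ```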
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • joelsfood Member Posts: 1,027 ■■■■■■□□□□
    Minimum setup is 2 x 1Gb for iSCSI and 2 x 1Gb for management/VM networks. Separate onboard and PCIe if possible. You're definitely on the right track.
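    A quick way to verify the onboard/PCIe split is to dump each physical NIC's PCI address and make sure every team spans more than one bus. Rough pyVmomi sketch (assumes a connected HostSystem object called `host`):

    ```python
    # Sketch only: group a host's physical NICs by PCI bus so you can eyeball
    # whether each team really spans the onboard ports and the PCIe card(s).
    from collections import defaultdict

    nics_by_bus = defaultdict(list)
    for pnic in host.config.network.pnic:        # vim.host.PhysicalNic objects
        # pnic.pci looks like '0000:01:00.0'; everything before the last ':'
        # is the domain:bus, which differs per card.
        nics_by_bus[pnic.pci.rsplit(':', 1)[0]].append(pnic.device)

    for bus, nics in sorted(nics_by_bus.items()):
        print(bus, '->', ', '.join(sorted(nics)))
    ```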

    Of course, I tend to go with the same 10Gb VIC1240 or MR81KR on all of mine, as that's what the blades come with. :)
  • Deathmage Banned Posts: 2,496
    dave330i wrote: »
    Most enterprises are using 10Gb.
    kj0 wrote: »
    Unless you're pushing a lot of data, 4 x 10Gb would be fine.

    2 x 10Gb onboard: 1 x storage, 1 x everything else
    2 x 10Gb PCIe card: failover (as above)

    From your question, a single 10Gb would be more than enough. However, you should always separate your storage and also have failover set up - no single point of failure.
    joelsfood wrote: »
    Minimum setup is 2 x 1Gb for iSCSI and 2 x 1Gb for management/VM networks. Separate onboard and PCIe if possible. You're definitely on the right track.

    Of course, I tend to go with the same 10Gb VIC1240 or MR81KR on all of mine, as that's what the blades come with.

    Well, might as well milk my 'honeymoon' stage for all it's worth; 10G is going to be pricey, but it does sustain growth.

    Thanks guys for the feedback, much appreciated!
  • lsud00d Member Posts: 1,571
    dave330i wrote: »
    Most enterprises are using 10GB.

    This is a very broad stroke in the context and direction of the thread; can you expound?
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    lsud00d wrote: »
    This is a very broad stroke in the context and direction of the thread; can you expound?

    All of my enterprise-level customers are using 10Gb. Many of my mid-size customers are using 10Gb. The places where I still see 1Gb are branch offices or small shops.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • Deathmage Banned Posts: 2,496
    dave330i wrote: »
    All of my enterprise-level customers are using 10Gb. Many of my mid-size customers are using 10Gb. The places where I still see 1Gb are branch offices or small shops.

    ...For clarification, are we talking 10Gb for the storage fabric (that makes sense to me), or does that also apply to the normal LAN fabric, aka 'Production' traffic?
  • joelsfood Member Posts: 1,027 ■■■■■■□□□□
    I'm about 50/50 on my small/medium clients being 10G. Small clients that colo tend to be 10G now. Small clients that are internal-only are often still 1G.
  • Deathmage Banned Posts: 2,496
    joelsfood wrote: »
    I'm about 50/50 on my small/medium clients being 10G. Small clients that colo tend to be 10G now. Small clients that are internal-only are often still 1G.


    See, my only problem with going from, say, a 1G core to a 10G core is this: if you're having contention for network resources, whether it's bandwidth, line degradation, duplex issues, broadcast/multicast storms, etc., then going from 1G to 10G only 'fixes' the problem artificially, and only until the pipe is saturated again..

    I see so many people in small/medium-sized businesses just upgrading from 1G to 10G thinking it will solve their problems. Half the time I have to bite my tongue, because the customer sometimes needs to learn on their own, only to find that throughput wasn't the cause of the problem...

    that's my only beef with going from 1G to 10G...
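    One way to settle that argument before buying 10G is to pull the per-vmnic throughput counters from vCenter and see whether the 1G links ever get near line rate. Rough pyVmomi sketch (assumes an existing ServiceInstance `si` and HostSystem `host`):

    ```python
    # Sketch only: sample recent per-vmnic throughput from the vCenter
    # performance manager (realtime 20s interval, values in KBps).
    # Assumes pyVmomi with a ServiceInstance `si` and a HostSystem `host`.
    from pyVmomi import vim

    perf = si.RetrieveContent().perfManager

    # Build a 'group.name.rollup' -> counter-key map so we can pick our metrics.
    counters = {'%s.%s.%s' % (c.groupInfo.key, c.nameInfo.key, c.rollupType): c.key
                for c in perf.perfCounter}

    metric_ids = [vim.PerformanceManager.MetricId(counterId=counters[name],
                                                  instance='*')   # every vmnic
                  for name in ('net.received.average', 'net.transmitted.average')]

    query = vim.PerformanceManager.QuerySpec(entity=host, metricId=metric_ids,
                                             intervalId=20, maxSample=15)

    for result in perf.QueryPerf(querySpec=[query]):
        for series in result.value:
            if series.id.instance:             # '' is the host-wide aggregate
                print(series.id.instance, 'peak KBps:', max(series.value))
    ```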
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Deathmage wrote: »
    ...For clarification, are we talking 10Gb for the storage fabric (that makes sense to me), or does that also apply to the normal LAN fabric, aka 'Production' traffic?

    Storage is usually 8Gb FC or 10Gb iSCSI/NFS. The LAN is 10Gb for the servers.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman