VMware DVS

RTmarc Senior Member Posts: 1,082 ■■■□□□□□□□
Just curious. What are your thoughts on more switches with fewer port groups per switch, versus fewer switches with more port groups per switch?

This stems from a conversation I had with a colleague.

Comments

  • Mishra MIPS processor please Member Posts: 2,468 ■■■■□□□□□□
    They are both a management nightmare once you get to 50-100+ ESX servers in your environment. I would suggest using distributed switches unless your environment has significant differences between each vSwitch.
    My blog http://www.calegp.com

    You may learn something!
  • RTmarc Senior Member Posts: 1,082 ■■■□□□□□□□
    Oh, I'm talking about distributed switches. The question was about other people's preferences: more switches with fewer port groups, or fewer switches with many port groups.
  • astorrs Drops by now and again Member Posts: 3,139 ■■■■■■□□□□
    I'm not sure you're thinking about this correctly.

    Build your vSwitches to segregate the physical connections attached to each switch (since a pNIC can only be connected to one vSwitch). Build your port groups to satisfy the logical divisions you need (VLANs, etc.).

    Not sure what more there is to think about... maybe I'm missing something. The environment of ~50 hosts I've been deploying these last couple months has a single vSwitch (ESXi 3.5U4) per host with ~30 port groups (lots of VLANs in their environment) connected to redundant 10G pNICs.

    If we're talking about dvSwitches (vDS is a stupid acronym in my book) the same holds true - and don't forget you can only have 16 dvSwitches per vCenter instance.
  • Mishra MIPS processor please Member Posts: 2,468 ■■■■□□□□□□
    Are most 10gig nics you can buy supported by ESX3.5u4? I know this is off topic, sorry...
  • astorrs Drops by now and again Member Posts: 3,139 ■■■■■■□□□□
    Mishra wrote: »
    Are most 10gig nics you can buy supported by ESX3.5u4? I know this is off topic, sorry...
    All the common ones used by the server vendors:

    • Broadcom NetXtreme II 57710 10Gigabit Ethernet
    • Broadcom NetXtreme II 57711E 10Gigabit Ethernet
    • Cisco UCS 82598KR-CI 10-Gigabit Ethernet
    • HP NC510C PCIe 10 Gigabit Server Adapter
    • Intel 10 Gigabit AF DA Dual Port Server Adapter
    • Intel 10 Gigabit AT Server Adapter
    • Intel 10 Gigabit SR Dual Port Express Module
    • Intel 10 Gigabit XF LR Server Adapter
    • Intel 10 Gigabit XF SR Dual Port Server Adapter
    • Intel 10 Gigabit XF SR Server Adapter
    • Intel 82598EB 10 Gigabit AT CX4 Network Connection
    • NetXen NXB-10GCX4
    • NetXen NXB-10GXSR
    • NetXen NXB-10GXxR
    • ServerEngines PRIMERGY BX600 10GbE I/O Module
    • Sun Blade 6000 10GbE Multi-Fabric Network Express Module
    • Sun Multithreaded 10GbE Networking Card X1027A-z
    • Sun Multithreaded 10GbE Networking Card X1028A-z
    • plus a bunch of CNAs (Gen-2 cards require you to add the drivers with esxupdate, though)
    (we're using Broadcom)
  • jibbajabba Google Ninja Member Posts: 4,317 ■■■■■■■■□□
    astorrs wrote: »
    Build your vSwitches based on & to segregate the physical connections that are attached to that switch (since a pNIC can only be connected to one vSwitch). Build your portgroups to satisfy the logical divisions you need (VLANs, etc).

    Yepp, that :)

    I think most of the clusters I have installed normally have a maximum of two:

    One virtual switch connected to two physical NICs, used for the Service Console and VMkernel, running over 10GbE.
    One virtual switch connected to two physical NICs, used for VM traffic, with a port group for each individual VLAN, running over GbE.

    In general we keep it like for like: for each physical switch required (different networks, speeds, etc.) we use one virtual switch.
    My own knowledge base made public: http://open902.com :p
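
The design rules the thread converges on (a pNIC belongs to exactly one vSwitch; port groups carry the logical VLAN divisions; jibbajabba's two-switch layout) can be sketched as a toy model. This is plain Python for illustration only, not VMware's API; in practice you would use PowerCLI or esxcfg-vswitch, and all the names here (vmnic0, the VLAN IDs, the port group names) are made up:

```python
class VSwitch:
    """Minimal stand-in for a virtual switch: uplinks plus port groups."""

    def __init__(self, name):
        self.name = name
        self.pnics = []        # physical uplinks on this switch
        self.port_groups = {}  # port group name -> VLAN ID

    def add_pnic(self, pnic, attached):
        # Enforce the rule from the thread: a pNIC can only be
        # connected to one vSwitch at a time.
        if pnic in attached:
            raise ValueError(f"{pnic} is already attached to {attached[pnic]}")
        attached[pnic] = self.name
        self.pnics.append(pnic)

    def add_port_group(self, name, vlan_id):
        self.port_groups[name] = vlan_id

# jibbajabba's layout, sketched: one host-level map of pNIC ownership,
# one switch for management traffic, one for VM traffic.
attached = {}  # pNIC -> owning vSwitch

mgmt = VSwitch("vSwitch0")        # Service Console + VMkernel over 10GbE
mgmt.add_pnic("vmnic0", attached)
mgmt.add_pnic("vmnic1", attached)
mgmt.add_port_group("Service Console", vlan_id=10)
mgmt.add_port_group("VMkernel", vlan_id=20)

vm_traffic = VSwitch("vSwitch1")  # VM traffic over GbE, one port group per VLAN
vm_traffic.add_pnic("vmnic2", attached)
vm_traffic.add_pnic("vmnic3", attached)
for vlan in (100, 101, 102):
    vm_traffic.add_port_group(f"VLAN{vlan}", vlan_id=vlan)
```

Trying to attach vmnic0 to vSwitch1 as well would raise a ValueError, which is exactly the constraint astorrs points out: segregation between switches happens at the pNIC level, while the logical divisions live in the port groups.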