Nexus vPC with VMware VDS

Does anyone have experience setting up a vPC LACP connection with a VMware vSphere Distributed Switch (VDS)?
I have 2 Nexus switches and 4 ESX hosts with 2 10G NICs each.
I've connected one 10G NIC per host to each Nexus switch, for 4 connections per Nexus switch.
I created the LAG group on the VDS and added each NIC to the port channel.
On the Nexus switches, only one link on each is coming up; the others are suspended.
Here is the config on the Nexus side.
SW1
interface Ethernet1/5
  description NTNX1
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  channel-group 4 mode active
  no shutdown

interface Ethernet1/6
  description NTNX2
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  channel-group 4 mode active
  no shutdown

interface Ethernet1/7
  description NTNX3
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  channel-group 4 mode active
  no shutdown

interface Ethernet1/8
  description NTNX4
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  channel-group 4 mode active
  no shutdown

interface port-channel4
  speed 10000
  description NTNX1 VPC
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  spanning-tree port type edge trunk
  vpc 4
SW2
interface Ethernet1/5
  description NTNX1
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  channel-group 4 mode active
  no shutdown

interface Ethernet1/6
  description NTNX2
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  channel-group 4 mode active
  no shutdown

interface Ethernet1/7
  description NTNX3
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  channel-group 4 mode active
  no shutdown

interface Ethernet1/8
  description NTNX4
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  channel-group 4 mode active
  no shutdown

interface port-channel4
  speed 10000
  description NTNX1 VPC
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  spanning-tree port type edge trunk
  vpc 4
This is what I'm seeing with show port-channel summary:

Group Port-Channel  Type     Protocol  Member Ports
--------------------------------------------------------------------------------
4     Po4(SU)       Eth      LACP      Eth1/5(P)    Eth1/6(s)    Eth1/7(s)
                                       Eth1/8(s)
Thanks in advance.
Comments
I set up the Nexus vPC config first, then the VDS after; I'm not sure if the order matters.
Unfortunately I didn't set up the VDS, the VMware guys did, and I only know a little bit from a few docs I've read.
It's a net-new setup, so I may ask him to delete the VDS and start over with me watching.
I'm confident my config on the Nexus side was correct; I just didn't know if maybe it was a limitation on the VDS side of things.
There isn't a whole lot of documentation out there regarding Nexus vPC and VDS LAG groups.
show vpc consistency-parameters [global | interface port-channel <x>] - make sure that your vPC is configured correctly.
As networker050184 mentioned, LACP may be misconfigured on these ports... You can confirm it with the show lacp commands.
Channel group is 4 port channel is Po4
PDUs sent: 185246
PDUs rcvd: 4221
Markers sent: 0
Markers rcvd: 0
Marker response sent: 0
Marker response rcvd: 0
Unknown packets rcvd: 0
Illegal packets rcvd: 0
Lag Id: [ [(7f9b, 0-23-4-ee-be-1, 8004, 8000, 107), (ffff, c-c4-7a-49-8c-ce, b, ff, 8002)] ]
Operational as aggregated link since Wed Dec 31 20:00:00 1969
Local Port: Eth1/7 MAC Address= 0-23-4-ee-be-1
System Identifier=0x8000,0-23-4-ee-be-1
Port Identifier=0x8000,0x107
Operational key=32772
LACP_Activity=active
LACP_Timeout=Long Timeout (30s)
Synchronization=NOT_IN_SYNC
Collecting=false
Distributing=false
Partner information refresh timeout=Long Timeout (90s)
Actor Admin State=(Ac-1:To-1:Ag-1:Sy-0:Co-0:Di-0:De-0:Ex-0)
Actor Oper State=(Ac-1:To-0:Ag-1:Sy-0:Co-0:Di-0:De-0:Ex-0)
Neighbor: 0x8002
MAC Address= c-c4-7a-49-8c-ce
System Identifier=0xffff, Port Identifier=0xff,0x8002
Operational key=11
LACP_Activity=unknown
LACP_Timeout=Long Timeout (30s)
Synchronization=NOT_IN_SYNC
Collecting=false
Distributing=false
Partner Admin State=(Ac-0:To-1:Ag-0:Sy-0:Co-0:Di-0:De-0:Ex-0)
Partner Oper State=(Ac-1:To-0:Ag-1:Sy-0:Co-0:Di-0:De-0:Ex-0)
Aggregate or Individual(True=1)= 2
Legend:
        Type 1 : vPC will be suspended in case of mismatch

Name                        Type  Local Value            Peer Value
--------------------------  ----  ---------------------- ----------------------
Shut Lan                    1     No                     No
STP Port Type               1     Edge Trunk Port        Edge Trunk Port
STP Port Guard              1     None                   None
STP MST Simulate PVST       1     Default                Default
lag-id                      1     [(7f9b,                [(7f9b,
                                  0-23-4-ee-be-1, 8004,  0-23-4-ee-be-1, 8004,
                                  0, 0), (ffff,          0, 0), (ffff,
                                  c-c4-7a-49-8d-e, b, 0, c-c4-7a-49-8d-e, b, 0,
                                  0)]                    0)]
mode                        1     active                 active
Speed                       1     10 Gb/s                10 Gb/s
Duplex                      1     full                   full
Port Mode                   1     trunk                  trunk
Native Vlan                 1     1                      1
MTU                         1     1500                   1500
Admin port mode             1
Switchport MAC Learn        2     Enable                 Enable
vPC card type               1     Empty                  Empty
Allowed VLANs               -     106,116                106,116
Local suspended VLANs       -     -                      -
Do you see anything when issuing sh lacp neighbor interface port-channel xx?
A - Device is in Active mode P - Device is in Passive mode
port-channel4 neighbors

Partner's information
                  Partner                  Partner
Port              System ID                Port Number   Age     Flags
Eth1/5            65535,c-c4-7a-49-8d-e    0x8002        145003  SA

                  LACP Partner             Partner       Partner
                  Port Priority            Oper Key      Port State
                  255                      0xb           0x3d

Partner's information
                  Partner                  Partner
Port              System ID                Port Number   Age     Flags
Eth1/6            65535,c-c4-7a-49-8d-34   0x8002        148616  SA

                  LACP Partner             Partner       Partner
                  Port Priority            Oper Key      Port State
                  255                      0xb           0x5

Partner's information
                  Partner                  Partner
Port              System ID                Port Number   Age     Flags
Eth1/7            65535,c-c4-7a-49-8c-ce   0x8002        0       SA

                  LACP Partner             Partner       Partner
                  Port Priority            Oper Key      Port State
                  255                      0xb           0x5

Partner's information
                  Partner                  Partner
Port              System ID                Port Number   Age     Flags
Eth1/8            65535,c-c4-7a-4c-b-5a    0x8002        0       SA

                  LACP Partner             Partner       Partner
                  Port Priority            Oper Key      Port State
                  255                      0xb           0xd
Was able to work on this today.
If I create a vPC for each host (1 link going to each switch) and create a LAG group for each on the VDS, they all come up with no issues.
I tried lowering the bundle to 4 links; only 2 came up and 2 were suspended, the same as I was seeing when trying to do 8.
I took vPC out of the equation:
2 hosts plugged into the same switch, both ports members of the same port channel, and both NICs on the ESX side members of the same LAG group.
I get the same result: one link up, the other suspended.
I'm not a VMware guy, but what version are you using? I know there was a limitation when creating LAGs with the VMware DVS, but I don't recall which version solved this issue.
You can only bundle physical NICs from the same host.
So in my case, with 4 hosts with 2 NICs each, I would have to create 4 separate LAG groups, one per host.
I was thinking the Distributed Switch acted almost like a VSS.
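Given that limitation, the working layout is one port-channel/vPC per host, mirrored on both Nexus peers. A sketch for host NTNX1 only, using an assumed channel/vPC number of 11 (each host gets its own unique number and member interface):

```
! Same config on both vPC peer switches (SW1 and SW2)
interface Ethernet1/5
  description NTNX1
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  channel-group 11 mode active
  no shutdown

interface port-channel11
  description NTNX1 vPC
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  spanning-tree port type edge trunk
  vpc 11
```

On the VDS side, each host then needs its own matching LAG (LACP mode Active) containing only that host's two uplinks.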
We ended up just making all 8 links individual trunks and leveraging VMware NIC teaming and failover.
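For the record, that fallback just drops the channel-group and lets the VDS teaming policy handle load balancing and failover; each of the 8 interfaces would look roughly like this (a sketch):

```
interface Ethernet1/5
  description NTNX1
  switchport mode trunk
  switchport trunk allowed vlan 106,116
  spanning-tree port type edge trunk
  no shutdown
```

On the VDS, the uplinks then use a non-LACP teaming policy such as "Route based on originating virtual port".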
Maybe time to tackle the VCP:NV, haha.
Oh boy, I missed this detail haha xD