vDS Dynamic Configuration

dave330i Member Posts: 2,091
Anyone read VMware vSphere Distributed Switch Best Practices regarding vDS dynamic configuration? Interesting read. One of the things it got me thinking about was MTU.

For optimal iSCSI traffic, the MTU should be set to 9000. Since a single vDS is deployed, changing the MTU on the vDS raises it for all network traffic carried by that switch. You could split iSCSI traffic onto a 2nd vDS, but then you're not taking full advantage of dynamic configuration.

Anyone have any opinion and/or experience on this matter they'd care to share?
2018 Certification Goals: Maybe VMware Sales Cert
"Simplify, then add lightness" -Colin Chapman

Comments

  • blargoe Member Posts: 4,174
    I don't have a lot of experience with running iSCSI on vDS, but I'm wondering why you'd need the 2nd vDS... Jumbo frames have to be enabled end to end, so each host's iSCSI vmk port and the physical switch both have to have the higher MTU set for it to take effect. The individual VMs would use whatever MTU their OS network adapter settings are set to (whatever the default is)... right? Am I missing something? (There's a per-host vmk MTU audit sketch at the end of the thread.)

    -b
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • dave330i Member Posts: 2,091
    I thought I'd read that any MTU mismatch is bad, but the network guy is telling me that 1500-byte traffic passing through a 9000 MTU path isn't a big deal, so this is a non-issue.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • meadIT Member Posts: 581
    Yeah, the real issue is when you have the two endpoints configured for jumbo frames but something in the middle won't pass them. I've seen some weird behavior in situations like this: you'll be able to see the device, and sometimes the datastores, but you won't be able to browse the datastores and the VMs will be unresponsive.
    CERTS: VCDX #110 / VCAP-DCA #500 (v5 & 4) / VCAP-DCD #10(v5 & 4) / VCP 5 & 4 / EMCISA / MCSE 2003 / MCTS: Vista / CCNA / CCENT / Security+ / Network+ / Project+ / CIW Database Design Specialist, Professional, Associate
  • jibbajabba Member Posts: 4,317
    We had a similar facepalm moment where the iSCSI switch was configured with an MTU of 9000 and so was the switch for the vSphere hosts, BUT we forgot the core switches. It was easy to miss because both ESX and the SAN showed the correct MTU; the bit in the middle was missing it :) The really bad thing is that some Cisco switches need a reload when you change the system MTU, so being a core switch made it a slightly more difficult task. A jumbo-sized vmkping like the one sketched at the end of the thread would have caught it.
    My own knowledge base made public: http://open902.com :p
  • meadIT Member Posts: 581
    jibbajabba wrote:
    The really bad thing is that Cisco switches need a reload when you change the MTU so being a core switch made it a slightly more difficult task.

    +1 This really sucks.
    CERTS: VCDX #110 / VCAP-DCA #500 (v5 & 4) / VCAP-DCD #10(v5 & 4) / VCP 5 & 4 / EMCISA / MCSE 2003 / MCTS: Vista / CCNA / CCENT / Security+ / Network+ / Project+ / CIW Database Design Specialist, Professional, Associate
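
Following up on blargoe's point about the vSphere side of the end-to-end requirement: a small pyVmomi sketch (vCenter name and credentials are hypothetical placeholders) that prints the MTU of every vmkernel adapter on every host, which makes it easy to spot an iSCSI vmk still sitting at the 1500 default.

    # Sketch only: report the MTU of every vmkernel adapter per host.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.local",          # hypothetical vCenter
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=ssl._create_unverified_context())  # lab only
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        for vnic in host.config.network.vnic:        # vmkernel adapters (vmk0, vmk1, ...)
            mtu = vnic.spec.mtu or 1500              # unset means the 1500 default
            print(f"{host.name:30} {vnic.device:6} MTU {mtu}")
    hosts.DestroyView()

    Disconnect(si)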
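
meadIT's "something in the middle won't pass them" symptom and jibbajabba's forgotten core switch show up in the same test: a vmkping with the don't-fragment bit and a jumbo-sized payload from the iSCSI vmkernel port to the SAN. Here's a sketch of running that check over SSH with paramiko; the host name, credentials, vmk name and SAN target IP are all hypothetical placeholders, and it assumes SSH is enabled on the ESXi host.

    # Sketch only: end-to-end jumbo frame check with vmkping, run over SSH.
    import paramiko

    ESXI_HOST = "esxi01.example.local"   # hypothetical host
    SAN_TARGET = "192.168.50.10"         # hypothetical iSCSI target portal
    ISCSI_VMK = "vmk1"                   # hypothetical iSCSI vmkernel port

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab only
    client.connect(ESXI_HOST, username="root", password="password")

    # -d sets "don't fragment"; -s 8972 is 9000 bytes minus 28 bytes of
    # IP/ICMP headers. If anything in the path (e.g. a core switch) is still
    # at 1500, this fails even though both endpoints report MTU 9000.
    cmd = f"vmkping -d -s 8972 -I {ISCSI_VMK} {SAN_TARGET}"
    stdin, stdout, stderr = client.exec_command(cmd)
    print(stdout.read().decode())
    print(stderr.read().decode())

    client.close()

A clean run here is a much stronger signal than the MTU values reported on the endpoints, which is exactly the trap described above.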