E1000 vs VMXNET3

Deathmage Banned Posts: 2,496
Hey guys,

So I remember from my VCP studies that these two NIC drivers each have benefits and drawbacks relative to the other. I've been using the E1000 for our SQL/file servers, but I've been reading online that the VMXNET3 driver may be a better choice for high-IOPS VMs: the E1000 limits throughput over the NIC because it's a software emulation of an Intel adapter, while the VMXNET3 driver is made by VMware and can integrate better with a VM than the Intel variant.

I'm always looking to improve the performance of the cluster, so I'm always thinking what-if scenarios.

I found this brief blog post on the topic: VMXNET3 vs E1000E and E1000 – part 1 | Rickard Nobel

I figured I'd bounce this thought off everyone's head; I'd like to hear your takes on this topic.

Comments

  • Architect192 Member Posts: 157 ■■■□□□□□□□
    Hey Trevor... Pretty childish of you to block me on LinkedIn for disagreeing with your blog posts. Glad to see you've opened up your mind to looking at better options.

    The VMXNET3 has always been the preferred choice for virtualized workloads unless there are compatibility issues. The blog post you linked shows exactly what I was referring to yesterday.

    You get 10Gb of bandwidth between VMs on the same host / across hosts (if the hardware layer supports it, of course), lower overhead on the hypervisor, etc. You can read VMware's take on it here:

    VMware KB: Choosing a network adapter for your virtual machine
    Current: VCAP-DCA/DCD, VCP-DCV2/3/4/5, VCP-NV 6 - CCNP, CCNA Security - MCSE: Server Infrastructure 2012 - ITIL v3 - A+ - Security+
    Working on: CCNA Datacenter (2nd exam), Renewing VMware certs...
  • Deathmage Banned Posts: 2,496
    Architect192 wrote: »
    Hey Trevor... Pretty childish of you to block me on LinkedIn for disagreeing with your blog posts. Glad to see you've opened up your mind to looking at better options.

    The VMXNET3 has always been the preferred choice for virtualized workloads unless there are compatibility issues. The blog post you linked shows exactly what I was referring to yesterday.

    You get 10Gb of bandwidth between VMs on the same host / across hosts (if the hardware layer supports it, of course), lower overhead on the hypervisor, etc. You can read VMware's take on it here:

    VMware KB: Choosing a network adapter for your virtual machine

    Thought you were some completely random person. I had no idea who it was; I sometimes have way too many people on there to know who everyone is. I sometimes get some pretty interesting people on there who message on my posts and then ask for shady things. I'll unblock you now that I know who it was ;)

    Just in the future, please make it one post; my phone got blown up with the 5+ replies to that post in 5 minutes. It was super annoying having the ding on my phone going off in a meeting. I had no problem with your posts, just the frequency of them... I pressed block because my boss was giving me a dirty look...
  • Architect192 Member Posts: 157 ■■■□□□□□□□
    Deathmage wrote: »
    Thought you were some completely random person. I had no idea who it was; I sometimes have way too many people on there to know who everyone is. I sometimes get some pretty interesting people on there who message on my posts and then ask for shady things. I'll unblock you now that I know who it was ;)

    Just in the future, please make it one post; my phone got blown up with the 5+ replies to that post in 5 minutes. It was super annoying having the ding on my phone going off in a meeting... I pressed block because my boss was giving me a dirty look...

    LinkedIn wouldn't let me (post too long), so I had to make multiple posts. You work / have meetings on Saturdays?
    Current: VCAP-DCA/DCD, VCP-DCV2/3/4/5, VCP-NV 6 - CCNP, CCNA Security - MCSE: Server Infrastructure 2012 - ITIL v3 - A+ - Security+
    Working on: CCNA Datacenter (2nd exam), Renewing VMware certs...
  • Deathmage Banned Posts: 2,496
    Architect192 wrote: »
    LinkedIn wouldn't let me (post too long), so I had to make multiple posts. You work / have meetings on Saturdays?

    Indeed, been doing a network upgrade overhaul: going from a flat 192.0.0.0/24 to a single /23 (two /24-sized blocks, split static/dynamic) with eight /27s for departmental security on our new six-switch N3048 super-stack. So the weekend, with everyone home, is the ideal time. I was on a conference call yesterday when my phone was blowing up from the posts. :)
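A plan like that is easy to sanity-check with Python's stdlib `ipaddress` module. This is just a sketch of the split described above; the 192.168.0.0/23 prefix is a placeholder, not the real production range:

```python
import ipaddress

# Hypothetical plan: one /23 viewed as two /24 "blocks", with eight /27
# departmental subnets carved out of the first /24.
# 192.168.0.0/23 is a stand-in prefix, not the actual network.
site = ipaddress.ip_network("192.168.0.0/23")

blocks = list(site.subnets(new_prefix=24))            # two /24s (static/dynamic)
departments = list(blocks[0].subnets(new_prefix=27))  # eight /27s

print(len(blocks))                       # 2
print(len(departments))                  # 8
print(departments[0])                    # 192.168.0.0/27
print(departments[0].num_addresses - 2)  # 30 usable hosts per /27
```

Each /27 leaves 30 usable hosts after the network and broadcast addresses, which is the trade-off to weigh per department.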
  • Deathmage Banned Posts: 2,496
    Architect192 wrote: »
    Hey Trevor... Pretty childish of you to block me on LinkedIn for disagreeing with your blog posts. Glad to see you've opened up your mind to looking at better options.

    The VMXNET3 has always been the preferred choice for virtualized workloads unless there are compatibility issues. The blog post you linked shows exactly what I was referring to yesterday.

    You get 10Gb of bandwidth between VMs on the same host / across hosts (if the hardware layer supports it, of course), lower overhead on the hypervisor, etc. You can read VMware's take on it here:

    VMware KB: Choosing a network adapter for your virtual machine

    BTW, thanks for the information. I'm really learning more on the job, past my VCP studies, than I did taking the test. Applying book learning and actually learning by doing are two different things.
  • Architect192 Member Posts: 157 ■■■□□□□□□□
    Deathmage wrote: »
    BTW, thanks for the information. I'm really learning more on the job, past my VCP studies, than I did taking the test. Applying book learning and actually learning by doing are two different things.

    Indeed. I learn stuff all the time, and I adapt my "best practices" regularly based on new findings/suggestions/recommendations. There is no single way of doing things, but there are definitely ways of how NOT to do things :)
    Current: VCAP-DCA/DCD, VCP-DCV2/3/4/5, VCP-NV 6 - CCNP, CCNA Security - MCSE: Server Infrastructure 2012 - ITIL v3 - A+ - Security+
    Working on: CCNA Datacenter (2nd exam), Renewing VMware certs...
  • Deathmage Banned Posts: 2,496
    Architect192 wrote: »
    Indeed. I learn stuff all the time, and I adapt my "best practices" regularly based on new findings/suggestions/recommendations. There is no single way of doing things, but there are definitely ways of how NOT to do things :)

    I concur. I've just never run into an issue with, for instance, deleting the SoftwareDistribution folder after applying Windows updates. Those files can sit there for months and do nothing, so why keep them? I've always removed them and they've never caused an issue, besides freeing up space. The Uninstaller folder, like you said, can be removed fine and is normally the other folder to be emptied monthly.
  • kj0 Member Posts: 767
    VMXNET3 is the automatic choice for me unless there's a really good reason not to use it (3.5 VMs migrated to 5.1, for example).

    I even change my vmx file for my VMware Workstation to use VMXNET3.
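For Workstation, that change is just one key in the .vmx file (`ethernet0.virtualDev = "vmxnet3"`). A rough stdlib-Python sketch of scripting it; the key and value are the standard .vmx ones, but the helper function and the idea of automating it are illustrative, not an official tool:

```python
import re
from pathlib import Path

def use_vmxnet3(vmx_path: str, eth_index: int = 0) -> None:
    """Set (or append) ethernetN.virtualDev = "vmxnet3" in a .vmx file.
    Run with the VM powered off, and back the file up first."""
    key = f"ethernet{eth_index}.virtualDev"
    path = Path(vmx_path)
    lines = path.read_text().splitlines()
    pat = re.compile(rf'^\s*{re.escape(key)}\s*=')
    hit = False
    for i, line in enumerate(lines):
        if pat.match(line):
            lines[i] = f'{key} = "vmxnet3"'  # overwrite e1000/e1000e/etc.
            hit = True
    if not hit:
        # Key absent means Workstation picked a default adapter; add it.
        lines.append(f'{key} = "vmxnet3"')
    path.write_text("\n".join(lines) + "\n")
```

The guest still needs VMware Tools installed for the vmxnet3 driver to bind.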

    Check out Mike Webster's posts on Long White Clouds. He does some very good write-ups on E1000/VMXNET3 and SQL, etc.
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • kj0 Member Posts: 767
    A few hours later, and here you go. Have a read here, but also check out some of Mike's SQL threads as well. And if you need to, hit Michael up on Twitter.

    VMware vSphere 5.5 Virtual Network Adapter Performance | Long White Virtual Clouds
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • Deathmage Banned Posts: 2,496
    That's quite the interesting article.

    I'm curious: do you guys deploy your servers on their own VLAN and then enable jumbo frames on that VLAN, so that the inter-vSwitch traffic and normal inter-physical-switch traffic is faster for servers that communicate with each other, while the end users on the same switching fabric sit on a different VLAN moving at the normal 1500 MTU?

    I mean, it seems interesting, considering most of the business-critical servers at my job sit on one host so they stay local to the vSwitch (i.e. SQL, the application server (ERP tied into SQL), and the file server), and only pass the traffic that has been processed out to the physical network.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Jumbo frames are a pain in the neck, both to implement and to troubleshoot. They need to be enabled end-to-end, on everything in between. Not worth the effort, in my opinion, for minimal gains. The only traffic that may perhaps gain from jumbo frames is vMotion traffic, but then you've got multi-NIC vMotion, which yields better results. If there's a mismatch in frame size anywhere along the line, undesirable things happen. I had them going at one of my employers for backup traffic. It kinda worked, but only at an MTU of 1514; any other size would make things worse than not having them at all.

    Stay away from jumbo frames.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
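The mismatch point above is worth making concrete: jumbo frames are all-or-nothing along a path, because the usable frame size collapses to the smallest MTU of any hop (and in practice a mismatch often means drops or fragmentation rather than a clean fallback, which is exactly why it's painful). A toy sketch; the hop list is made up:

```python
def effective_mtu(hop_mtus):
    """Usable path MTU is the minimum MTU of every device in between."""
    return min(hop_mtus)

# One forgotten switch at 1500 wipes out the jumbo config everywhere else:
path = [9000, 9000, 1500, 9000]  # vNIC, vSwitch, physical switch, storage
print(effective_mtu(path))       # 1500
```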
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Essendon wrote: »
    Jumbo frames are a pain in the neck, both to implement and to troubleshoot. They need to be enabled end-to-end, on everything in between. Not worth the effort, in my opinion, for minimal gains. The only traffic that may perhaps gain from jumbo frames is vMotion traffic, but then you've got multi-NIC vMotion, which yields better results. If there's a mismatch in frame size anywhere along the line, undesirable things happen. I had them going at one of my employers for backup traffic. It kinda worked, but only at an MTU of 1514; any other size would make things worse than not having them at all.

    Stay away from jumbo frames.

    In a 10Gb environment, jumbo is worthwhile, but not worth the effort on the VM traffic side. IP storage and vMotion will benefit from jumbo.

    Basically: at 1Gb, don't bother. At 10Gb, enable it on vMotion & IP storage (they should be on the same physical switches anyway).

    Of course NSX changes this somewhat.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
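Back-of-the-envelope numbers behind "minimal gains": per-frame overhead (Ethernet framing plus TCP/IP headers) shrinks as the MTU grows, but it was already small at 1500. A rough sketch that ignores offloads like TSO/LRO, which shrink the difference further in practice:

```python
ETH_OVERHEAD = 38   # 14 header + 4 FCS + 8 preamble/SFD + 12 inter-frame gap
IP_TCP_HDRS = 40    # IPv4 + TCP headers, no options

def goodput_fraction(mtu):
    """Share of wire bandwidth left for payload at a given MTU."""
    return (mtu - IP_TCP_HDRS) / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    print(mtu, round(goodput_fraction(mtu), 4))
# 1500 -> ~0.949, 9000 -> ~0.991: roughly a 4-point gain on paper
```

A few percent only matters when the link is actually saturated, which is why it pays off for 10Gb storage/vMotion traffic but rarely for ordinary VM traffic.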
  • Deathmage Banned Posts: 2,496
    Well, I use jumbo on the IP storage VLAN but not on vMotion; just a solo 1G link for vMotion with a failover per host. But I guess this answers my question about jumbo for servers on their own /27 subnet, isolated to just them, with IP routing to the primary /23 subnet for production traffic. Figured it was worth asking before doing.

    I mean, all my I/O-laden servers sit on the same host, so they 'should' stay local to the vSwitch, never leaving the host, and in that sense, after the above reading, it seems logical to use VMXNET3 NICs, since the internal vSwitch traffic would be 10G. Right now the file and SQL servers use VMXNET3 NICs and the application server uses the E1000, so I'd be interested to see what making the application server use a VMXNET3 NIC instead would do.

    Just for clarification: do VMXNET3 NICs traverse the vSwitch at 10G by default, no special triggers? Prior to the above reading I don't recall the VMXNET3 doing 10G internally from my studies, so this is enlightening. If I understand the VMXNET3 logic, it will appear as 10G in the VM and connect to the vSwitch at 10G, but will leave the vSwitch out the physical NIC at 1G; still, it would be beneficial for other VMs on the same host that use the same vSwitch, since they would all 'use' the 10G speed internally. Is this accurate?
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Jumbo frames can indeed be a pain... Only after I'd made sure all the hardware in my lab supported them did I realize that Nas4Free's NIC driver doesn't even support them on my NAS... well, there went the jumbo idea :)
    My own knowledge base made public: http://open902.com :p
  • Architect192 Member Posts: 157 ■■■□□□□□□□
    Deathmage wrote: »
    Well, I use jumbo on the IP storage VLAN but not on vMotion; just a solo 1G link for vMotion with a failover per host. But I guess this answers my question about jumbo for servers on their own /27 subnet, isolated to just them, with IP routing to the primary /23 subnet for production traffic. Figured it was worth asking before doing.

    I mean, all my I/O-laden servers sit on the same host, so they 'should' stay local to the vSwitch, never leaving the host, and in that sense, after the above reading, it seems logical to use VMXNET3 NICs, since the internal vSwitch traffic would be 10G. Right now the file and SQL servers use VMXNET3 NICs and the application server uses the E1000, so I'd be interested to see what making the application server use a VMXNET3 NIC instead would do.

    Just for clarification: do VMXNET3 NICs traverse the vSwitch at 10G by default, no special triggers? Prior to the above reading I don't recall the VMXNET3 doing 10G internally from my studies, so this is enlightening. If I understand the VMXNET3 logic, it will appear as 10G in the VM and connect to the vSwitch at 10G, but will leave the vSwitch out the physical NIC at 1G; still, it would be beneficial for other VMs on the same host that use the same vSwitch, since they would all 'use' the 10G speed internally. Is this accurate?

    Indeed. The VMs see a 10Gb NIC and will try to push data as fast as they can. They'll be throttled back by the physical NIC if it's sub-10Gb, of course.

    As for using jumbo frames, this is enabled at the switch level, not per VLAN. If your switch has the feature, you need to enable it, and then you can set your MTU at the size you want/need (up to the max of 9000). If you have more than one switch, they all need to be configured to support jumbo frames. It can be challenging to enable this in a datacenter if it wasn't done from the start.
    Current: VCAP-DCA/DCD, VCP-DCV2/3/4/5, VCP-NV 6 - CCNP, CCNA Security - MCSE: Server Infrastructure 2012 - ITIL v3 - A+ - Security+
    Working on: CCNA Datacenter (2nd exam), Renewing VMware certs...
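In other words, the 10Gb figure only survives while traffic stays on the host's vSwitch; the moment it hits the uplink, the physical NIC is the cap. A trivial sketch of that rule (note the 10Gb is just the link speed the guest reports; actual intra-host throughput is memory-bound and isn't rate-limited to it):

```python
def effective_gbps(vm_link=10.0, uplink=1.0, same_host=True):
    """VM-to-VM on one vSwitch runs at the virtual link speed;
    anything leaving the host is capped by the physical uplink."""
    return vm_link if same_host else min(vm_link, uplink)

print(effective_gbps(same_host=True))   # 10.0
print(effective_gbps(same_host=False))  # 1.0
```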