Bonding (5) 1G links vs. a solo 10G

Deathmage Banned Posts: 2,496
Hey guys,

So now that I finally have my port-channel working correctly: after months of thinking I'd set something up wrong, it turned out a Dell firmware update was needed. I spent months working with Dell on the matter (I thought I was doing the networking wrong or something, and was blowing up my home lab trying to figure it out), only for them to tell me in the end that they'd made a firmware update for my exact problem. They were basically stalling!!!! I'm now pondering whether a single 10G converged NIC would be better than my current (5)-port gigabit port-channel from each ESXi host to the L2 switching layer.

There's all this hype about 10G convergence, even in the CompTIA Storage+ study material and exam, but in my mind, even at 1 Gb, if I had say 5 or more connections bonded together, the load balanced across multiple ports would still be faster than pushing it all out one port. On the flip side, I have (4) quad-port cards per server, and the port-channel load is split across all 4 of them on top of the onboard quad; I didn't want a SPOF.

As of this morning, nearly every connection into the VMware cluster has improved in performance by a factor of 5. All these months I was bleeding performance on the cluster, thinking I needed to tune the snot out of it on the premise that the networking side was fine. Now that it's been confirmed it was the networking, everything else is blazing fast. I did learn how to tune other parts along the way, but I feel better knowing it wasn't me!

My only remaining question: does the same concept of balancing the load across 5 NIC ports equate to better performance than one 10G port?

Comments

  • iBrokeIT Member Posts: 1,318 ■■■■■■■■■□
    Deathmage wrote: »
    My only remaining question: does the same concept of balancing the load across 5 NIC ports equate to better performance than one 10G port?

    I got halfway through typing up a post explaining this, but then I remembered Jason Nash has a great series on Pluralsight called VMware vSphere Optimize & Scale, specifically the Storage & Networking parts, that you should probably watch since I know you have a sub, and he can explain it better than I can. :)

    "You aren't going to get more than 1 Gb of throughput for any single flow across 1 Gb links load-balanced with IP hash," which is why 10 Gb is the way to go if you need more bandwidth per connection.
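    A rough sketch of why that's true: an IP-hash policy maps each source/destination IP pair to one uplink and keeps it there, so a single flow can never use more than one link's 1 Gb/s. This is a simplified stand-in for illustration, not vSphere's exact hash function.

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick an uplink the way an IP-hash teaming policy does (simplified):
    combine the two addresses and take the result modulo the link count."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

# The same src/dst pair always hashes to the same uplink, so every
# packet of that flow rides one 1 Gb link regardless of how many
# links are in the bundle. Different pairs spread across the bundle.
print(ip_hash_uplink("10.0.0.5", "10.0.0.20", 5))
print(ip_hash_uplink("10.0.0.5", "10.0.0.20", 5))  # same uplink every time
print(ip_hash_uplink("10.0.0.5", "10.0.0.21", 5))  # a different pair may land elsewhere
```

    So the 5-port bundle raises *aggregate* throughput across many flows, but a lone vMotion or storage stream still tops out at 1 Gb/s; one 10G link lifts that per-flow ceiling.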
    2019: GPEN | GCFE | GXPN | GICSP | CySA+ 
    2020: GCIP | GCIA 
    2021: GRID | GDSA | Pentest+ 
    2022: GMON | GDAT
    2023: GREM  | GSE | GCFA

    WGU BS IT-NA | SANS Grad Cert: PT&EH | SANS Grad Cert: ICS Security | SANS Grad Cert: Cyber Defense Ops | SANS Grad Cert: Incident Response
  • Deathmage Banned Posts: 2,496
    Kudos, will definitely give that a gander this evening.

    Dealing with a user's PC that opened a funky email with a virus in it, just as our replacement AV program is on order too..... Symantec EP truly s*cks!
  • Deathmage Banned Posts: 2,496
    Gee, look at this: a properly working port-channel does wonders for overall cluster performance. I've got these babies purring so well the servers aren't even making the I/O twitch...