Core Switches - Redundancy / Load Balancing

mzinz Member Posts: 328
I'm getting 2 new switches for the core of our network.

Most likely getting 3560s. They will be running L3 and routing traffic between VLANs.

I plan on using VRRP for redundancy. How can I accomplish some load balancing?
_______LAB________
2x 2950
2x 3550
2x 2650XM
2x 3640
1x 2801

Comments

  • networker050184 Mod Posts: 11,962
    What kind of load balancing are you looking for?
    An expert is a man who has made all the mistakes which can be made.
  • mzinz Member Posts: 328
    networker050184 wrote: »
    What kind of load balancing are you looking for?

    Per-VLAN would be fine, if it's possible. I think I was confused about how this would work: with HSRP, all devices share the same default gateway, so the pair looks like one device.

    Would it be possible to configure two instances of HSRP where half of the VLANs point to one IP and the other half point to another, with each switch owning one virtual IP?
  • ColbyG Member Posts: 1,264
    These are your core switches? Why are your core switches the DGs? Is it like a collapsed core design?

    Why are you worried about load balancing across them? I wouldn't concern myself with it all that much. Just alternate which switch is the active gateway on each VLAN.
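    Alternating the active gateway per VLAN is just a matter of flipping the HSRP priorities on each SVI. A rough sketch of what that might look like on one switch (VLAN numbers and addressing are made up for illustration):

    ```
    ! Hypothetical sketch: VLANs and addressing are assumptions.
    ! Switch A is active for VLAN 10, standby for VLAN 20.
    interface Vlan10
     ip address 10.0.10.2 255.255.255.0
     standby 10 ip 10.0.10.1
     standby 10 priority 110
     standby 10 preempt
    !
    interface Vlan20
     ip address 10.0.20.2 255.255.255.0
     standby 20 ip 10.0.20.1
     standby 20 priority 90
     standby 20 preempt
    ```

    Switch B would mirror this with the priorities reversed, so it carries VLAN 20 in steady state and picks up VLAN 10 only on a failure.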
  • networker050184 Mod Posts: 11,962
    ColbyG wrote: »
    Why are you worried about load balancing across them? I wouldn't concern myself with it all that much. Just alternate which switch is the active gateway on each VLAN.


    Same here. We usually just make one switch the primary for all VLANs and give it the lowest IGP cost. The other switch acts as backup only. The more predictable the traffic, the easier it is to troubleshoot.
  • mzinz Member Posts: 328
    ColbyG wrote: »
    These are your core switches? Why are your core switches the DGs? Is it like a collapsed core design?

    Why are you worried about load balancing across them? I wouldn't concern myself with it all that much. Just alternate which switch is the active gateway on each VLAN.

    Yes, we are doing collapsed core. Small infrastructure.
  • mzinz Member Posts: 328
    networker050184 wrote: »
    Same here. We usually just make one switch the primary for all VLANs and give it the lowest IGP cost. The other switch acts as backup only. The more predictable the traffic, the easier it is to troubleshoot.

    I'll take the advice. The amount of traffic going through will be very low compared to what a 3750 can handle. I'll do HSRP active/standby.
  • jason_lunde Member Posts: 567
    For a collapsed core I love a 3750 stack. You can dual-home your access-layer switches to the different stack members and bundle the links into EtherChannels. You can then do some load balancing by analyzing your traffic and adjusting the EtherChannel load-balancing algorithm. It has worked pretty well for me in the past, and a single switch failure in the stack still won't bring down your access switches.
  • mzinz Member Posts: 328
    jason_lunde wrote: »
    For a collapsed core I love a 3750 stack. You can dual-home your access-layer switches to the different stack members and bundle the links into EtherChannels. You can then do some load balancing by analyzing your traffic and adjusting the EtherChannel load-balancing algorithm. It has worked pretty well for me in the past, and a single switch failure in the stack still won't bring down your access switches.

    How do you set up the dual homing? I was under the impression that a 3750 stack appears as a single logical switch to other devices.

    Could you provide a config snippet?
  • jason_lunde Member Posts: 567
    mzinz wrote: »
    How do you setup the dual homing? I was under the impression that stacking the 3750s appeared as a single logical switch to other devices.

    Could you provide a config snippet?

    Just take two ports on your access-layer switch and attach one to switch 1 in the stack and the other to switch 2. Bundle both sides into an EtherChannel and you get a multi-chassis EtherChannel. It's just a trunk, but it gives you a bit of resiliency at the core. Pair that with a decent EtherChannel load-balancing algorithm for your load-balancing requirement. One of the best things is there's no STP to worry about between the access and core layers.
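    Since you asked for a snippet, here's a rough sketch (interface numbers are made up for illustration):

    ```
    ! On the access switch: two uplinks, one to each stack member.
    interface range GigabitEthernet0/1 - 2
     switchport mode trunk
     channel-group 1 mode on
    !
    ! On the 3750 stack: Gi1/0/1 sits on stack member 1 and
    ! Gi2/0/1 on stack member 2; both land in the same
    ! cross-stack bundle.
    interface range GigabitEthernet1/0/1 , GigabitEthernet2/0/1
     switchport mode trunk
     channel-group 1 mode on
    !
    ! Global knob to pick the hash that spreads flows across links.
    port-channel load-balance src-dst-ip
    ```

    Use "mode active" instead of "mode on" if you'd rather negotiate the bundle with LACP than bring it up statically.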
  • mzinz Member Posts: 328
    jason_lunde wrote: »
    Just take two ports on your access-layer switch and attach one to switch 1 in the stack and the other to switch 2. Bundle both sides into an EtherChannel and you get a multi-chassis EtherChannel. It's just a trunk, but it gives you a bit of resiliency at the core. Pair that with a decent EtherChannel load-balancing algorithm for your load-balancing requirement. One of the best things is there's no STP to worry about between the access and core layers.

    Wow, this is really cool.

    Originally we were planning on getting a couple of 3560s at the core and running a two-port EtherChannel from each access switch to each core switch, with HSRP active/standby on top. It seems like the 3750 setup would be just as resilient but use half as many total ports, since you'd only need one connection to each stack member yet still get 2 Gbps... Am I right?

    Thanks for the help, I'm new to stacking.
  • ColbyG Member Posts: 1,264
    mzinz wrote: »
    you would only need one connection to each switch, yet still accomplish 2 Gbps... Am I right?

    Yep, you're right. A stack acts like a chassis switch (4500, 6500) where each switch in the stack is like a blade in a chassis.
  • GT-Rob Member Posts: 1,090
    You can do some makeshift load balancing by running two instances of HSRP on each VLAN, then pointing different parts of your network at each.


    For example...

    On VLAN 10, switch one is primary for VIP 10.0.0.1 and switch two is primary for VIP 10.0.0.2; then some clients use 10.0.0.1 as their gateway and others use 10.0.0.2.

    The problem with this is it isn't pretty and doesn't offer true load balancing, but it's something. Either way, HSRP is the way to go for failover on gateway IPs.
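    On one switch that could look something like this (the VIPs follow the example above; the VLAN, real interface address, and priorities are assumptions):

    ```
    ! Switch one: active for group 1 (.1), standby for group 2 (.2).
    interface Vlan10
     ip address 10.0.0.3 255.255.255.0
     standby 1 ip 10.0.0.1
     standby 1 priority 110
     standby 1 preempt
     standby 2 ip 10.0.0.2
     standby 2 priority 90
     standby 2 preempt
    ```

    Switch two mirrors the priorities, so it answers for 10.0.0.2 in steady state and takes over 10.0.0.1 only if switch one dies.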


    But if you have the budget for 3750s instead, I really suggest a stack (or two) of those, as it's a much better solution than the 3560s.