Improving network performance with switched network re-design

Hey guys,

A company currently has 9 switches connected together in a chain, like below. Let's assume each is connected with a single trunk link and that the switches are 3560s.

flat.jpg

They are experiencing poor network performance across the switched network and are putting it down to the flat design.

They have two 3750G-12S switches that they want to connect to each of the 9 switches in order to improve performance. Something like the below.

layered.jpg

As well as the redundancy created by the second 3750, they would like to load balance the traffic between the two switches instead of only one being utilised and the other sitting there as backup.

I'm looking for suggestions on how to accomplish the load balancing, cheers.

I guess the suggestions would be very general as there isn't much detail on how the network is currently configured. Will there be end-to-end VLANs or VLANs local to each switch, etc.?
"There are 3 types of people in this world, those who can count and those who can't"

Comments

  • mrblackmamba343 Inactive Imported Users Posts: 136
    The second design might work. Have they considered any higher-end switches at all? 4500 or 6500?


    Plus, the first design was experiencing problems because some of the switches did not have a direct connection to the root bridge. Traffic had to be passed through a couple more switches before it got to the root. A lot of people ignore root bridge selection when they configure a switched network, but if the wrong switch becomes the root, your high-end switches will not perform well.

    Make the core switch the root and the backup switch the secondary root.
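
    A minimal sketch of that on the two 3750s (the VLAN range is an assumption; adjust to the VLANs actually in use):

    ```
    ! On the primary 3750 (intended root for all VLANs)
    spanning-tree vlan 1-4094 root primary
    !
    ! On the second 3750 (takes over as root if the primary fails)
    spanning-tree vlan 1-4094 root secondary
    ```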
  • Forsaken_GA Member Posts: 4,024
    That first design is... horrid.

    The second design would be much better.

    As far as utilizing both switches..... the easy way is to use GLBP, but that's not going to happen with 3750s acting as your distribution switches. The only redundancy protocol you can use is HSRP, so you're only going to have one switch forwarding traffic unless you do a little load balancing voodoo.

    As far as load balancing goes... about the only option you have is to use two HSRP groups, one with the first switch as the master of group 1, and one with the second switch as the master of group 2. Then change half your hosts (or however many you want) over to use the virtual IP of the second HSRP group as their default gateway.
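
    A rough sketch of that on one SVI, with made-up addresses and group numbers:

    ```
    ! Switch 1 -- active for group 1, standby for group 2
    interface Vlan10
     ip address 10.0.10.2 255.255.255.0
     standby 1 ip 10.0.10.1
     standby 1 priority 110
     standby 1 preempt
     standby 2 ip 10.0.10.254
     standby 2 priority 90
    !
    ! Switch 2 -- the mirror image
    interface Vlan10
     ip address 10.0.10.3 255.255.255.0
     standby 1 ip 10.0.10.1
     standby 1 priority 90
     standby 2 ip 10.0.10.254
     standby 2 priority 110
     standby 2 preempt
    ```

    Half the hosts get 10.0.10.1 as their gateway, the other half 10.0.10.254.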

    If it's in the budget, getting a pair of 4500's and doing GLBP would be the way to go.
  • 4E6564 Member Posts: 32 ■■□□□□□□□□
    Well, GLBP would only work from your Dist switches up to your router, unless you are running Layer 3 on the Dist switches (and upgrade them).

    If you want to keep your switches on Layer 2, you could set one of the Dist switches to be the Root Bridge for half of your VLANs, and the other Dist switch to be the Root Bridge for the remaining VLANs.
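
    Sketched with four example VLANs (10/20/30/40 are just placeholders):

    ```
    ! Dist 1 -- root for VLANs 10 and 30, backup root for 20 and 40
    spanning-tree vlan 10,30 root primary
    spanning-tree vlan 20,40 root secondary
    !
    ! Dist 2 -- the opposite
    spanning-tree vlan 20,40 root primary
    spanning-tree vlan 10,30 root secondary
    ```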


    I would personally prefer the Dist switches running Layer 3; the inter-VLAN traffic will be much faster than going up to a router (RoAS). You could then use GLBP (with an upgrade), or HSRP and have half of your VLANs use Dist 1 as the active, etc.
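
    If the Dist switches did end up supporting GLBP, the SVI config might look roughly like this (addressing is hypothetical):

    ```
    interface Vlan10
     ip address 10.0.10.2 255.255.255.0
     glbp 1 ip 10.0.10.1
     glbp 1 priority 110
     glbp 1 preempt
     glbp 1 load-balancing round-robin
    ```

    All hosts keep the same gateway IP; GLBP answers their ARP requests with the virtual MACs of different forwarders to spread the load.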
  • Forsaken_GA Member Posts: 4,024
    4E6564 wrote: »
    Well GLBP would only work from your Dist switches up to your router, unless you are running Layer 3 on the Dist switches (and upgrade them).

    If you want to keep your switches on Layer 2, you could set one of the Dist switches to be the Root Bridge for half of your VLANs, and the other Dist switch to be the Root Bridge for the remaining VLANs.

    I would personally prefer the Dist switches running Layer 3; the inter-VLAN traffic will be much faster than going up to a router (RoAS). You could then use GLBP (with an upgrade), or HSRP and have half of your VLANs use Dist 1 as the active, etc.

    With a topology that flat, I'm betting they don't have much VLAN separation, so load balancing over Layer 2 like that would necessitate a renumbering to break everything out into VLANs, and that's a huge pain in the ass (it's probably the broadcast traffic that's killing them right now). He'd be much better off establishing the distro switches at Layer 3 and then either migrating hosts into new VLANs on a case-by-case basis and renumbering them slowly, or just establishing a VLAN policy for each new host that's brought up and letting things migrate over time (and that may be his only viable choice if renumbering the existing hosts is unfeasible).
  • mikej412 Member Posts: 10,086 ■■■■■■■■■■
    Spanning Tree Protocol Problems and Related Design Considerations - Cisco Systems
    Another issue that is not well known relates to the diameter of the bridge network. The conservative default values for the STP timers impose a maximum network diameter of seven. This maximum network diameter restricts how far away from each other bridges in the network can be. In this case, two distinct bridges cannot be more than seven hops away from each other. Part of this restriction comes from the age field that BPDUs carry.

    While there may be a "quick" configuration solution to their poor network performance, I'd still add those switches.
    :mike: Cisco Certifications -- Collect the Entire Set!
  • ConstantlyLearning Member Posts: 445
    Looks like HSRP would be the way to go.

    Let's say there are 10 VLANs.
    You could create standby groups for each VLAN, assign one of the distribution switches as the active for half the VLANs and the other distribution switch as the active for the rest.
    For the hosts in each VLAN you would assign the respective virtual IP as the default gateway.
    This would enable failover and load sharing.
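
    On Dist 1 that might look something like this (addresses are placeholders; Dist 2 would mirror it with the priorities swapped):

    ```
    ! Active for VLAN 10
    interface Vlan10
     ip address 192.168.10.2 255.255.255.0
     standby 10 ip 192.168.10.1
     standby 10 priority 110
     standby 10 preempt
    !
    ! Standby for VLAN 20
    interface Vlan20
     ip address 192.168.20.2 255.255.255.0
     standby 20 ip 192.168.20.1
     standby 20 priority 90
    ```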


    Considering all the switches are Layer 3 capable, you could have Layer 3 EtherChannels between all switches, with routing protocols running over them, for faster convergence and higher-bandwidth links.
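
    A Layer 3 EtherChannel between two of the switches could be sketched like this (the interface numbers and the /30 addressing are assumptions):

    ```
    interface range GigabitEthernet0/1 - 2
     no switchport
     no ip address
     channel-group 1 mode active
    !
    interface Port-channel1
     no switchport
     ip address 10.255.0.1 255.255.255.252
    ```

    (mode active = LACP; the far end would get 10.255.0.2.)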


    I've set up a simple HSRP lab at the moment: a 2950 connected to two 3550s, VLAN 10 on all switches, interface VLAN 10 on the two 3550s. I have a 2611 router connected to the 2950 for telnetting and pinging from. I currently have the hello timers set to 200ms and the dead timers set to 800ms. When I run an extended ping from the 2611 to the VIP and reload the active switch, I lose one packet. Has anyone configured the timers as low as possible in a production network? Were there any issues with this?
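
    For reference, those lab timers would be configured something like this (group number assumed):

    ```
    interface Vlan10
     standby 1 ip 192.168.10.1
     standby 1 timers msec 200 msec 800
     standby 1 preempt delay minimum 60
    ```

    The preempt delay is optional, but it stops a freshly reloaded switch from taking back the active role before its uplinks have converged.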

    Cheers.
    "There are 3 types of people in this world, those who can count and those who can't"