
When to implement a three-tiered design

Eildor Member Posts: 444
How big would a network need to be in order to justify having a three-tiered design? How do you work out when you need to start implementing switch blocks?

Is it invalid to have a 48-port collapsed core switch connected to 48 access layer switches, each with 48 hosts (that's 2,304 hosts)? If each access layer switch has its own VLAN to keep broadcast domains small, would the design then be valid? If not, why? Also, how would you know whether CPU utilisation or memory is going to be a bottleneck?

Thank you!

Comments

  • networker050184 Mod Posts: 11,962
    It depends on a lot of things, really; traffic volume matters more than the number of hosts.

    As for how you would know: you would monitor the network through SNMP.
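
    To make that concrete, here's a minimal sketch in Python of the arithmetic a poller does with two samples of the 64-bit ifHCInOctets interface counter; the counter values, the 300-second interval, and the helper name are all invented for illustration:

        # Sketch only: percent utilisation of a link between two SNMP samples
        # of ifHCInOctets (64-bit, so counter wrap is unlikely at gigabit rates).
        def utilisation_pct(octets_t0, octets_t1, interval_s, if_speed_bps):
            delta_bits = (octets_t1 - octets_t0) * 8        # octets -> bits
            return 100.0 * delta_bits / (interval_s * if_speed_bps)

        # Two samples taken 300 s apart on a 1 Gb/s uplink (invented numbers):
        print(utilisation_pct(1_200_000_000, 9_450_000_000, 300, 1_000_000_000))
        # -> 22.0 (percent). Trending this per interface is how you spot a
        # bottleneck before it hurts.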
    An expert is a man who has made all the mistakes which can be made.
  • Eildor Member Posts: 444
    Hmm, okay... so suppose the traffic volume was the maximum (48 Gbps); would that then be a problem? What if it was 100 Gbps? How would you know before actually implementing the network? Are there particular device specifications you need to look at when deciding which switches to use, so as not to have a problem? I'm just trying to get an idea of how devices should be selected, and what's appropriate and what isn't.

    Thank you.
  • f0rgiv3n Member Posts: 598 ■■■■□□□□□□
    One variable is traffic volume, which depends on the backplane of the core switch. The various layer 3 switches/routers out there have differing backplane throughput. Most of the older-style chassis switches (like the 6500) were built with some oversubscription in the port-to-backplane ratio. Say you have 96 gigabit ports on a chassis switch like a 6500: the backplane could actually be restricted to only 32 Gbps, so if you needed to fully utilize all 96 gigabit ports at the same time, that layer 3 switch with a 32 Gbps backplane would not work for you.
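
    As a quick sanity check on those numbers, the oversubscription ratio is just total port capacity divided by backplane capacity; a sketch using the figures from the post above:

        # Port-to-backplane oversubscription for the 6500-style example above:
        # 96 x 1 Gb/s ports feeding a 32 Gb/s backplane.
        ports, port_speed_gbps, backplane_gbps = 96, 1, 32
        oversub = ports * port_speed_gbps / backplane_gbps
        print(f"{oversub:.0f}:1 oversubscribed")   # -> 3:1 oversubscribed
        # At 3:1 all ports together can only push about a third of their
        # combined line rate: often fine at the access layer, not in a core.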

    You have to be careful, though, with some of the stats they print out there. Another stat people use to measure for a design is PPS (packets per second). The issue there is that most vendors do their testing with the smallest packet size (64 bytes), which isn't real-life traffic.
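
    To see why testing at the minimum frame size inflates PPS figures: on the wire each Ethernet frame also carries an 8-byte preamble and a 12-byte inter-frame gap, so theoretical line rate works out as in this sketch (the helper name is invented):

        # Theoretical line-rate packets-per-second for a given frame size.
        # Each frame costs frame_bytes + 8 (preamble) + 12 (inter-frame gap).
        def line_rate_pps(link_bps, frame_bytes):
            return link_bps / ((frame_bytes + 8 + 12) * 8)

        print(f"{line_rate_pps(1_000_000_000, 64):,.0f} pps")    # ~1,488,095
        print(f"{line_rate_pps(1_000_000_000, 1500):,.0f} pps")  # ~82,237
        # Real traffic mixes sit much nearer the second figure, so a datasheet
        # PPS number quoted at 64 bytes overstates what you actually need.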

    I'm reading through Top-Down Network Design right now, and it goes into great detail on how to determine the design of a network and what it needs.
  • 7of9 Member Posts: 76 ■■■□□□□□□□
    When I worked in enterprise, another consideration for network design was having the logical design bear some relationship to the physical layout. We'd often do a distribution layer for each campus building, an access layer for each floor, and then a core aggregating everything on a campus. We'd extend routing to the distribution layer unless VoIP and QoS were a very serious concern. Generally, we segmented VLANs based on traffic types, not location, so each distribution would have a VLAN for printers, another for phones, another for PCs, etc. That made things easier from a QoS perspective, since we had different traffic types on different VLANs.

    Generally, we planned out what we wanted the network to look like, which then gave us an idea of what equipment we'd need. So, if a building was very small, we'd size the distribution closet switches accordingly; a larger building would get beefier layer 3 switches.
    Working on Security+ study, then going back to re-do my Cisco Certs, in between dodging moose and riding my Harley
  • pert Member Posts: 250
    I think there are 3 major factors:
    Physical: Physical size of the campus your network is covering, number of devices that need to be connected physically, cabling issues related to the scope of various equipment rooms and how they connect, ease of adding additional switches to the network
    Logical: Amount of traffic that needs to be switched and routed, logical separations in the network, how much future growth and scalability you need to build in, how much redundancy you need
    Money: $$$$$

    Most of the time I think it just comes down to the number of devices, the total bandwidth of those devices, and expected future growth, but there are other factors. When networks get very large, people will build in a block architecture even if it's not strictly needed, simply because a standard of using blocks becomes required to manage networks past a certain size. At that size you need to build all blocks the same way: same configs, same naming and labeling schemes, etc., in order to make administration a manageable job. You can't afford the savings of building a cheaper one-off system to support XYZ, because a network that large can't be supported if quirks like that are allowed.

    At that scale, what I've typically seen is that every large equipment room has 2-8 6500s that each support a certain number of rows, and most racks have 2-4 3750s that go back to those 6500s. This will be primary/backup for control networks and the business network, with a pair of 6500s for each. The 6500s in each room will go back to some aggregation or core; they may even go back to another pair of 6500s that then go back to a pair of 7600s, an ASR/CRS router, or straight to the core router.
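
    A back-of-the-envelope version of that "devices + bandwidth + growth" sizing might look like the sketch below; every input value is invented, only the arithmetic matters:

        # Rough capacity planning: device count, per-host demand, growth headroom.
        devices = 2304                 # hosts to connect (the OP's example)
        ports_per_access_switch = 48
        avg_busy_hour_mbps = 5         # estimated per-host demand (invented)
        growth = 1.5                   # planned headroom (invented)

        access_switches = -(-devices // ports_per_access_switch)    # ceil -> 48
        offered_gbps = devices * avg_busy_hour_mbps * growth / 1000  # -> 17.3
        print(access_switches, "switches,", f"{offered_gbps:.1f} Gb/s offered")
        # If that aggregate exceeds what one collapsed-core box can switch
        # (backplane, uplink count), that's the cue to split into blocks.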

    Some people also will do 3 tiered by having their Dist/Agg switch do all the local routing, while the "core" is actually just the edge router to the outside. I've seen some 4 tiered designs as well in giant networks, where you have 2 layers of Distribution/Agg.

    Collapsed core typically works for most medium/small businesses. I feel the 3 tiered model is designed primarily to deal with issues of scale and keeping the network manageable. I don't see any advantage in using the model for networks that can be managed by one person: the financial costs are much harder to justify or get approved, and what do you really gain? Though I think the definition of what 3 tiered really is can get muddled. If you have layer 2 switches that go to a layer 3 switch, and the outside connection is a firewall that routes traffic to your ISP/DMZ/inside, is that 3 tiers? I don't think it technically is, but it's not that different from larger networks where the core is just providing the edge routing (though there will also be a firewall there as well, obviously).
  • Ciscodian Member Posts: 21 ■□□□□□□□□□
    7of9 wrote: »
    When I worked in enterprise

    I thought you served on board Voyager!
    boom boom...i'm here all week.
  • 7of9 Member Posts: 76 ■■■□□□□□□□
    Ciscodian wrote: »
    I thought you served on board Voyager!
    boom boom...i'm here all week.

    LOL!!! That is awesome. I didn't even think about the correlation... tells you how distracted I am lately!
    Working on Security+ study, then going back to re-do my Cisco Certs, in between dodging moose and riding my Harley
  • Eildor Member Posts: 444
    Thank you all so much for sharing your knowledge; much appreciated!
  • Eildor Member Posts: 444
    pert wrote: »
    Some people also will do 3 tiered by having their Dist/Agg switch do all the local routing, while the "core" is actually just the edge router to the outside. I've seen some 4 tiered designs as well in giant networks, where you have 2 layers of Distribution/Agg.

    Wouldn't that still be considered a collapsed core design though? Because even in a collapsed core design you're going to need edge routers to the outside, right?
  • pert Member Posts: 250
    It seems like the definitions of access, distribution, aggregation, core, and edge change every time I look them up, so I don't know. I tried to specify the actual gear used, so define it however you like. =D
  • malcybood Member Posts: 900 ■■■□□□□□□□
    Having a dedicated core is recommended if you have multiple distribution/aggregation switches.

    It is easier to scale if you have a dedicated core, to which multiple aggregation/distribution switches, and in turn access switches, can be added.

    I'd suggest that multi-building campus designs require a dedicated campus core with multiple distribution switches across the campus, i.e. a pair in each separate building.

    For a single building with multiple access layer stacks and only a handful of servers, you would typically have a collapsed core/distribution switch, with user switch stacks connected to it and also some kind of server hosting switches connected to it for server access, e.g. Cisco Nexus 5k/2k.