
L3 Switch: How to understand backplanes

Deathmage Banned Posts: 2,496
Hey guys,

So I'm curious if someone could explain backplanes. I always thought I understood it; now I'm wondering if I ever did.

Like, I'm trying to understand: if I put, say, a Cisco 3750E-24-T-E or a Dell N3024 at the collapsed core of a remote location for an SRM site, will the backplanes even make a difference?

Say the 3750E has a 160 Gbps backplane and an equivalent Dell N3024 has a 212 Gbps backplane (both L3 switches): will it even matter when the L2 connection carrying traffic over the wire is two bonded 10G links?

The vendor e-peen battle of "my backplane is bigger than yours" makes my head spin; when I ask why theirs is higher, they won't give me a straight answer.

I'm actually wondering now if it would just be easier at the SRM site to plug the VMware hosts into the collapsed core, like I do at the primary site, unless someone can explain this L3 backplane e-peen war to me. ;)

Comments

  • hurricane1091 Member Posts: 919
    I've never heard of the term collapsed core until now, and it would seem that we're not doing that at my place of work. I'm not a VMware or server person at all, so I can't really help here. But my understanding is this: Switching takes place in hardware, and goes much faster than routing (which takes place in software). If you have 20 Gb of total connectivity running into the Cisco device or this Dell device and that is it, I don't see the backplane mattering, right?

    I hope someone elaborates further; I am curious to learn. I do know that routers have limited throughput, though. Case in point: I configured two ASRs last week that are going in this week because they have a higher throughput than the ISRs in place.
  • networker050184 Mod Posts: 11,962
    Backplane/fabric is basically the hardware that provides connectivity between ports. Say you have ten 1G ports: you'd need a 10G backplane (or 20G in marketing speak, since for the most part they count full duplex) for all of those ports to forward at line rate at any point in time. So does it matter in your situation? Probably not. Either of those backplane sizes would likely meet your needs, but you also have to consider things like ASIC mapping, how many ports are used and how heavily, etc. It can get complex. I'd suggest starting with learning how a general "switching fabric" works and going from there into your specific hardware.

    These days you can almost always get "non-blocking" backplanes in smaller switches, which means the backplane has enough capacity to theoretically forward all ports at line rate at the same time. Of course, that isn't always how the real world works. It's usually going to be multiple ports forwarding towards a single port: think 10 servers all forwarding towards an uplink port, for example. So take the fabric figure with a grain of salt depending on your traffic patterns.
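    To put rough numbers on that, here is a minimal back-of-the-envelope sketch. The 24 x 1G + 2 x 10G port layout is an assumption for illustration; the fabric figures are the ones quoted in the original post, counted the same full-duplex way the marketing does.

        # Back-of-the-envelope fabric check (Python sketch).
        # Assumption: 24 x 1G access ports plus 2 x 10G uplinks per switch.
        # Fabric figures are the marketing numbers quoted above (full duplex).

        def required_fabric_gbps(port_speeds_gbps):
            """Full-duplex capacity needed for every port to run at line rate at once."""
            return 2 * sum(port_speeds_gbps)  # x2 because both directions are counted

        ports = [1] * 24 + [10] * 2            # assumed port layout
        needed = required_fabric_gbps(ports)   # = 88 Gbps

        for name, fabric_gbps in [("Cisco 3750E (quoted)", 160), ("Dell N3024 (quoted)", 212)]:
            verdict = "non-blocking" if fabric_gbps >= needed else "oversubscribed"
            print(f"{name}: {fabric_gbps} Gbps fabric vs {needed} Gbps worst case -> {verdict}")

    Either way, both fabrics are several times larger than anything a pair of bonded 10G uplinks could push, which is why the marketing gap between 160 and 212 rarely matters at this size.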

    Then you get into chassis based switches and go further down the rabbit hole.

    As far as connecting hosts to a collapsed core, I really try to stay away from that, as I like my core/dist to be free of hosts. Of course, it always depends on design, budget, goals, etc.
    hurricane1091 wrote: "Switching takes place in hardware, and goes much faster than routing (which takes place in software)."

    Usually everything is done in hardware on modern gear once you get out of the very bottom of the entry level products. L2/L3 forwarding is basically the same rate.
    An expert is a man who has made all the mistakes which can be made.
  • Deathmage Banned Posts: 2,496
    I'm humbled by your response. :) ... that explains it much better.

    I'm probably going to lean towards a Cisco 2811 as the edge router, which currently has a fiber uplink to the ISP CSU, and then use its gigabit connections to two 24-port Cisco 3750E's. The 3750E's will form a StackWise collapsed core/distro with two bonded 10Gs down to the two Cisco 2960G's in the super stack, which will then each have a 450 ft bonded twin-fiber uplink out to the two IDFs.
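    If it helps sanity-check the uplinks in that design, here is a rough sketch. The 48-port count on the 2960G's is an assumption for illustration; the bonded 2 x 10G pair back to the 3750E stack is from the design above.

        # Rough access-layer oversubscription check (Python sketch).
        # Assumption: two 48-port gigabit 2960G's feeding the bonded 2 x 10G uplink.

        access_gbps = 2 * 48 * 1    # assumed access ports on the 2960G stack
        uplink_gbps = 2 * 10        # bonded 10G pair up to the 3750E collapsed core

        ratio = access_gbps / uplink_gbps
        print(f"Worst-case oversubscription: {ratio:.1f}:1")  # 4.8:1

    Anything under 5:1 is comfortable for an access layer, and in practice the 2811 at the edge will likely be the bottleneck long before any switch fabric is.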

    I was just torn between using a Dell switch or a Cisco switch because the backplanes confused me. Right now the core/distro at this remote location is a 3550, and I was wigging out over why the bandwidth was so slow... the previous IT person connected the edge router and the access layer, both at 1000 Mbps, to a 100 Mbps L3 switch. /facepalm! This week is my first time being at the remote location in my tenure here.