
Cat6509 question

mikearama Member Posts: 749
Hey Techies.

We have a 6509 with dual sups in our UAT environment and another in our HA-Prod environment, but I'm now putting together the design to replace our core 4500's with a pair of 09's. The question I have is one I can't find a definitive answer to online.

Picture a high-avail pair of 09's, etherchannelled together. My thinking is to dual-sup both, but then I wondered... if I single-sup each 09 instead, and the sup in one of the chassis fails, does the chassis with the failed sup continue to operate at layer 2? In other words, can I expect all traffic received on the chassis with the failed sup to automatically get carried over to the running 09 to be routed?

I just want to confirm that an 09 continues to operate as a switch, and doesn't completely shut down, when the sup disappears.

Thanks,
Mike
There are only 10 kinds of people... those who understand binary, and those that don't.

CCIE Studies: Written passed: Jan 21/12 Lab Prep: Hours reading: 385. Hours labbing: 110

Taking a time-out to add the CCVP. Capitalizing on a current IPT pilot project.

Comments

  • chrisone Member Posts: 2,278 ■■■■■■■■■□
    It depends on what fails on the 6500. If it's the sup card that goes out, then yes, the switch will still function with the help of the redundant supervisor card in your secondary 6500. You will have to set up failover/redundancy between both supervisor cards and tie them together with either 1Gb or 100Mb interfaces on the sups you are using. Some high-end sups have 10Gb uplinks between sups.

    Hope this helps...
    Certs: CISSP, EnCE, OSCP, CRTP, eCTHPv2, eCPPT, eCIR, LFCS, CEH, SPLK-1002, SC-200, SC-300, AZ-900, AZ-500, VHL:Advanced+
    2023 Cert Goals: SC-100, eCPTX
  • mikearama Member Posts: 749
    Yep, fabulous. Thanks for the info. And it will definitely be 10Gb uplinks.

    Any chance you've put any links to information on this in your Favourites? I could stand to do some reading on setting up supervisor failover.
    There are only 10 kinds of people... those who understand binary, and those that don't.

    CCIE Studies: Written passed: Jan 21/12 Lab Prep: Hours reading: 385. Hours labbing: 110

    Taking a time-out to add the CCVP. Capitalizing on a current IPT pilot project.
  • chrisone Member Posts: 2,278 ■■■■■■■■■□
    Sure, here is the BIBLE of 6500 configuration guides. Free from Cisco.

    Catalyst 6500 Series Software Configuration Guide, 8.7 [Cisco Catalyst 6500 Series Switches] - Cisco Systems

    Download the PDF and look for "Configuring Redundancy"; all your answers are in there :)

    Sorry, I misunderstood your question earlier. I think the answer to your question may be HSRP or GLBP, which will tie your 6500s into one logical core. Even if a supervisor card goes out, at least one will be up and running, and since traffic from your LAN points at the HSRP or GLBP virtual IP, the second 6500 will take over. I would prefer GLBP, as this high-availability protocol will load-balance traffic to both 6500s; HSRP will leave one 6500 sitting there twiddling its thumbs. Sorry if I confused you. (A quick config sketch follows below.)
    Certs: CISSP, EnCE, OSCP, CRTP, eCTHPv2, eCPPT, eCIR, LFCS, CEH, SPLK-1002, SC-200, SC-300, AZ-900, AZ-500, VHL:Advanced+
    2023 Cert Goals: SC-100, eCPTX
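
    To make the GLBP idea above concrete, here is a minimal IOS sketch of one server SVI on each core; the VLAN number, addresses, and priority are hypothetical:

      ! Core1 - hypothetical server VLAN 10, shared virtual gateway 10.1.10.1
      interface Vlan10
       ip address 10.1.10.2 255.255.255.0
       glbp 1 ip 10.1.10.1
       glbp 1 priority 110
       glbp 1 preempt
       glbp 1 load-balancing round-robin
      !
      ! Core2 - same group and virtual IP, default priority
      interface Vlan10
       ip address 10.1.10.3 255.255.255.0
       glbp 1 ip 10.1.10.1
       glbp 1 preempt

    The active virtual gateway answers ARP for 10.1.10.1 with the virtual MACs of both forwarders, spreading hosts across the pair; with HSRP, only the active router would forward.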
  • jason_lunde Member Posts: 567
    Dude, check out the VS-S720-10G sup... it supports Virtual Switching System (VSS), which would be perfect for you. We have a pair of 6509s in production now with those in, and we just got a pair in the lab to test the changeover to VSS. Cool technology, and great for redundancy.
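
    For a rough idea of what the conversion involves, here is a minimal sketch of a VSS setup; the domain number and VSL port numbers are made up for illustration:

      ! On chassis 1 (use "switch 2" and port-channel 2 on chassis 2)
      switch virtual domain 100
       switch 1
      !
      ! Dedicate sup 10GE links as the virtual switch link (VSL)
      interface port-channel 1
       switch virtual link 1
      interface range tenGigabitEthernet 5/4 - 5
       channel-group 1 mode on
      !
      ! Then, from the enable prompt, convert; both chassis reload and
      ! come back as one logical switch:
      switch convert mode virtual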
  • Turgon Banned Posts: 6,308 ■■■■■■■■■□
    All useful stuff for those who never work with 6509's but would like to one day. We should sticky threads like this if we can.
  • chrisone Member Posts: 2,278 ■■■■■■■■■□
    I like VSS as well; it's newer technology I haven't really gotten a chance to play with yet.
    Certs: CISSP, EnCE, OSCP, CRTP, eCTHPv2, eCPPT, eCIR, LFCS, CEH, SPLK-1002, SC-200, SC-300, AZ-900, AZ-500, VHL:Advanced+
    2023 Cert Goals: SC-100, eCPTX
  • mikearama Member Posts: 749
    Thanks guys... tonnes of good stuff in there. And fab link Chris... right into the favourites.

    My concern was that many (read: most) of our servers are not multihomed, so they will connect to either Core1 or Core2, but not both, and not to a 3750 stack that has redundant links to both. So if a server is plugged into C1 and the sup fails, I wanted to make sure that the traffic would still pass at layer 2 to C2 to be routed. I'm most familiar with HSRP, so I'll set that up (a quick sketch follows below)... I've just never had servers directly connected to the cores before. In our current config, we run 5 server stacks of 3750's, all connected redundantly to both 4500's, and HSRP is ideal in that situation. I feared it might be a little different when directly connected, as there is no redundant connection.

    So, lots of reading to do. Thanks again.
    There are only 10 kinds of people... those who understand binary, and those that don't.

    CCIE Studies: Written passed: Jan 21/12 Lab Prep: Hours reading: 385. Hours labbing: 110

    Taking a time-out to add the CCVP. Capitalizing on a current IPT pilot project.
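
    For reference, a bare-bones HSRP pair for a server SVI might look like this; the VLAN and addresses are hypothetical:

      ! Core1 - active gateway for hypothetical server VLAN 20
      interface Vlan20
       ip address 10.1.20.2 255.255.255.0
       standby 1 ip 10.1.20.1
       standby 1 priority 110
       standby 1 preempt
      !
      ! Core2 - standby for the same group
      interface Vlan20
       ip address 10.1.20.3 255.255.255.0
       standby 1 ip 10.1.20.1
       standby 1 preempt

    Note that HSRP only protects the default gateway; as dtlokee hints below, a single-homed server on a chassis whose only sup has died is off the network regardless, because the whole chassis stops forwarding.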
  • dtlokee Member Posts: 2,378 ■■■■□□□□□□
    mikearama wrote: »
    Hey Techies.

    Picture a high-avail pair of 09's, etherchannelled together. My thinking is to dual-sup both, but then I wondered... if I single-sup each 09 instead, and the sup in one of the chassis fails, does the chassis with the failed sup continue to operate at layer 2? In other words, can I expect all traffic received on the chassis with the failed sup to automatically get carried over to the running 09 to be routed?


    I must be missing something here. You are asking whether a 6509 whose single supervisor has failed will still function?
    The only easy day was yesterday!
  • joshgibson82 Member Posts: 80 ■■□□□□□□□□
    mikearama wrote: »
    Thanks guys... tonnes of good stuff in there. And fab link Chris... right into the favourites.

    My concern was that many (read: most) of our servers are not multihomed, so they will connect to either Core1 or Core2, but not both, and not to a 3750 stack that has redundant links to both. So if a server is plugged into C1 and the sup fails, I wanted to make sure that the traffic would still pass at layer 2 to C2 to be routed. I'm most familiar with HSRP, so I'll set that up... I've just never had servers directly connected to the cores before. In our current config, we run 5 server stacks of 3750's, all connected redundantly to both 4500's, and HSRP is ideal in that situation. I feared it might be a little different when directly connected, as there is no redundant connection.

    So, lots of reading to do. Thanks again.

    I gotta ask...why would you plug a server into a core router? That is the access layer. Thanks
    Josh, CCNP CWNA
  • Forsaken_GA Member Posts: 4,024
    I gotta ask...why would you plug a server into a core router? That is the access layer. Thanks

    Yeah, I didn't want to be the one to say it, hehe. When I got to my current job, we had a similar situation. The senior net eng and I fixed that situation *really* quick. Now, the only servers we have plugged directly into our core are our quagga box for route reflector duties, and our DNS servers, which were doing anycast for
  • CChN Member Posts: 81 ■■□□□□□□□□
    chrisone wrote: »
    I like VSS as well; it's newer technology I haven't really gotten a chance to play with yet.

    If you ever have a chance to play with it, thoroughly read the docs first! I pulled my hair out for a week over this thing. If my memory serves me correctly, it only supports 6700-series line cards, so be prepared to migrate all your existing connections. And under one of the IOS releases... I think it was SXI3... some of the documented commands don't take.

    This guy has a decent tutorial if anyone is interested:

    Should Have Gone With Cisco Blog Archive Cisco Virtual Switching Systems (VSS)
    RFCs: the other, other, white meat.
  • chrisone Member Posts: 2,278 ■■■■■■■■■□
    CChN wrote: »
    If you ever have a chance to play with it, thoroughly read the docs first! I pulled my hair out for a week over this thing. If my memory serves me correctly, it only supports 6700-series line cards, so be prepared to migrate all your existing connections. And under one of the IOS releases... I think it was SXI3... some of the documented commands don't take.

    This guy has a decent tutorial if anyone is interested:

    Should Have Gone With Cisco Blog Archive Cisco Virtual Switching Systems (VSS)

    Awesome, thanks! I will definitely check it out. Hey, correct me if I am wrong, but doesn't VSS eliminate Spanning Tree? I remember reading up on that some time ago.
    Certs: CISSP, EnCE, OSCP, CRTP, eCTHPv2, eCPPT, eCIR, LFCS, CEH, SPLK-1002, SC-200, SC-300, AZ-900, AZ-500, VHL:Advanced+
    2023 Cert Goals: SC-100, eCPTX
  • jason_lunde Member Posts: 567
    chrisone wrote: »
    Awesome, thanks! I will definitely check it out. Hey, correct me if I am wrong, but doesn't VSS eliminate Spanning Tree? I remember reading up on that some time ago.

    Ya, in between the layers that it services. Because it basically turns two 6509's into one logical device, it eliminates spanning tree and any first-hop redundancy protocol that was previously being used at that layer (usually the dist or core). The other cool thing it allows is multi-chassis EtherChannel, another pretty cool technology. As I said in a previous post, we are setting this up in our lab now... I will try to post any cool results we see somewhere in the forum.
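
    As an illustration of the multi-chassis EtherChannel idea, here is a minimal sketch; the interface and group numbers are invented. One port-channel on the VSS pair spans both physical chassis, and the downstream switch just sees an ordinary EtherChannel:

      ! On the VSS pair: members live in different chassis (switch 1 and 2)
      interface range TenGigabitEthernet1/1/1 , TenGigabitEthernet2/1/1
       channel-group 10 mode active   ! LACP
      interface Port-channel10
       switchport
       switchport trunk encapsulation dot1q
       switchport mode trunk
      !
      ! On the downstream 3750 stack: a normal cross-member EtherChannel
      interface range GigabitEthernet1/0/49 , GigabitEthernet2/0/49
       channel-group 10 mode active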
  • mikearama Member Posts: 749
    I gotta ask...why would you plug a server into a core router? That is the access layer. Thanks

    Good question, this one. The consensus of our architect and IS Manager is that for the amount of routing our internal cores will do, it would be a huge waste of resources/cycles NOT to use the 6500's as switches as well. We run a collapsed core, and currently don't employ distribution switches... our servers connect to 5 3750 stacks, directly and redundantly connected to the cores.

    The plan to move from 4500's to 6500's further convinced our architect that they can handle double duty as layer 2 switches and layer 3 routers. Tough to argue, considering the muscle of the 6500's.

    And this is our internal LAN... HA has its own 6509, and UAT has its own 6504.

    So, politics being politics, that's the plan.

    Oh, and it frees up a large chunk of the budget, which I'll be putting toward either our wireless infrastructure (really want some controllers) or the fall's voice rollout.
    There are only 10 kinds of people... those who understand binary, and those that don't.

    CCIE Studies: Written passed: Jan 21/12 Lab Prep: Hours reading: 385. Hours labbing: 110

    Taking a time-out to add the CCVP. Capitalizing on a current IPT pilot project.
  • CGN_Spec Member Posts: 96 ■■□□□□□□□□
    Great job guys! This is a great post.
  • Turgon Banned Posts: 6,308 ■■■■■■■■■□
    mikearama wrote: »
    Good question, this one. The consensus of our architect and IS Manager is that for the amount of routing our internal cores will do, it would be a huge waste of resources/cycles NOT to use the 6500's as switches as well. We run a collapsed core, and currently don't employ distribution switches... our servers connect to 5 3750 stacks, directly and redundantly connected to the cores.

    The plan to move from 4500's to 6500's further convinced our architect that they can handle double duty as layer 2 switches and layer 3 routers. Tough to argue, considering the muscle of the 6500's.

    And this is our internal LAN... HA has its own 6509, and UAT has its own 6504.

    So, politics being politics, that's the plan.

    Oh, and it frees up a large chunk of the budget, which I'll be putting toward either our wireless infrastructure (really want some controllers) or the fall's voice rollout.

    Just another example of the real world colliding with the design world. If it works, do it.
  • Forsaken_GA Member Posts: 4,024
    Network Designs are based on Politics, Money, and The Right Way To Do It - in that order.

    -Gary A. Donahue, Network Warrior


    Depending on how much traffic he's pushing through there, he can probably get away with it. Going to play havoc with the ability to scale though
  • mikearama Member Posts: 749
    Depending on how much traffic he's pushing through there, he can probably get away with it. Going to play havoc with the ability to scale though

    The network has handled the traffic for 5 years, just via 3750 stacks and EtherChannel uplinks. Now it'll handle layer 2 on the backplane. If anything, based on our server load, I expect increased performance: previously, our server subnets spanned all 5 stacks, requiring servers on different stacks to cross the core to talk to each other. I KNOW that's not considered best practice.

    I'll do a better job on the re-org and group server subnets together. Nothing but backplane, baby!

    And if we run out of switch ports, we can either add another switch module, or connect up a switch stack.

    Yep, politics is politics, but I'm going to make the most of it.
    There are only 10 kinds of people... those who understand binary, and those that don't.

    CCIE Studies: Written passed: Jan 21/12 Lab Prep: Hours reading: 385. Hours labbing: 110

    Taking a time-out to add the CCVP. Capitalizing on a current IPT pilot project.
  • Turgon Banned Posts: 6,308 ■■■■■■■■■□
    Network Designs are based on Politics, Money, and The Right Way To Do It - in that order.

    -Gary A. Donahue, Network Warrior


    Depending on how much traffic he's pushing through there, he can probably get away with it. Going to play havoc with the ability to scale though

    Very true, and you must put your technical reasons and misgivings forward, however powerful politics and money are. At least they were told! As for scale, I suppose that really depends on what the plans are for the future, but as we know, whatever they are, they are probably wrong ;)
  • marlon23 Member Posts: 164 ■■□□□□□□□□
    A lot has been said already; I would like to add a few things, as I work with 6500s a lot.

    For your implementation, dual supervisors in each 6509 are a must. Use SSO as the redundancy mode, and you might consider introducing NSF as well. With dual supervisors and NSF/SSO, if one of the sups in a 6509 fails, almost all services will recover on the 'backup' sup in under 1s. (A quick config sketch follows below.)

    Make sure you have dual power supplies, each capable of handling the full power load alone (power redundancy).

    VSS is a very cool feature: you don't need to mess with HSRP groups, trunks, spanning tree, a lot of SVIs, etc. However, VSS currently supports only one supervisor in each chassis (contact Cisco or a partner to find out when/if dual sups will be supported). Go with SXI; in that release, VSS is in the IP Base feature set.

    And one last thing: get SmartNET on those boxes :)
    LAB: 7609-S, 7606-S, 10008, 2x 7301, 7204, 7201 + bunch of ISRs & CAT switches
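
    To round out the NSF/SSO advice above, here is a minimal config sketch; the OSPF process number is hypothetical, and the power-redundancy line matches the advice in the same post:

      ! Run dual supervisors in stateful switchover mode
      redundancy
       mode sso
      !
      ! Enable NSF under the routing protocol so neighbors keep
      ! forwarding through a supervisor switchover (hypothetical process)
      router ospf 1
       nsf
      !
      ! Power supplies in redundant mode, each sized for the full load
      power redundancy-mode redundant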