
Help with EIGRP

cjthedj45 Member Posts: 331 ■■■□□□□□□□
Hi
I'm trying to work out how best to replace two core switches (Core A and Core B) in our network. Below is the configuration from the access stack switches and the cores. The aim is to ensure that the network stays available during the change.

Looking at the configuration below from Access A and Access B, you can see there are two routes. Both switches have the same default route to Core B, marked with D*EX. As the secondary route points to Core A, this leads me to believe that I can remove Core A and nothing will happen, since everything is routing to Core B. What confuses me slightly is that both the default primary route and the feasible successor route have the same metric. Could this in fact mean that both links are being used to load-balance the traffic? I believe load balancing is done with the variance command, but I cannot see this configured anywhere. If, for example, everything is routing to Core B, then when I remove Core B I need to think about the best way to get all routing to go via Core A without any network downtime. These are the ideas I have, but I'm not sure if they are correct:

1. Just remove Core B; as there is a feasible route, as soon as Core B goes down the new route via Core A will be advertised out across the LAN.
2. Change the route on one of the access switches so it defaults to Core A; this should then be advertised out, and all the other routers will update their tables with the new route via Core A instead of Core B.
3. Change the delay or bandwidth metric so that the Core A path becomes the primary route (a rough sketch of what I mean is below).
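
For option 3, I think the change would look something like the sketch below. The delay value is just a guess on my part; as I understand it, delay is set in tens of microseconds and EIGRP adds the inbound interface's delay to the metric, so raising the delay on the uplink towards Core B should make the Core A path win:

! Illustrative only - the value is an assumption, not taken from the real config.
! Raising the delay on Access B's uplink to Core B (Po6) above the delay of the
! Core A uplink (Po2) should make the path via Core A the preferred default route.
interface Port-channel6
 delay 100
! This would need removing again afterwards, or the metrics stay skewed.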

Access B
Po2 Uplink to CORE-A connected routed a-full a-1000
Po6 Uplink to CORE-B connected routed a-full a-1000

P 0.0.0.0/0, 2 successors, FD is 3072
via 10.248.236.5 (3072/2816), Port-channel2
via 10.248.236.21 (3072/2816), Port-channel6

D*EX 0.0.0.0/0 [170/3072] via 10.248.236.21, 7w0d, Port-channel6
[170/3072] via 10.248.236.5, 7w0d, Port-channel2

router eigrp 3750
redistribute connected
no auto-summary
eigrp router-id 10.248.231.254
eigrp stub connected summary
network 10.248.236.0 0.0.0.255


Access A
Po1 Uplink to CORE-A connected routed a-full a-1000
Po7 Uplink to CORE-B connected routed a-full a-1000

P 0.0.0.0/0, 2 successors, FD is 3072
via 10.248.236.1 (3072/2816), Port-channel1
via 10.248.236.25 (3072/2816), Port-channel7

D*EX 0.0.0.0/0 [170/3072] via 10.248.236.25, 7w0d, Port-channel7
[170/3072] via 10.248.236.1, 7w0d, Port-channel1

router eigrp 3750
redistribute connected
no auto-summary
eigrp router-id 10.248.227.254
eigrp stub connected summary
network 10.248.236.0 0.0.0.255

Core A
interface Port-channel1
description Uplink to ACCESS-A
no switchport
dampening
ip address 10.248.236.1 255.255.255.252
ip hello-interval eigrp 3750 1
ip hold-time eigrp 3750 3
ip authentication mode eigrp 3750 md5
ip authentication key-chain eigrp 3750 eigrp-updates
no ip mroute-cache
load-interval 30
carrier-delay msec 0


interface Port-channel9
description Uplink to SERVER-B
no switchport
dampening
ip address 10.248.236.33 255.255.255.252
ip hello-interval eigrp 3750 1
ip hold-time eigrp 3750 3
ip authentication mode eigrp 3750 md5
ip authentication key-chain eigrp 3750 eigrp-routing-updates
no ip mroute-cache
load-interval 30
carrier-delay msec 0

Comments

  • ecbanks Member Posts: 22 ■■■□□□□□□□
    Actually, you do have equal-cost routes, which EIGRP supports by default. You have 2 "successors" in the topology database (not a successor plus a feasible successor), and you've got 2 default routes in the routing table of your access switches. Variance only comes into play when you want to do unequal cost load-balancing. From the output you printed, your access switches are load-balancing across the 2 port-channel uplinks to Core-A and Core-B. This is a pretty typical design.
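
    (If you ever did want unequal-cost load sharing, variance sits under the EIGRP process; a minimal sketch using your AS number is below, purely for illustration. You don't need it for the equal-cost behaviour you already have.)

    router eigrp 3750
     ! equal-cost routes are installed by default, up to the maximum-paths limit
     maximum-paths 4
     ! variance is only for UNEQUAL-cost load sharing: e.g. variance 2 also installs
     ! feasible successors whose metric is within 2x the best metric
     variance 2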

    My tendency is to manually shut down links on the core switch to be upgraded. That forces all the traffic over to the other core switch. That way, you can verify that traffic is flowing through the other core switch just fine before proceeding with the upgrade on the first core switch. Upgrade the core switch, unshut the interfaces to normalize traffic, then do the same process on the opposite core switch...shut down interfaces, verify the network is good, upgrade the switch, unshut interfaces, verify everything is normalized.
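
    (Roughly, the sequence would look like this; the interface names come from the output you posted, and the verification commands are just the standard ones I'd reach for.)

    ! On Core-A, take the uplinks out of play one at a time:
    CORE-A(config)# interface Port-channel1
    CORE-A(config-if)# shutdown
    !
    ! Then confirm from the access side that traffic has re-converged onto Core-B:
    ! only the Core-B neighbour should remain, and the default route should now
    ! point only at the Core-B next hop.
    ACCESS-A# show ip eigrp neighbors
    ACCESS-A# show ip route 0.0.0.0
    !
    ! When the work on Core-A is finished, bring the links back up:
    CORE-A(config-if)# no shutdown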

    I tend not to mess with EIGRP metrics just to swing traffic, because it's easy to forget to undo something you did, and it could affect the larger EIGRP domain, just depending on the overall topology...probably not in a core/access scenario like you're describing, but I like to keep it as simple as possible. There's nothing simpler than shutting down an interface to make sure it's out of play, and then "no shut" to bring it back when ready. YMMV.

    There are certainly other approaches, too. I'm sure everyone has their favorite way to upgrade a pair of core switches while making sure the network is stable throughout.

    HTH.

    /Ethan
  • MrBrian Member Posts: 520
    Ethan, welcome to the forum! Love the Packet Pushers!! (who doesn't). Seriously, great to have you on here. There are some real sharp people on here that I've learned a lot from. You're adding to the mix in a huge way, thanks.

    This post is crazy btw. Not sure if you can keep up in depth posts at this rate lol.. but seriously, much appreciated.
    Currently reading: Internet Routing Architectures by Halabi
  • cjthedj45 Member Posts: 331 ■■■□□□□□□□

    Hi Ethan,

    First of all, thanks very much for your reply. On the core switches there are a few other uplinks to server stacks and a trunk to a firewall switch, etc. So if I understand correctly, what you are saying is that there is in fact no feasible successor, but two successors and a default route to Core B. Therefore I can go to Core A and shut down the uplinks, or in my case power down the whole switch, because I will be replacing it with a new one. At this point I should be okay, because all the devices have a default route to Core B and will continue routing to Core B as if nothing has happened. Then when I come to replace Core B, I do the same as before, but because there is a successor route via Core A this will be used, and therefore there will be no loss of connectivity. This option sounds much simpler than changing metrics, etc. Let me know if you think that sounds about right.

    I'm a bit rusty on EIGRP and have only learned it at CCNA level. This is one of my first real projects that I'm managing from start to finish, so I want to get it right. I will also be reconfiguring the access stacks into two new stacks of 4 and 6. Fingers crossed I can pull it off. Thanks again for your response, it's much appreciated.
  • ecbanks Member Posts: 22 ■■■□□□□□□□

    That does sound about right, yes. I'm basing that on your CLI output in the first post where you display the routes and topology for your default route. A good thing to do before you power down that first Core-A to replace it is to check all of your access switches to be sure that their uplink to Core-B is working properly (the links are up, etc.). You wouldn't want one of your uplinks to Core-B to be dead and not notice it until Core-A goes down and all of a sudden that access switch can't talk to the network.

    Same thing with firewalls, servers, and anything else that's got dual-uplinks to Core-A and Core-B. Make sure that both uplinks are really working before you begin maintenance to minimize risk of an outage. Sometimes a dead link can go unnoticed if all services are available.
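
    (A few of the pre-checks I'd typically run on each access switch; these are standard IOS show commands, nothing specific to your kit.)

    ACCESS-B# show interfaces status
    ! both uplink port-channels should show "connected"
    ACCESS-B# show ip eigrp neighbors
    ! an adjacency to BOTH cores should be present
    ACCESS-B# show ip route 0.0.0.0
    ! both next hops should still be installed for the default route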

    Another thing you might like to do to help ease your mind during planning/preparation is to use a whiteboard and sketch out a high-level diagram of the network. Then erase Core-A from the board, and think about how your network traffic should be flowing when only Core-B is left. Ask yourself about routing tables, spanning-tree, uplinks, etc. How would this user get to the firewall? How would this user get to this server? Etc. Doing the whiteboard sketch might reveal a problem you didn't think of because nerves are crowding out logic. The visual stuff always helps me.

    One more thought is that you minimize risk when you minimize the number of changes you're making at a whack. You're doing access-layer as well as core-layer changes. Do-able...but maybe it's simpler to just do the core upgrade in one maintenance window and the access stuff in another window, once you're happy the upgraded core is working like you want? Troubleshooting issues after a big change window can be painful, which is why I bring that up. But I know that sometimes the business will only give you one window to get stuff done in.

    /Ethan
  • cjthedj45 Member Posts: 331 ■■■□□□□□□□
    Ethan, again thanks for a great response. Okay, so what you're saying is that it would be better to first close each uplink and make sure that the access stack that uses that uplink is still connected and traffic is flowing? I have performed a due diligence exercise to see what services could be affected, and I will be doing a proof of concept before the actual change. This will help me understand if any services are likely to be affected. The change will also be spread over two days, as, like you said, trying to get it all done in one night could add too much pressure. The change will be done as follows:

    13th of March: Proof of concept.
    Close the uplinks one by one on Core A; after each uplink is closed, check connectivity. Repeat for Core B.

    30th of March:
    Swap the core switches for two new 48-port 3750 switches.

    31st of March:
    Split the access stacks on the ground floor into two stacks of three. On the first floor we have 7 switches stacked; I will add one and split them to make two stacks of 4.

    I have done some network diagrams. I think I will get one up on the whiteboard and do as you said, though, as it may reveal something that has been left out. I was doing a patch diagram for the stacks the other day and realised that we have a SPAN port that mirrors voice traffic so it can be recorded. As I will be splitting the stacks, we will need a new SPAN port on each new stack, patched to the call recording server. This may throw a spanner in the works, as the Telco team need to check whether the server has enough network cards and what configuration will be required from their side.
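
    (For reference, I'm assuming the SPAN session on each new stack would look roughly like this; the session number, voice VLAN and destination port are made up until the Telco team confirm the details.)

    monitor session 1 source vlan 200 rx
    monitor session 1 destination interface GigabitEthernet1/0/48
    ! vlan 200 = assumed voice VLAN; Gi1/0/48 = port patched to the call recording server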

    I just want to make sure I get it right; as I mentioned, this is my first real big project on my own. I have done smaller projects, but it can be quite nerve-racking on the night. I think the key is to make sure all the planning and due diligence is done in advance and the configuration is completed, so that it leaves as little as possible to do on the night. There is always the unforeseen that can throw you, and that can be quite overwhelming on the night, especially as I'm changing so much. It's knowing where to look and isolating the issue down to a few things. Two years ago I had never even logged on to a switch, and now I'm performing changes like these. I'm happy with my progress and I'm still learning all the time. I do find it frustrating when I don't always have the answer, and I think there are still gaps in my knowledge. I'm hoping that by the time I have finished the CCNP Security and CCNP track I will have a more rounded knowledge and be able to help people on techexams like you have helped me.

    In the last response Packet Pushers was mentioned and I was not sure what it was. I have since looked it up; it looks like a good site and it has a lot of good reviews. I'm going to download the podcasts and listen to them on the way to work.

    Thanks once again for all your help. You have really helped me with EIGRP and provided me with some good tips too. It's much appreciated. Hopefully I will be on the CCNP track next year, so I will gain a fuller understanding of EIGRP then.
  • ecbanks Member Posts: 22 ■■■□□□□□□□

    You have a good plan, I think. Your POC idea of shutting down ports on one core switch is a good one. Preparation pays off. And yes, I know what you mean about changes being nerve-racking. I've done an awful lot of changes over the years, and I still get amped up when performing certain tasks. When you get to the point of actually making changes, remember to take it one step at a time, and verify along the way as you go, if at all possible. Another thought is to consider a back-out plan as you prepare. The back-out plan is what you'll do if things just aren't going well, and you need to put everything back the way it was and re-group. You might think "I'll never go back!", but sometimes bad things happen during a change that you just can't resolve. You'll be tired and flustered. Having that back out plan to follow can be really helpful to make sure you put the network back the way it needs to go, if it comes to that.

    Another thought - don't forget the little things. If you're in the middle of a change and are trying to figure out why an EIGRP adjacency didn't come up, start at the basics. Is the link plugged in? Is the interface "no shutdown"? That kind of stuff, more often than not, is at the root of simple connectivity issues.
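
    (A couple of the basic sanity checks that catch most of those, all standard IOS commands:)

    show ip interface brief          ! is the link even up/up?
    show ip eigrp interfaces         ! is EIGRP actually running on that interface?
    show ip eigrp neighbors          ! has the adjacency formed?
    show run interface Port-channel1 | include key-chain
    ! ^ worth a glance given the MD5 authentication in your configs; a key-chain
    !   mismatch will quietly stop the adjacency from forming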

    Packet Pushers is a community-driven podcast that I helped to start. We talk networking on the show, and I, along with many others, contribute blog articles to the site.
    /Ethan