JNCIP OSPF Case Study Analysis
I finished the JNCIP OSPF case study, and while doing some checks I got the results below. I'd like to discuss them with you.
*** When I traceroute to 10.0.5.1 from the RIP router, I get the output below:
RIP# run traceroute 10.0.5.1
traceroute to 10.0.5.1 (10.0.5.1), 30 hops max, 40 byte packets
1 172.16.40.2 (172.16.40.2) 0.947 ms 0.790 ms 0.739 ms
2 10.0.8.6 (10.0.8.6) 0.786 ms 0.808 ms 0.778 ms
3 10.0.2.2 (10.0.2.2) 0.863 ms 0.905 ms 10.664 ms
4 10.0.4.2 (10.0.4.2) 0.924 ms 0.936 ms 0.912 ms
5 10.0.5.1 (10.0.5.1) 23.012 ms 1.139 ms 1.330 ms
*** I don't think this is optimal: R3 chose R2 to reach 10.0.5.1 (R1's interface IP).
R3# run show route 10.0.5.1
inet.0: 26 destinations, 27 routes (26 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
10.0.5.0/24 *[OSPF/150] 00:00:49, metric 20, tag 420
> to 10.0.4.2 via fe-1/3/0.12
R3# run traceroute 10.0.5.1
traceroute to 10.0.5.1 (10.0.5.1), 30 hops max, 40 byte packets
1 10.0.4.2 (10.0.4.2) 0.930 ms 0.784 ms 0.789 ms
2 10.0.5.1 (10.0.5.1) 0.978 ms 0.974 ms 0.946 ms
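Before concluding the path is wrong, it can help to look at the AS external (Type 5) LSAs that R3 holds for 10.0.5.0/24, to see which routers originate them and with what metric and tag. A standard way to check (my assumption here is that both R1 and R2 turn out to be advertising the prefix; no output is shown because it depends on the lab):
R3# run show ospf database external lsa-id 10.0.5.0 extensive
The extensive output lists each LSA's advertising router, metric, external type, and tag, which tells you whether R3 really has two equal candidates for the route or whether the redistribution differs between R1 and R2.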
*** When I increased the metric of the R3-R2 link, I found that the path via R1 has the same metric, so why did R3 choose R2 in the first place?
R3# run show route 10.0.5.1
inet.0: 26 destinations, 27 routes (26 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
10.0.5.0/24 *[OSPF/150] 00:00:34, metric 20, tag 420
> to 10.0.4.14 via fe-1/3/0.11
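If the goal is to make the choice deterministic instead of relying on a tie-break, one option is to have the preferred ASBR advertise the prefix with a lower external metric in its OSPF export policy. The sketch below shows the idea on R1; the policy and term names are made up for illustration, the metric value is arbitrary as long as it is lower than what R2 advertises, and in practice you would add the metric action to whatever export policy R1 already uses for this prefix (tag 420 is taken from the outputs above):
policy-options {
    /* policy and term names are hypothetical */
    policy-statement export-10-0-5 {
        term prefer-r1 {
            from {
                route-filter 10.0.5.0/24 exact;
            }
            then {
                /* assumed value, just lower than R2's advertised metric */
                metric 5;
                tag 420;
                accept;
            }
        }
    }
}
protocols {
    ospf {
        export export-10-0-5;
    }
}
With a lower metric in R1's Type 5 LSA, R3 would prefer the path via R1 regardless of which router has the higher router ID.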
*** R5 load-balances across both links, R5-R3 and R5-R4, and this issue appears when R5 uses R3.
R5# run show route 10.0.5.1
inet.0: 23 destinations, 25 routes (23 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
10.0.5.0/24 *[OSPF/150] 00:12:22, metric 42, tag 420
> via at-0/2/1.100
via so-0/3/0.11
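For R5 to actually forward over both links rather than only the next hop marked with ">", Junos normally needs a per-packet load-balancing policy exported to the forwarding table. This is the standard pattern (the policy name is mine; the case study configuration may already contain something equivalent):
policy-options {
    /* policy name is hypothetical */
    policy-statement pfe-load-balance {
        then {
            load-balance per-packet;
        }
    }
}
routing-options {
    forwarding-table {
        export pfe-load-balance;
    }
}
Without this, only the selected next hop is pushed to the forwarding table, so whether traffic transits R3 or R4 depends on which one the route currently points at.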
Comments
hoogen82: You redistributed 10.0.5/24 on both R2 and R1... R2 has a higher router ID and hence is preferred.
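To verify that explanation, you can compare the OSPF router IDs of the two ASBRs directly; "show ospf overview" reports the local router ID near the top of its output (standard commands, no output reproduced here because it depends on the lab addressing):
R1# run show ospf overview
R2# run show ospf overview
If R2's router ID is indeed the higher of the two, that matches the behaviour seen on R3 when both external LSAs carry the same metric.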