kryolla wrote: » Load balancing via equal or unequal cost links involves 2 things. 1 is the routing protocol having the same metric to the same destination, which puts the links in the routing table. 2 is the switching method, either CEF or process. Process switching is per packet, so 1 packet is sent across 1 link and the next is sent across the other link. CEF is per source/destination or per packet. Load balancing isn't about saving time, it's about using multiple links efficiently; what saves time is CEF instead of process switching.

Rack1R6#sh ip cef 155.1.23.0
155.1.23.0/24, version 155, epoch 0, per-destination sharing
0 packets, 0 bytes
  via 155.1.146.4, FastEthernet0/0.146, 0 dependencies
    traffic share 1
    next hop 155.1.146.4, FastEthernet0/0.146
    valid adjacency
  via 155.1.146.1, FastEthernet0/0.146, 0 dependencies
    traffic share 1
    next hop 155.1.146.1, FastEthernet0/0.146
    valid adjacency
  0 packets, 0 bytes switched through the prefix
  tmstats: external 0 packets, 0 bytes
           internal 0 packets, 0 bytes

Rack1R6#sh ip route 155.1.23.0
Routing entry for 155.1.23.0/24
  Known via "ospf 1", distance 110, metric 129, type intra area
  Redistributing via eigrp 100
  Advertised by eigrp 100 metric 1 1 1 1 1
  Last update from 155.1.146.4 on FastEthernet0/0.146, 1d12h ago
  Routing Descriptor Blocks:
    155.1.146.4, from 150.1.2.2, 1d12h ago, via FastEthernet0/0.146
      Route metric is 129, traffic share count is 1
  * 155.1.146.1, from 150.1.2.2, 1d12h ago, via FastEthernet0/0.146
      Route metric is 129, traffic share count is 1

Rack1R6(config-subif)#ip load-sharing ?
  per-destination  Deterministic distribution
  per-packet       Random distribution

Rack1R6#sh ip cef 155.1.23.0
155.1.23.0/24, version 155, epoch 0, per-packet sharing
0 packets, 0 bytes
  via 155.1.146.4, FastEthernet0/0.146, 0 dependencies
    traffic share 1, current path
    next hop 155.1.146.4, FastEthernet0/0.146
    valid adjacency
  via 155.1.146.1, FastEthernet0/0.146, 0 dependencies
    traffic share 1
    next hop 155.1.146.1, FastEthernet0/0.146
    valid adjacency
  0 packets, 0 bytes switched through the prefix
  tmstats: external 0 packets, 0 bytes
           internal 0 packets, 0 bytes
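For anyone labbing this up, here is roughly how the sharing mode gets changed on the subinterface, and how you can ask CEF which link a specific flow would use. This is a sketch against the same Rack1R6 topology shown above; the source address 155.1.146.6 is just an assumed example host.

Rack1R6(config)#interface FastEthernet0/0.146
Rack1R6(config-subif)#ip load-sharing per-packet
! or back to the default behaviour
Rack1R6(config-subif)#ip load-sharing per-destination
Rack1R6(config-subif)#end
Rack1R6#show ip cef exact-route 155.1.146.6 155.1.23.2

With per-destination sharing, show ip cef exact-route prints the single path that a given source/destination pair hashes to, which is the quickest way to see why one flow always sticks to the same link.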
adam-b wrote: » Many good points here, and I do respectfully disagree with some of them. There are routing protocols that will inject multiple routes to the same destination even if the metrics vary, e.g. EIGRP unequal cost load balancing (the variance command). Also, I think that load balancing CAN be about time, in that you are effectively increasing the bandwidth to a load-balanced destination when doing so; therefore, when the links are maxed out, you are moving more data quicker than if they were not load balanced. Correct me if I'm wrong on any of the above points. Thanks!
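For anyone who hasn't seen the variance piece configured, a minimal sketch (the EIGRP AS number 100 matches the redistribution shown in the output above; the variance value 2 is just an example):

Rack1R6(config)#router eigrp 100
Rack1R6(config-router)#variance 2

With variance 2, EIGRP will also install any feasible successor whose metric is less than 2 times the successor's metric, and the traffic share counts in show ip route then reflect the metric ratio instead of being equal.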
kryolla wrote: » I think I said this: load balancing via equal or unequal cost links involves 2 things. And sure, that helps if you are experiencing congestion, but that delay is network delay, not the processing delay you experience with process switching. So I guess it depends on what type of delay you mean.
kryolla wrote: » Point number 1 was made to explain that it is the routing protocol that puts multiple links in the routing table for that destination. I should have put unequal metric as well, my bad. BGP also does unequal cost load balancing.
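For reference, a rough sketch of how BGP unequal cost load sharing is commonly enabled with the link bandwidth (dmzlink-bw) feature. The AS numbers and neighbor addresses here are made up for illustration, and eBGP multipath generally also requires the candidate paths to come from the same neighboring AS:

router bgp 65001
 neighbor 192.0.2.1 remote-as 65100
 neighbor 198.51.100.1 remote-as 65100
 address-family ipv4
  neighbor 192.0.2.1 activate
  neighbor 192.0.2.1 dmzlink-bw
  neighbor 198.51.100.1 activate
  neighbor 198.51.100.1 dmzlink-bw
  bgp dmzlink-bw
  maximum-paths 2

maximum-paths lets BGP install both paths, and dmzlink-bw makes the traffic share proportional to the bandwidth of each link rather than 50/50.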
e24ohm wrote: » Not to hijack your thread, but when implementing BGP in a production environment, what is it called when the ISP(s) bind a single IP address to your trunks so that load balancing takes place both ways, inbound and outbound? I am looking at doing this, but I forget the steps and what the term is called. Thank you.