Nexus: BGP routing of host (/32) prefixes for Docker infrastructure
saddayz
Member Posts: 29 ■□□□□□□□□□
Hello,
We run a data center with an OpenStack cloud platform. The team responsible for the cloud service needs more performance, so one possibility would be to deploy a project called Calico (https://docs.projectcalico.org/v2.6/reference/private-cloud/l3-interconnect-fabric). This project requires BGP peering with the network equipment. Each Nexus device would carry roughly 200 BGP sessions and about 7k host (/32) routes learned via BGP. We have a wide range of device models:
From the Nexus 5548 up to the Nexus 9000, and each has its own hardware limits. For example, the Nexus 5548 can support only 7,200 dynamic routes, and the number of BGP sessions is limited as well. In some cases it would be possible to work around this by adding a Quagga-type box in front of the switch, but that doesn't seem like good network design.
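For what it's worth, here is a minimal sketch of what such an intermediate Quagga box could do: terminate the many compute-node sessions and hand the Nexus a single summary instead of thousands of /32s. All addresses, prefixes, and AS numbers below are made up for illustration.

```
! Quagga bgpd sketch (hypothetical addressing)
router bgp 64601
 bgp router-id 10.10.255.1
 ! upstream Nexus peer
 neighbor 10.10.255.254 remote-as 64512
 ! advertise only the aggregate upstream, suppress the /32 specifics
 aggregate-address 10.10.0.0/22 summary-only
```

This keeps the Nexus routing table small, at the cost of an extra box in the path, which is exactly the design trade-off in question.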
Also, from a design standpoint it seems strange to run BGP with this number of routes on pure DC equipment (talking about NX-OS devices). Do you have any observations about this kind of setup? It's odd that there is so little information about interconnecting pure networking devices with virtual containers.
P.S. The Cisco specification pages also list two sets of numbers: verified topology and maximum limits. How do you know whether the maximum limits will be achievable in your setup?
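To compare a live box against those published numbers, the current route and session counts can be checked on NX-OS with something like the following (exact output varies by platform and release, so the scalability guide for your specific code train is the real authority):

```
show ip route summary vrf default
show bgp sessions
```

The verified-topology figures are what Cisco actually tested; the maximum limits are theoretical ceilings, so staying well below them is the safer design target.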
Thanks
Comments
d4nz1g Member Posts: 464
The maximum limit is usually the "theoretical maximum," considering the HW limitations (memory, ASIC performance, TCAM size, etc.).
As I understand it, each leaf switch (probably an N5K) will be a BGP route reflector for a given POD, is that right? If so, will you really have >7k hosts in a single POD?
Also, since you are concentrating a certain pool of hosts behind a given leaf switch, you should be able to summarize between the leaf switches and only advertise the specifics when needed (e.g. when a given VM/container has to be placed somewhere else).
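On NX-OS that summarization idea would look roughly like this (prefix and ASN are hypothetical; the specifics would still exist locally, only the aggregate leaves the leaf):

```
router bgp 64512
 address-family ipv4 unicast
  ! advertise one summary toward the other leaves/spines
  ! instead of the per-host /32s
  aggregate-address 10.20.0.0/22 summary-only
```

This only helps if host addressing is actually aligned to racks, which is one of the constraints the Calico L3 fabric docs discuss.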
Anyway, I have little exposure to DC routing designs, but you should be able to overcome some limitations using the regular tools BGP offers (route reflectors, confederations, etc.).
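For the route-reflector part, a minimal NX-OS sketch of a ToR reflecting for its compute nodes might look like this (iBGP within a made-up AS 64512; the neighbor address is hypothetical):

```
router bgp 64512
 neighbor 10.20.0.11
  remote-as 64512
  address-family ipv4 unicast
   ! reflect iBGP routes between compute-node clients
   route-reflector-client
```

Reflection avoids a full iBGP mesh between compute nodes, but note it does not by itself reduce the number of /32 routes the ToR has to hold.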
saddayz Member Posts: 29 ■□□□□□□□□□
Thanks for the reply.
BTW, we are currently using a traditional hierarchical L3 DC model. And yes, each ToR switch (Nexus) would be an RR for every compute node. The problem is that all routes from the compute nodes would be /32s, because the ToR switch has to know exactly where each host resides. So in certain situations the routing table of a Nexus 5K would be overwhelmed.
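As a safety net against exactly that overflow, NX-OS supports a per-neighbor prefix limit. A sketch, with hypothetical addressing and a threshold chosen just under the 5548's ~7,200-route ceiling:

```
router bgp 64512
 neighbor 10.20.0.11
  remote-as 64512
  address-family ipv4 unicast
   ! log a warning as the table approaches the 5548 limit
   ! (drop "warning-only" to tear down the session instead)
   maximum-prefix 7000 warning-only
```

It doesn't solve the capacity problem, but it turns a silent table overflow into a visible event before forwarding breaks.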