
[Work Advice] Network Renovation: Collapsed Core

mapletune Member Posts: 316
Hi all~

Our company has pretty much outgrown its original network design and is looking to renovate it. At the moment, its 4~6 subnets are connected together by a FreeBSD router, which doesn't give us much flexibility at either L2 or L3...

I'm hoping to introduce a collapsed core design (with either HP IRF or VRRP) at our headquarters. However, I've run into some problems with switch research and implementation. Specifically, I didn't have time to look for a proper core switch and instead was forced to use a lite L3 switch, an HP A5120 (JE066A). This didn't work out, as it can only hold up to 1024 entries in its ARP table.
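
In case it helps anyone sanity-check the problem, below is a rough sketch of how one could count how many ARP entries the core is actually holding, via SNMP (Python with pysnmp; the management IP and community string are placeholders, and I'm assuming the switch exposes the standard ipNetToMediaTable):

    # Rough sketch: walk ipNetToMediaPhysAddress and count rows to see how
    # full the core's ARP table is. Assumes SNMP v2c read access; the IP
    # and community below are placeholders, not our real values.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, nextCmd)

    SWITCH = '192.0.2.10'                # placeholder management IP
    COMMUNITY = 'public'                 # placeholder read-only community
    ARP_COLUMN = '1.3.6.1.2.1.4.22.1.2'  # ipNetToMediaPhysAddress

    count = 0
    for err_ind, err_stat, _, var_binds in nextCmd(
            SnmpEngine(), CommunityData(COMMUNITY),
            UdpTransportTarget((SWITCH, 161)), ContextData(),
            ObjectType(ObjectIdentity(ARP_COLUMN)),
            lexicographicMode=False):    # stop at the end of this column
        if err_ind or err_stat:
            raise RuntimeError(err_ind or err_stat.prettyPrint())
        count += len(var_binds)          # one row per ARP entry

    print(f'ARP entries currently held: {count} (the A5120 tops out at 1024)')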

Some specifics:
- We have 4~6 subnets, all using core as gateway.
- Around 2000+ devices (one of the subnets has 1500+ devices...)
- Apart from the main rack, we have around 5~6 other locations/wiring closets.
- We will need some VLANs to span all these locations, so L2 at the core is necessary.

We expect to grow to this:
- 10~20+ VLANs, all using the core as gateway
- Around 4000 devices (we plan to limit each subnet to 512 devices; rough sizing sketch below)
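
For reference, here's the back-of-the-envelope sizing I'm working from (Python; the 10.0.0.0/16 block and the /23 per-VLAN size are placeholder assumptions, not a finalized addressing plan):

    # Rough sizing sketch: how many VLANs does a 512-host cap imply at 4000
    # devices, and does the total fit the A5120's 1024-entry ARP table?
    # The 10.0.0.0/16 supernet and /23 per-VLAN size are assumptions.
    import ipaddress

    DEVICES_NOW = 2000
    DEVICES_PLANNED = 4000
    PER_SUBNET_CAP = 512           # planned limit per VLAN
    A5120_ARP_LIMIT = 1024         # what the lite L3 switch can hold

    # A /23 gives 510 usable hosts, which roughly matches the 512-device cap.
    supernet = ipaddress.ip_network('10.0.0.0/16')
    vlan_blocks = list(supernet.subnets(new_prefix=23))

    vlans_needed = -(-DEVICES_PLANNED // PER_SUBNET_CAP)   # ceiling division
    print(f'Minimum VLANs at {PER_SUBNET_CAP} hosts each: {vlans_needed}')
    print(f'/23 blocks available inside {supernet}: {len(vlan_blocks)}')

    for label, hosts in (('today', DEVICES_NOW), ('planned', DEVICES_PLANNED)):
        verdict = 'fits' if hosts <= A5120_ARP_LIMIT else 'exceeds the table'
        print(f'{label}: {hosts} hosts vs {A5120_ARP_LIMIT} ARP entries -> {verdict}')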

My concerns are:
- Should I worry that the core switch will be exposed to broadcasts from every VLAN and will have to route all IP traffic between them?
- Even though it wasn't my decision to use the lite L3 switch, I was unaware of the ARP limitation until I actually tested it. Are there any other important switch specs one should pay attention to when designing/implementing?
- I haven't even gathered statistical data on current usage (broadcast pps, average inter-VLAN traffic, etc.). Would committing to a switch without this data be an unacceptable risk? (The sampling sketch below shows the kind of data I mean.)
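
To give an idea of the numbers I'd want before committing, here's a rough sampling sketch (Python with pysnmp; the switch IP, community string, and ifIndex values are placeholders, and I'm using the standard IF-MIB ifInBroadcastPkts counter):

    # Rough sketch: sample IF-MIB ifInBroadcastPkts twice, a minute apart,
    # and turn the delta into broadcast packets-per-second per interface.
    # The IP, community, and ifIndex values are placeholders; counter
    # wrap-around between samples is ignored for simplicity.
    import time
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    SWITCH = '192.0.2.10'                      # placeholder management IP
    COMMUNITY = 'public'                       # placeholder read-only community
    IFINDEXES = [1, 2, 3]                      # placeholder ifIndex values to watch
    IF_IN_BCAST = '1.3.6.1.2.1.31.1.1.1.3'     # IF-MIB::ifInBroadcastPkts
    INTERVAL = 60                              # seconds between samples

    def read_counters():
        """Return {ifIndex: ifInBroadcastPkts} for the interfaces we care about."""
        counters = {}
        for idx in IFINDEXES:
            err_ind, err_stat, _, var_binds = next(getCmd(
                SnmpEngine(), CommunityData(COMMUNITY),
                UdpTransportTarget((SWITCH, 161)), ContextData(),
                ObjectType(ObjectIdentity(f'{IF_IN_BCAST}.{idx}'))))
            if err_ind or err_stat:
                raise RuntimeError(err_ind or err_stat.prettyPrint())
            counters[idx] = int(var_binds[0][1])
        return counters

    first = read_counters()
    time.sleep(INTERVAL)
    second = read_counters()
    for idx in IFINDEXES:
        pps = (second[idx] - first[idx]) / INTERVAL
        print(f'ifIndex {idx}: ~{pps:.1f} broadcast pps inbound')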

To be honest, I only have CCNA-level knowledge, and there are definitely more qualified people out there who could design and put this together better... But at the moment, our company only has me, so I'll try my best.

Any help and advice to point me in the right direction would be immensely appreciated~!! =D

Grateful~
- Mike
Studying: vmware, CompTIA Linux+, Storage+ or EMCISA
Future: CCNP, CCIE

Comments

    santaowns Member Posts: 366
    I'll answer very quickly for now. We use HP IRF at a number of branches, but in the actual data center we collapsed our core using Nexus gear from Cisco. Each cabinet has its own Nexus 2K, which connects to two Nexus 7Ks.
    VAHokie56 Member Posts: 783
    What is your budget?
    .ιlι..ιlι.
    CISCO
    "A flute without holes, is not a flute. A donut without a hole, is a Danish" - Ty Webb
    Reading: NX-OS and Cisco Nexus Switching: Next-Generation Data Center Architectures
    it_consultant Member Posts: 1,903
    OK...the HP you bought was hopelessly underpowered. Expect to spend into the thousands of dollars for a collapsed core, and remember: since it is doing both your routing and your high-speed bridging, you should have two of them. The best way to go about that is to use a stacking technology, a fabric, or multi-chassis trunking.

    I think I know your budget, and you could get by with 2x Brocade ICX 6610 (or the equivalent HP, but BRCD is what I know) in a hitless-failover stack. It can handle 32,000 ARP entries, which is well beyond your need.

    If you have the money, I would go with 2x ICX 6650s in a multi-chassis trunk with 40Gb uplinks to your lower aggregation and distribution switches.

    As far as collapsed core vs. traditional goes, with the switching power available nowadays, I am always surprised when someone isn't running a collapsed core topology.
    mapletune Member Posts: 316
    Hi all~ Thanks for the replies guys =D really thankful!

    To be honest, I am completely clueless about our budget; I'm a junior employee, so I can only advise. The way it works is that my manager and the company are always looking for the cheapest solution that meets our minimum requirements.

    In any case, I'm guessing our budget is around 10,000~20,000.

    2x Brocade ICX 6610 sounds like a really good option, though the lack of a Brocade presence here may be a problem.

    We are also looking at HP 5500 EI switches at the moment.

    What I'm going to do this week is use SolarWinds, Nagios, or Zabbix to gather data on our existing infrastructure and get an idea of our actual traffic and bandwidth needs. That should help us be more confident when we make our decision.

    I'll update the thread to let you guys know how it turns out! =)
    Studying: vmware, CompTIA Linux+, Storage+ or EMCISA
    Future: CCNP, CCIE