interconnecting data centers

certkit Registered Users Posts: 5
Hi guys,

I've put the thread in CCDA, please feel free to move it if it's incorrect.

We have two data centers with about 30 servers on each side, mostly servers for virtualization and storage.
They are currently interconnected with some old 3750s over a 1Gb Metro Ethernet fiber link.

As the need for speed grows, we'll now be getting a 10Gb fiber link. I'm wondering if you have any tips on what core devices would be good to consider.

Our requirements for one switch (the same for the other side):
Connecting ~8 access/distribution devices (Cisco and HP), two 10Gb ports (one of them being that uplink), a dedicated subnet for the interconnection link, and basic extended ACLs (both between VLANs and between the uplink and internal VLANs). No NAT is required.
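To give an idea of the kind of filtering I mean, something along these lines in Cisco IOS syntax (subnets, ports and interface names here are just made-up examples, not our real addressing):

```
! Hypothetical sketch of "uplink <-> internal VLANs" filtering
ip access-list extended DC-INTERLINK-IN
 remark Allow the remote DC's server subnet to reach the local storage VLAN (iSCSI)
 permit tcp 10.2.10.0 0.0.0.255 10.1.20.0 0.0.0.255 eq 3260
 permit icmp 10.2.0.0 0.0.255.255 10.1.0.0 0.0.255.255
 deny   ip any any log
!
interface TenGigabitEthernet1/1
 description 10Gb Metro Ethernet uplink to the other DC
 no switchport
 ip address 10.0.0.1 255.255.255.252
 ip access-group DC-INTERLINK-IN in
```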

Which design would be better:
1) Router as an interconnecting device (core) <-> distribution switches <-> access switches
2) Switch as an interconnecting device (core/distribution) <-> access switches

Would it be a switch or a router, and what models would you recommend?
On the 10Gb router market I've seen the Cisco ASR 1002 or Juniper MX5 3D, versus the Cisco 3750, 4000, 6500 or HP 5400 (or similar) on the switch side.

I think a switch would be the better option, as it can fulfill all our requirements, it forwards packets faster in hardware (ACLs in TCAM), and it's the more cost-effective solution.

I'd really love to get those 6500s, but I think my company won't go for Cisco because of the cost.
ProCurve looks interesting, but the one thing I'm unsure about is that I can't find how to enable a port for layer 3 operation, like "no switchport" on Cisco.
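To illustrate what I mean, this is the Cisco way, and from what I can tell ProCurve seems to do layer 3 on VLAN interfaces instead of routed ports (port and VLAN numbers below are just placeholders):

```
! Cisco: a dedicated routed port
interface TenGigabitEthernet1/1
 no switchport
 ip address 10.0.0.1 255.255.255.252

! ProCurve (sketch): enable routing globally, put the port in its
! own VLAN, and address the VLAN interface instead of the port
ip routing
vlan 100
   name "DC-INTERLINK"
   untagged A1
   ip address 10.0.0.1 255.255.255.252
```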

What would be your recommendations about this, any help, model suggestions, etc? Thanks!


    networker050184 Mod Posts: 11,962
    I'd go with the MX myself. No worry about forwarding or ACLs with a single 10G link on there. If you need something larger like a 6500 look at the larger MX models.
    An expert is a man who has made all the mistakes which can be made.
    malcybood Member Posts: 900
    For 30 servers at each data centre you would be looking at something like a Cisco 4900M with two 10G modules and/or 20 copper ports. You could then add server access switches if you require further server port density: anything from the 10G version of the 2960S to a 3750X, depending on specific requirements and traffic flow.

    The 4900M could be run as a collapsed core/distribution which I've seen designed and implemented on several small college campuses for example.

    A 4506E would also be sufficient if your other services, such as firewalls, SLBs, etc., are external appliances.

    In regard to other options, I would suggest speaking to a Brocade account manager / sales team and exploring their data centre switching products.

    They're much cheaper than Cisco and actually beat the pants off Cisco in many areas of DC switching, including things like FCIP. The command line is more or less identical to Cisco's as well, so there's no headache trying to find a GOOD engineer with HP or Juniper skill sets: a Cisco guy can work on Brocade no problem.

    They supply a free study guide, so you could get Brocade certified for nothing with little skills transfer needed. Have a look at the link below, do some research into their products, and I think you'll be pleasantly surprised.

    Brocade Certified Network Engineer (BCNE)

    I realise this is a Cisco forum, but you did ask for other options! :)

    Also you could consider using Brocade as your core inter-DC switches and Cisco or HP / Juniper at the access layer.

    Some food for thought
    certkit Registered Users Posts: 5
    Thanks for your advice, guys!

    I'll take a look at Brocade; I hadn't considered it as an option. We tend to run multi-vendor equipment and aren't so Cisco-minded.
    Oh, and thanks for the link to the certification site. It sounds like a good technology to learn, and the CLI is very similar to the HP one, as I can see from the BCNE Nutshell PDF.

    Any other advice is welcome!
    dirtyharry Member Posts: 72
    I know this thread is a little old, but have you checked out the ASR's OTV (Overlay Transport Virtualization) technology? Also, if Nexus 7Ks aren't out of the budget, they support OTV as well. Simply put, OTV is like layer 2 routing: the MAC address table of the (core) switch on each side of the DC is advertised to the other(s), and frames are forwarded based on a MAC-to-IP/interface table. It's really cool and easy to configure.

    One of the things that makes OTV great is that it will work over literally any transport network that carries IP. All the layer 2 info is encapsulated in IP packets.
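    To give a rough idea of how little config it takes, an OTV overlay on a Nexus 7K looks something like this (interface names, multicast groups and VLAN ranges below are just placeholders, not from a real deployment):

    ```
    feature otv
    otv site-vlan 99
    otv site-identifier 0x1

    interface Overlay1
      otv join-interface Ethernet1/1   ! IP-facing uplink toward the other DC
      otv control-group 239.1.1.1      ! multicast group for the OTV control plane
      otv data-group 232.1.1.0/28      ! multicast range for flooded traffic
      otv extend-vlan 100-110          ! VLANs stretched between the sites
      no shutdown
    ```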

    Edit: This is, of course, Cisco proprietary. I cannot speak to HP or Brocade solutions.