New Remote Location: Can only get a collapsed core/distro layer
Hey guys,
So my work just opened up a new remote location and ownership wants to set up a DR site for VMware there. However, they threw me a curveball: they said either get your servers or get your core layer switches. So I'm like, well **** , I need at least two servers and a SAN, and they said then I can only have two L3's and three L2's for the remote network... I was like, OK...
I squeezed into the mix two 10G SR fiber uplinks from the Cisco 3750E's (which come with StackWise) at the collapsed core/distro down to the three Cisco 2960G's at the access layer. My question for you guys: I'm torn on this design.
Design 1: Make the core/distro OSPF Area 0 and place each ESXi server's 3x gigabit bonded EtherChannel connections at the access layer. Traffic would ride the twin 10G links, bonded into an EtherChannel, back to the core for VLAN routing, with local LAN traffic going back and forth over that EtherChannel and then out through our firewall and the 1921 router.
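For Design 1, the core/distro side would look roughly like this; interface numbers, VLAN IDs, and addressing are just placeholders, not an actual config:

! 3750E stack: twin 10G SR uplinks bundled toward the access layer
interface range TenGigabitEthernet1/0/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 description Twin 10G EC to access layer
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! Inter-VLAN routing stays on the collapsed core via SVIs
ip routing
!
interface Vlan10
 description Local LAN / wireless
 ip address 10.10.10.1 255.255.255.0
!
interface Vlan20
 description ESXi / DR
 ip address 10.10.20.1 255.255.255.0
!
router ospf 1
 network 10.10.0.0 0.0.255.255 area 0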
Design 2: Place each ESXi server's 3x gigabit bonded EtherChannel connections on the core/distro layer and let the LAN network reach the servers over those bonds, keeping the routing for the ESXi servers local to the core/distro so that only switching for the local LAN network takes place across the EtherChannel.
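For Design 2, the server-facing bundles would sit on the core/distro stack, roughly like this; port numbers and VLANs are made up, and note that with a standard vSwitch the channel has to be a static "mode on" bundle with IP-hash load balancing on the ESXi side, since LACP needs a distributed switch:

! One 3x 1Gb EtherChannel per ESXi host, terminating on the 3750E stack
interface range GigabitEthernet1/0/10 - 12
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 20,30
 channel-group 10 mode on
!
interface Port-channel10
 description ESXi host 1 uplinks
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 20,30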
Obviously both of these would work; I just need to consider routing traffic over the twin 10G pipes. I'm curious if anyone here has ever done something like this and would care to share their outcome with using a collapsed core in this manner. It's not how I'd want to do it, but it seems I need to be flexible because of costs...
Comments
-
philz1982 Member Posts: 978
Well, could you get core layer switches that run on a converged solution that includes servers? For example, the VCE Block System 340 (which is obviously overkill in this situation) contains switches, storage, and compute all in one package.
Read my blog @ www.buildingautomationmonthly.com
Connect with me on LinkedIn @ https://www.linkedin.com/in/phillipzito -
Dieg0M Member Posts: 861
Could you draw us a diagram for design 1 and design 2 and include the edge router + firewall in there? That would help me understand what you are saying, as it is not clear.
Follow my CCDE journey at www.routingnull0.com
-
Deathmage Banned Posts: 2,496
Could you draw us a diagram for design 1 and design 2 and include the edge router + firewall in there? That would help me understand what you are saying, as it is not clear.
Here you go, a quick one in Packet Tracer...
Design 1: ESXi servers at the access layer.
Design 2: ESXi servers at the core/distro layer.
Oops, forgot the firewall; the firewall is in between the router and the cloud.
Well, could you get core layer switches that run on a converged solution that includes servers? For example, the VCE Block System 340 (which is obviously overkill in this situation) contains switches, storage, and compute all in one package.
Have to keep my costs below $35k; it's $17k alone for the SAN. ...reducing some of my costs by doing OSPFv2 over VPN with the firewall.
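For the OSPFv2-over-VPN piece, what I had in mind is something along these lines on the 1921 at the DR site, with a GRE tunnel riding the firewall's IPsec back to the main site; the addressing is made up, I'm assuming the tunnel lands on the 1921 rather than the firewall itself, and the far end needs the mirror-image config:

interface Tunnel0
 ip address 172.16.0.2 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.1
!
router ospf 1
 network 172.16.0.0 0.0.0.3 area 0
 network 10.10.0.0 0.0.255.255 area 0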
If it comes down to it I'll just get Dell N-Series L3's and L2's to save some more money, but I kind of want Cisco. But the kicker is VMware licensing.
The almighty dollar, lol! -
Legacy User Unregistered / Not Logged In Posts: 0 ■□□□□□□□□□
If it comes down to it I'll just get Dell N-Series L3's and L2's to save some more money, but I kind of want Cisco
Not sure if you ever worked on the Dell N Series, but the CLI is VERY similar to Cisco switches. -
Dieg0M Member Posts: 861
Why do you need to connect the servers at the core/dist layer in Design 2? What are the design goals that you are trying to achieve?
Follow my CCDE journey at www.routingnull0.com
-
Deathmage Banned Posts: 2,496
Why do you need to connect the servers at the core/dist layer in Design 2? What are the design goals that you are trying to achieve?
Well, my thoughts are this: the access layer at this location is doing a ton of wireless data transfer, I mean 45 APs for a warehouse. My thought is to keep the routed traffic of the wireless VLAN going over the EC trunks and keep the ESXi traffic in its own VLAN, but never have data from the ESXi servers going over the EC trunks, since they will be doing 30-minute RPOs from our datacenter at my current location.
Essentially I want to keep the EC trunks from becoming a potential bottleneck for the VMware SRM cluster due to LAN-based burst congestion from the on-site Design/CAD and conveyor systems, plus the compounding effect of all those wireless APs.
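The way I picture enforcing that is just pruning the ESXi VLAN off the twin 10G EC trunks so only the LAN and wireless VLANs ride them; the VLAN numbers here are made up:

interface Port-channel1
 description Twin 10G EC to access layer
 switchport trunk allowed vlan 10,30,40

That way the SRM/replication VLAN only exists on the core/distro ports the hosts plug into.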
I told them it would be easier if I purchased a wireless controller, but they don't want to invest in one...
See, the site currently has an L3-to-L2 layer over a maxed-out Cat5e bond of eight 1Gb connections, but since CAD pumps out a ton of online content I need to make sure that doesn't impact the ESXi cluster they want implemented. The hope is that with dual 10Gb EC trunks this could be relaxed a tad.
I'm managing 3 locations for my job and the networks keep getting more and more complex. -
Dieg0M Member Posts: 861
Did you do a bandwidth analysis to see if you actually have traffic congestion? Do you think QoS will help? From a pure layering perspective you wouldn't want to connect servers at the core/dist layer to avoid a congestion problem at the access layer. This said, each network is unique, but I would try to avoid fixing a problem by introducing another one.
Follow my CCDE journey at www.routingnull0.com
-
Deathmage Banned Posts: 2,496
I could give QoS a go and see if that helps. I don't use QoS much; I'll need to google that one. That's probably a CCNP topic.
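From a quick bit of googling, the starting point might be to classify and mark the replication traffic inbound on the server ports and let the switch's queues prefer it, something like the below; the ACL, class names, and subnet are made up, and the egress srr-queue tuning on the 3750E is a whole separate exercise:

mls qos
!
ip access-list extended ESXI-REPLICATION
 permit ip 10.10.20.0 0.0.0.255 any
!
class-map match-all STORAGE
 match access-group name ESXI-REPLICATION
!
policy-map MARK-IN
 class STORAGE
  set dscp af31
!
interface range GigabitEthernet1/0/10 - 12
 service-policy input MARK-IN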
CCDA might be something for me to do after CCNA.
Not sure if you ever worked on the Dell N Series, but the CLI is VERY similar to Cisco switches.
The core here at the main location is Dell N4000's, the distro is N3000's, and the access layer is N2000's. The CLI is indeed very much alike, as it uses PAgP and not just LACP. Plus PVST+, EIGRP, and CDP are standard on them; they're very lovely devices.
CDP comes in handy on the VMware cluster with NetFlow. -
Dieg0M Member Posts: 861
Just remember that QoS will not fix your problem but only alleviate the symptoms. If you have a congestion problem, get more bandwidth on those links.
Follow my CCDE journey at www.routingnull0.com
-
networker050184 Mod Posts: 11,962 Mod
Did you do a bandwidth analysis to see if you actually have traffic congestion? Do you think QoS will help? From a pure layering perspective you wouldn't want to connect servers at the core/dist layer to avoid a congestion problem at the access layer. This said, each network is unique, but I would try to avoid fixing a problem by introducing another one.
Bingo. Sounds like you're trying to throw a Cisco CCNA book at it rather than sit down and come up with a design that fits your actual traffic needs. Design a network to fit the needs, not force the needs into a network.
An expert is a man who has made all the mistakes which can be made. -
Deathmage Banned Posts: 2,496
You all know me so well... I must hide in my bunker for a few decades now to formulate a new strategy
Will give some thought to the layered design and go from there...
Still have much to learn. -
Deathmage Banned Posts: 2,496
So I gave this question some thought. I guess the core of my question is this: with a dual 10G fiber EC, at what point will I need to start worrying about bandwidth issues?
I'm deferring to the more knowledgeable guys on this one.
Would it be advisable to monitor the EC with NetFlow, and/or watch what's going over the pipe with Wireshark? Just thoughts, but what would be a good course of action?
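If Wireshark is the route, I'm guessing a local SPAN session on the 3750E pointed at the port-channel, plus a shorter load-interval for rough utilization numbers, is the quickest way to see what's actually crossing it; port numbers are placeholders, and a 1Gb destination port will drop anything above its own line rate, so it's for sampling rather than a full capture:

monitor session 1 source interface Port-channel1 both
monitor session 1 destination interface GigabitEthernet1/0/24
!
interface Port-channel1
 load-interval 30

Then "show interfaces Port-channel1 | include rate" gives a rough 30-second utilization figure to compare against the 20G of headroom. Does that sound like a sane approach? -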
stlsmoore Member Posts: 515 ■■■□□□□□□□
Any reason why you're wanting to connect the access layer switches directly to each other instead of having each access layer switch connect to both of the core/distro switches? Similar to this picture:
My Cisco Blog Adventure: http://shawnmoorecisco.blogspot.com/
Don't Forget to Add me on LinkedIn!
https://www.linkedin.com/in/shawnrmoore