Thoughts: New network design
Deathmage
Banned Posts: 2,496
in CCNA & CCENT
Hey all,
So my new job has only Dell 2824 switches as the network fabric; they have about six of them deployed, four in the server room and two as IDFs.
I've been going over the switches the past couple of days, and I think the network needs to be upgraded, since they are seeing latency, bandwidth, and transfer-speed issues.
Right now it's all one giant VLAN, and I'd like to segment the network quite a bit, namely putting servers, printers, and wireless on their own VLANs, since the majority of the traffic today is server-to-server traffic from the terminal servers to the SQL/file server, with some desktop traffic.
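To give a rough idea, the kind of segmentation I'm picturing would look something like this in generic IOS-style syntax (the VLAN IDs and port numbers are just placeholders for illustration; exact commands vary by platform, and some older Catalysts also need "switchport trunk encapsulation dot1q" before the trunk mode command):

vlan 10
 name SERVERS
vlan 20
 name PRINTERS
vlan 30
 name WIRELESS
!
! example access port for a server
interface GigabitEthernet1/0/10
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast
!
! example uplink trunk carrying the tagged VLANs to an IDF switch
interface GigabitEthernet1/0/48
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30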
Right now I'm weighing two options: HP and Cisco.
HP:
Looking into the HP 2920 48G switches; I'd need three of them for the server room, built into a stack, plus two of the 24G models for the IDF locations. One of the 48Gs would be the L3 switch (assuming I can designate one switch in the stack as the L3 master), and the other two would just be glorified L2 switches, even though they can do L3. I like the HP CLI quite a bit, and at that price point they're pretty feature-rich.
Cisco:
Looking into a stack of three Cisco 3850 48G switches running as L2 in the server room, with two Cisco 3850 24G switches as L2 in the IDF locations, connecting back to a backbone Cisco 6800ia switch in the server room doing the L3. We have a need for wireless in the future, so the fact that the 3850s have a built-in wireless controller is pretty sweet.
My question for you guys: is there anything in particular about the redesign of this network that I might be overlooking? I will obviously be doing tons of testing before I move the network over. I'm just curious whether you've had good or bad experiences with the above switches and know of anything I should look out for. I have a strong understanding of what I want and what needs to be done, but I'm sure I'll forget something, so any pointers would be appreciated. This would be my first very large-scale network redesign.
Comments
-
Hondabuff
Member Posts: 667
The 3850s with the built-in controller will cut down on the traffic crossing the network, because the CAPWAP tunnels terminate at the switch instead of traversing the network to a central controller. That works great in a campus design, and for not hauling wireless traffic across the VPN from branch offices. I still like having a single VM controller, though.
The rule of thumb is that the maximum number of hosts on a VLAN should not exceed about 500 devices, but it depends on the devices; you usually see the subnets broken up into a /23 design, and you might only get 300 hosts before things get saturated.
Are you more comfortable with CLI or GUI for your administration? Are you currently using TACACS for authentication, or local only? What kind of backbone is connecting the switches? What size GBICs are you going with if it's multimode fiber? Have you looked at Enterasys over HP yet?
I'm kind of a Cisco snob and work in a full Cisco shop. I have worked with a few other brands and sometimes get stuck on a configuration when I know the Cisco way of doing it. When you run into a question about a Cisco configuration, there is a lot of help out there for support. Some thoughts for you.
“The problem with quotes on the Internet is that you can’t always be sure of their authenticity.” ~Abraham Lincoln
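For a rough idea, a /23 user VLAN on a routed switch would look something like this in generic IOS-style syntax (the VLAN ID, name, and addresses are just placeholders, and the HP 2920 CLI differs slightly, so treat it as a sketch rather than copy-paste config):

vlan 100
 name USERS
!
! a /23 mask (255.255.254.0) gives 510 usable host addresses
interface Vlan100
 ip address 10.10.100.1 255.255.254.0
! relay DHCP to a (placeholder) server on another VLAN
 ip helper-address 10.10.10.5
 no shutdown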
-
networker050184
Mod Posts: 11,962
I find the design methodology of a lot of people flawed. Not you in particular, but I see most networking professionals trying to design a network that the applications can fit onto, rather than the other way around. We just had an entire meeting of engineers pitching redesign ideas, new DC fabrics, etc., yet no one has talked to the systems guys once. Will the network you are designing help them do their job better, or are you looking for a cool new network that you want to play with?
Ask these (and more) questions: What are the applications? What do they need? How much bandwidth? Redundancy methods? Acceptable downtime in failure scenarios? What needs to talk to what? What are the security concerns when they do talk? What current network limitations, if overcome, would help productivity?
Gather all of that info and THEN design a network to fit it. Present a design to management that actually solves problems. That is how you get buy-in, budget, etc.
Good luck!
An expert is a man who has made all the mistakes which can be made.
-
Reibe
Member Posts: 56
Why are you trying to go the new-network route? The Dell switches are gigabit and support VLANs, so why not adjust what you have now to make it better? Budget-wise, even the limited amount of equipment in your designs is going to add up quicker than you may think, and to more than management wants to spend.
From your description, it sounds less like a network hardware issue and more like a design/configuration problem.
My Suggestions:
- Implement VLANs to divide traffic.
- Make sure none of your endpoints got stuck at 10 Mbps / half-duplex (see the sketch after this list).
- Set up basic QoS if necessary. (I wouldn't worry about it unless you have voice traffic.)
- Analyze traffic to see if your performance issues are actually the network and not something else (like server hardware).
- Fix your network and be seen as "the IT guy who saves us money instead of forcing $20K+ in unexpected expenditures".
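For the duplex check, on IOS-style gear it would be something along these lines (I believe the 2824 is web-managed, so you'd do the equivalent in its GUI; the interface name is a placeholder):

show interfaces status
! look for ports that negotiated to a-10 or a-half
!
interface GigabitEthernet1/0/5
 speed auto
 duplex auto

A port hard-coded on one end while the other end autonegotiates is the classic cause of a half-duplex mismatch, so keep both ends consistent.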
Also, I'm not sure how much experience you have with network implementations, but whatever you choose, just be sure not to bite off more than you can chew.
*Edited for Clarity* -
Deathmage
Banned Posts: 2,496
Thanks for the feedback, guys. I'll go over this more thoroughly tomorrow.
The main reason for the network upgrade is sizing. All the switches are at capacity with no ports left to grow into, and the 24-port switches only have 48 Gbps of switching capacity each.
The web GUI on the current switches isn't really workable with current Java and keeps crashing. I'd much rather have fully managed switches with a real CLI than trust a GUI.
I don't necessarily want to buy new switches, but with plans to grow out the VMware cluster, and end users craving performance while we keep costs low, I really think having a core switching fabric will help. Right now it's one giant L2 /24 across six 24-port 2824 switches.
I have plans for iDRAC, management, printer, wireless, and server VLANs, and I want a switching fabric that can sustain performance, growth, and scalability while keeping a low TCO.
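Roughly, the L3 side I'm picturing would be something like this in generic IOS-style syntax (the VLAN IDs and subnets are placeholders I'm making up for illustration, not a final addressing plan):

ip routing
!
vlan 20
 name IDRAC
vlan 30
 name MGMT
!
! one SVI per VLAN on whichever box ends up doing the routing
interface Vlan20
 ip address 10.0.20.1 255.255.255.0
 no shutdown
interface Vlan30
 ip address 10.0.30.1 255.255.255.0
 no shutdown

with the same pattern repeated for the printer, wireless, and server VLANs.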
As far as systems vs. network goes, my role covers systems, networking, and VMware, so I'm looking at it from many angles. Being in the tail end of my CCENT studies, I have a good grasp of what needs to be done. I just want to plan the crap out of it and do it correctly.