Gigabit to the Desktop

Eildor Member Posts: 444
I'm guessing it's a good idea to run gigabit to the desktop if you are doing a design from scratch; but is it a good idea to actually enable gigabit on the switch ports connecting to those desktops?

My concern is that if you have 48 workstations running gigabit connected to a single switch, couldn't that potentially cause a bottleneck elsewhere? Say your uplink is a two-gigabit EtherChannel: all it takes is two workstations starting a large file transfer to fully utilise that link. That means two workstations have just taken up two gigabits of bandwidth to your server, just like that. Isn't that a potential problem? Wouldn't limiting the access ports to Fast Ethernet be safer, especially when there's no particular need for gigabit speeds to the desktop in most cases?
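
As a rough sanity check of that scenario, here is a back-of-the-envelope sketch in Python using only the hypothetical numbers above (a 2 x 1 Gb EtherChannel uplink, clients transferring at full line rate, no protocol overhead):

UPLINK_GBPS = 2.0  # 2 x 1 GbE EtherChannel from the access switch

def clients_to_saturate(access_gbps):
    """How many clients transferring at full line rate it takes to fill the uplink."""
    return UPLINK_GBPS / access_gbps

print(clients_to_saturate(1.0))   # gigabit to the desktop -> 2.0 clients
print(clients_to_saturate(0.1))   # Fast Ethernet          -> 20.0 clients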

Thanks.

Comments

  • networker050184 Mod Posts: 11,962
    I wouldn't worry about limiting them in that way. No matter what you do, you're usually going to have a bottleneck somewhere in your network.
    An expert is a man who has made all the mistakes which can be made.
  • ptilsen Member Posts: 2,835 ■■■■■■■■■■
    I agree with networker. Intentionally limiting desktop throughput is a bad way to try to avoid network bottlenecks.

    Realistically, if you are looking at largely client-server operations and can afford a disk system that can utilize more than GbE bandwidth, you can afford to have 10GbE connections between switches and servers. It's not that expensive anymore, and if you're in an environment where it will truly be used, it's worth it.

    If, as you say, there's no particular need for GbE at the desktop level, then it shouldn't be a problem anyway.
    Working B.S., Computer Science
    Complete: 55/120 credits SPAN 201, LIT 100, ETHS 200, AP Lang, MATH 120, WRIT 231, ICS 140, MATH 215, ECON 202, ECON 201, ICS 141, MATH 210, LING 111, ICS 240
    In progress: CLEP US GOV,
    Next up: MATH 211, ECON 352, ICS 340
  • Eildor Member Posts: 444
    networker050184 wrote: »
    I wouldn't worry about limiting them in that way. No matter what you do, you're usually going to have a bottleneck somewhere in your network.

    If you had them running at 100 megs, it would take 20 devices doing large file transfers before the link was fully utilised... which I guess is very unlikely.

    If you did it the way you said, wouldn't you also be more open to DoS attacks?
  • Eildor Member Posts: 444
    ptilsen wrote: »
    I agree with networker. Intentionally limiting desktop throughput is a bad way to try to avoid network bottlenecks.

    Realistically, if you are looking at largely client-server operations and can afford a disk system that can utilize more than GbE bandwidth, you can afford to have 10GbE connections between switches and servers. It's not that expensive anymore, and if you're in an environment where it will truly be used, it's worth it.

    If, as you say, there's no particular need for GbE at the desktop level, then it shouldn't be a problem anyway.

    So in your opinion you should enable gigabit regardless of whether you need it, whilst still having, say, 2-gigabit uplinks and 10 gigabit to the servers?
  • networker050184 Mod Posts: 11,962
    Sure, you could be open to something, but what are the chances a desktop application actually gets up to a gig of utilization? There are plenty of other factors to consider here besides link speed, as ptilsen pointed out.
    An expert is a man who has made all the mistakes which can be made.
  • ptilsen Member Posts: 2,835 ■■■■■■■■■■
    I have never, in production, found myself compelled to manually configure the link speed of a switch outside of troubleshooting a problem. If a gigabit switch was purchased, I leave the ports on gigabit.
    Working B.S., Computer Science
    Complete: 55/120 credits SPAN 201, LIT 100, ETHS 200, AP Lang, MATH 120, WRIT 231, ICS 140, MATH 215, ECON 202, ECON 201, ICS 141, MATH 210, LING 111, ICS 240
    In progress: CLEP US GOV,
    Next up: MATH 211, ECON 352, ICS 340
  • QHalo Member Posts: 1,488
    If they're going to move large files, they're going to do it whether the switchport is 100Mb or 1000Mb. Would you rather have them move a large file over 100Mb or 1000Mb? Think about that one for a minute and it should answer your question.
  • Geek1969 Member Posts: 100 ■■□□□□□□□□
    I had these same thoughts before building our student infrastructure on the college campus where I work currently. 800-1,000 students on a campus network that is entirely gigabit. The only place I ever see anything remotely close to a bottleneck is when they are all streaming Netflix or Pr0n over a 45 Mb DS3 internet connection. I have 10 Gb uplinks from dist to core, but single 1 Gb uplinks from access to dist. Routing at the access-layer switches also helps us limit broadcasts. We are upgrading the internet pipe to 100 Mb soon. I don't even hear any complaints the way it is now.
    WIP:
    ROUTE
  • Eildor Member Posts: 444
    QHalo wrote: »
    If they're going to move large files, they're going to do it whether the switchport is 100Mb or 1000Mb. Would you rather have them move a large file over 100Mb or 1000Mb? Think about that one for a minute and it should answer your question.

    I feel more comfortable limiting them to 100Mb to be honest, but I guess I'm wrong. Thanks for all your input! :) Much appreciated.
  • 7of9 Member Posts: 76 ■■■□□□□□□□
    WAN links or internet bandwidth are usually where bottlenecks occur. I've never worked anywhere that intentionally limited bandwidth on an access port. Now, at an ISP, we limit bandwidth on ports all the time.
    Working on Security+ study, then going back to re-do my Cisco Certs, in between dodging moose and riding my Harley
  • PurpleIT Member Posts: 327
    I am on a fairly powerful machine with fast drives moving data over a gigabit network, and I typically can only get 20% network utilization on my machine due to bottlenecks elsewhere (not design flaws, just the nature of the beast). The only place I can really come close to max capacity on a gigabit port is on my servers that have stupid-fast drives, and even then I have multiple ports, so I am still only closing in on 25% of the actual bandwidth available.

    There are exceptions, of course, but unless you have a lot of users moving files that are hundreds of megs non-stop, you should be fine. Remember, most individual users' traffic is rather bursty, so it all averages out.
    WGU - BS IT: ND&M | Start Date: 12/1/12, End Date 5/7/2013
    What next, what next...
  • QHalo Member Posts: 1,488
    Eildor wrote: »
    I feel more comfortable limiting them to 100Mb to be honest, but I guess I'm wrong. Thanks for all your input! :) Much appreciated.

    Let me clarify further. Theoretically (no overhead, perfect world, rainbows, unicorns, etc.) you're going to get 12.5 MB/s on 100Mb and 125 MB/s on 1000Mb. So moving a 100MB file is going to take roughly 8 seconds on 100Mb; on 1000Mb you're going to move it in roughly 1 second. Which would you rather they do it on, so it results in less congestion?
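
    To make those numbers concrete, here is a quick sketch of the same arithmetic in Python (theoretical line rates only, no overhead):

    FILE_MB = 100  # size of the hypothetical file being copied

    for link_mbps in (100, 1000):
        mb_per_sec = link_mbps / 8.0   # megabits per second -> megabytes per second
        seconds = FILE_MB / mb_per_sec
        print(f"{link_mbps} Mb/s link: {mb_per_sec:.1f} MB/s, ~{seconds:.1f} s per {FILE_MB} MB file")
    # prints ~8.0 s at 100 Mb/s and ~0.8 s at 1000 Mb/s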
  • ptilsen Member Posts: 2,835 ■■■■■■■■■■
    QHalo, although I agree with you, let me play devil's advocate.

    If you have a server with GbE and a storage system capable of 150MB/s sustained throughput, and you initiate a copy of a 2GB file, it will completely saturate the server's connection and make all other operations slow to a crawl. If you had ten other users all working with smaller file operations, say 1-5MB each, who are used to those completing in less than a second, their experience would be dramatically impacted for 20 seconds or so.

    By comparison, if every user were limited to 100Mbps or any other arbitrary amount, no single user could saturate the connection, which would guarantee at least some level of usable performance for everyone. In that specific example, it would ensure they all have the full performance they are used to.

    I'm just trying to explain where the OP is coming from, and why he's not necessarily wrong. However, that example is extremely contrived, and realistically no client-side bandwidth limitation is going to result in an overall improvement in that sort of environment over the long term. Overall, providing more bandwidth means transfers complete faster and have less impact on other transfers.
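
    To put rough numbers on that trade-off, here is a crude Python sketch using the hypothetical figures from the example above (no TCP fair-sharing or protocol overhead modelled):

    SERVER_LINK_MBPS = 1000   # server on GbE; its storage can sustain more than the NIC
    BIG_COPY_MB = 2000        # the 2GB file copy
    CLIENT_CAP_MBPS = 100     # the per-client limit being debated

    def copy_seconds(rate_mbps):
        """Time to move BIG_COPY_MB at the given rate (theoretical, no overhead)."""
        return BIG_COPY_MB / (rate_mbps / 8.0)

    # Uncapped: the copy finishes in ~16 s but can monopolise the whole server link.
    print(f"uncapped: ~{copy_seconds(SERVER_LINK_MBPS):.0f} s, 0 Mb/s guaranteed for other users")

    # Capped: the copy takes ~160 s, but 900 Mb/s of the server link stays free.
    print(f"capped:   ~{copy_seconds(CLIENT_CAP_MBPS):.0f} s, "
          f"{SERVER_LINK_MBPS - CLIENT_CAP_MBPS} Mb/s left for other users")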
    Working B.S., Computer Science
    Complete: 55/120 credits SPAN 201, LIT 100, ETHS 200, AP Lang, MATH 120, WRIT 231, ICS 140, MATH 215, ECON 202, ECON 201, ICS 141, MATH 210, LING 111, ICS 240
    In progress: CLEP US GOV,
    Next up: MATH 211, ECON 352, ICS 340
  • pitviper Member Posts: 1,376 ■■■■■■■□□□
    Jeez, we can't get the funding to upgrade ALL of the distro switches to GigE, and here you have the hardware but want to cripple it! I've personally never seen this done. As others have stated, actual speeds are nowhere near what's advertised anyway.
    CCNP:Collaboration, CCNP:R&S, CCNA:S, CCNA:V, CCNA, CCENT
  • PurpleIT Member Posts: 327
    ptilsen wrote: »
    QHalo, although I agree with you, let me play devil's advocate.

    I'm just trying to explain where the OP is coming from, and why he's not necessarily wrong. However, that example is extremely contrived, and realistically no client-side bandwidth limitation is going to result in an overall improvement in that sort of environment over the long term. Overall, providing more bandwidth means transfers complete faster and have less impact on other transfers.


    I agree with what you say completely (in the theoretical sense of your post), but I think what this really leads to is the importance of the network people understanding what the system people and the users are really doing.

    If you have a group of users who are copying 2GB files back and forth all day, not only do you have to look at how you design the network, including the possibility of partitioning those users off from a network perspective, but you also have to get the systems guys to design their stuff so it can handle that workload without bogging down their machines.
    WGU - BS IT: ND&M | Start Date: 12/1/12, End Date 5/7/2013
    What next, what next...
  • 7of9 Member Posts: 76 ■■■□□□□□□□
    Instead of outright limiting the bandwidth on your access ports, why not prioritize traffic with QoS? Traffic shaping and policing only take effect as congestion occurs, so you're only enforcing limits when you actually need them. In addition, your users' high-priority traffic keeps its priority rather than having all of their traffic cut across the board. Plus, you can place the shaping and/or policing where the bottlenecks actually are.

    This is what most modern networks do, and it's a more intelligent, sophisticated way to handle it than simply dumbing a gigabit switch down into a Fast Ethernet switch. This thread seems to be a "why QoS was invented" discussion in the making. ;)
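
    For a feel of the mechanism behind policing and shaping, here is a minimal token-bucket sketch in Python (a generic illustration of the idea, not any vendor's actual implementation or configuration):

    import time

    class TokenBucket:
        """Admit traffic up to a contracted rate while allowing short bursts."""

        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0     # refill rate in bytes per second
            self.capacity = burst_bytes    # maximum burst allowance
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def conforms(self, packet_bytes):
            """True if the packet fits the traffic profile, False if it exceeds the rate."""
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True
            return False

    # Example: hold a flow to 100 Mb/s with a 1.5 MB burst allowance.
    policer = TokenBucket(rate_bps=100_000_000, burst_bytes=1_500_000)
    for _ in range(3):
        ok = policer.conforms(1500)                  # one full-size Ethernet frame
        print("forward" if ok else "drop or queue")  # police = drop, shape = delay

    Traffic under the contracted rate passes untouched; only the excess during a sustained burst gets dropped (policing) or delayed (shaping).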
    Working on Security+ study, then going back to re-do my Cisco Certs, in between dodging moose and riding my Harley
  • Eildor Member Posts: 444
    Apologies for disappearing. I have read through all comments, and I appreciate all of the help. I definitely need to read up on QoS.
  • 7of9 Member Posts: 76 ■■■□□□□□□□
    The original exam materials for ONT (the old CCNP exam that included QoS) were a good place to start. :) QoS is definitely nifty stuff.
    Working on Security+ study, then going back to re-do my Cisco Certs, in between dodging moose and riding my Harley