MPLS Circuit Latency

cisco_trooper Member Posts: 1,441
I'm starting to look at private circuits to establish connectivity between some new locations. Due to the application traffic that will be traversing these circuits I need an extremely low latency solution. I have yet to deploy an MPLS circuit so I'm curious about the latency on these types of circuits. What experiences do you all have with latency on MPLS circuits?

Comments

  • dtlokee Member Posts: 2,378
    What do you mean by "extremely low latency"? Any circuit traversing from the east coast to the west coast will have an inherent 30-40ms propagation delay simply due to the distances involved. On more localized installations (less than 100 miles) you can expect to see 12-30ms of delay, plus whatever the serialization delay is based on the speed of the link.
    The only easy day was yesterday!
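
    For anyone sanity-checking those figures, here's a rough back-of-the-envelope version of that math (a sketch; the ~5 microseconds/km figure for light in fiber and the 1500-byte frame are assumptions, and real fiber routes run longer than straight-line distance):

        # Rough one-way latency components: propagation + serialization.
        # Light in fiber travels at roughly 2/3 c, i.e. about 5 us per km (assumed).

        def propagation_delay_ms(path_km: float, us_per_km: float = 5.0) -> float:
            """One-way propagation delay in milliseconds over a fiber path."""
            return path_km * us_per_km / 1000.0

        def serialization_delay_ms(frame_bytes: int, link_bps: float) -> float:
            """Time to clock one frame onto the wire, in milliseconds."""
            return frame_bytes * 8 / link_bps * 1000.0

        print(propagation_delay_ms(4500))             # coast to coast: ~22.5ms one way
        print(propagation_delay_ms(48))               # ~30 miles: ~0.24ms one way
        print(serialization_delay_ms(1500, 1.544e6))  # T1: ~7.8ms per frame
        print(serialization_delay_ms(1500, 1e9))      # GigE: ~0.012ms per frame

    Note how serialization, not propagation, dominates on slow links, which is why the link speed matters as much as the mileage.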
  • cisco_trooper Member Posts: 1,441
    I'm looking at 20-30 miles maximum distance, depending on the colo facilities I end up choosing. We've got some proprietary applications that don't function well when we see even small amounts of latency. Right now these applications are all hosted on site, but we've outgrown the building and are looking to separate ourselves from it so we can put end users anywhere we want. My biggest fear in all of this is introducing latency we've not seen before. I'm honestly leaning toward a Layer 1 technology such as GigaMAN to minimize propagation delay. I'm not sure what kind of serialization delay I'll see on something like this over such a short distance, and I can't imagine the ISP will need too many devices along the path of a GigaMAN to get my traffic where it needs to go. I honestly think 30ms will be too high: we're used to never going over 2ms RTT on the LAN, and the LAN is over-provisioned. The last thing I want is to have CIFS throughput per TCP stream cut down tenfold.
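
    That CIFS fear is really a TCP window versus RTT problem: a single stream can never move more than one window per round trip, so per-stream throughput is capped at window/RTT. A quick sketch of that ceiling (the 64KB window is an assumption, typical of CIFS stacks without window scaling):

        # Single-stream TCP throughput ceiling: window_bytes / RTT.
        def max_stream_mbps(window_bytes: int, rtt_ms: float) -> float:
            """Best-case throughput for one TCP stream, in Mbps."""
            return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

        WINDOW = 64 * 1024  # assumed 64KB window, no window scaling
        for rtt in (2, 10, 30):
            print(f"RTT {rtt}ms: {max_stream_mbps(WINDOW, rtt):.0f} Mbps")
        # RTT 2ms:  262 Mbps
        # RTT 10ms: 52 Mbps
        # RTT 30ms: 17 Mbps  <- roughly the tenfold drop feared above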
  • shodown Member Posts: 2,271
    Have you looked into WAN acceleration, like Riverbed appliances or WAAS modules? Those can possibly help with your latency problem when you do move to a colo.
    Currently Reading: CUCM SRND 9x/10, UCCX SRND 10x, QOS SRND, SIP Trunking Guide, anything contact center related
  • dead_p00l Member Posts: 136
    Will the circuit be traversing multiple providers or just one? We recently built out a large EoMPLS network for a customer with multiple sites above the 30-mile mark and were able to keep latency to around 10ms. Even the circuits where we had to involve a second provider were under 20ms.
    This is our world now... the world of the electron and the switch, the beauty of the baud.
  • cxzar20 Member Posts: 168
    shodown wrote: »
    Have you looked into WAN acceleration, like Riverbed appliances or WAAS modules? Those can possibly help with your latency problem when you do move to a colo.

    From my experience, it is EXTREMELY important to have your DNS set up perfectly; otherwise WAN optimization will cause havoc.
  • cisco_trooper Member Posts: 1,441
    dead_p00l wrote: »
    Will the circuit be traversing multiple providers or just one? We recently built out a large EoMPLS network for a customer with multiple sites above the 30-mile mark and were able to keep latency to around 10ms. Even the circuits where we had to involve a second provider were under 20ms.

    There will be two circuits, each from a different provider, with diverse entry points into the impacted facilities. I believe 10ms would be acceptable based on the throughput calculations I've performed against the highest known single-TCP-stream throughput requirement. I've put out an RFP asking providers to indicate the latency guarantees they can make.
    shodown wrote: »
    Have you looked into WAN acceleration, like Riverbed appliances or WAAS modules? Those can possibly help with your latency problem when you do move to a colo.

    Not yet. I have some F5 GTM and LTM products but haven't looked at the WAN optimization modules yet. Raw bandwidth itself will not be an issue; I'm only concerned with latency at this point, and I'm not sure WAN optimization and acceleration can address that. I'm not intimately familiar with those technologies because I've yet to have a need, but I'll definitely be looking into them to see if there is anything I can benefit from implementing.
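
    Turning the earlier window math around for the RFP: given a required single-stream rate and an assumed window size, you can solve for the worst RTT you can tolerate (a sketch; the window and the 50 Mbps requirement are placeholders, not the actual figures from the calculations mentioned above):

        # Worst tolerable RTT for a required single-stream rate: rtt = window / rate.
        def max_tolerable_rtt_ms(window_bytes: int, required_mbps: float) -> float:
            """Largest RTT that still allows the required per-stream rate."""
            return window_bytes * 8 / (required_mbps * 1e6) * 1000.0

        WINDOW = 64 * 1024   # bytes; assumed 64KB window, no window scaling
        REQUIRED = 50.0      # Mbps per TCP stream; placeholder requirement
        print(f"{max_tolerable_rtt_ms(WINDOW, REQUIRED):.1f}ms")  # ~10.5ms

    Under those placeholder numbers the answer lands right around the 10ms figure judged acceptable above.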
  • DPG Member Posts: 780
    It will really depend on how you are routed through the ISP's MPLS network.

    I have connections from Houston to Dallas that run around 7ms, but other inter-city connections are much higher (25ms).
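
    Since routing through the provider cloud is the wild card, it's worth measuring the delivered circuit rather than trusting the quoted number. A minimal probe (a sketch; the hostname is a placeholder, and TCP connect time includes end-host overhead, so treat it as an upper bound on path RTT):

        import socket
        import time

        def tcp_rtt_ms(host: str, port: int = 445, samples: int = 5) -> float:
            """Median TCP connect time in ms -- a rough stand-in for path RTT."""
            times = []
            for _ in range(samples):
                start = time.perf_counter()
                with socket.create_connection((host, port), timeout=2):
                    pass  # connect() returns after SYN/SYN-ACK, i.e. ~1 RTT
                times.append((time.perf_counter() - start) * 1000.0)
            return sorted(times)[len(times) // 2]

        print(tcp_rtt_ms("fileserver.example.net"))  # placeholder host across the circuit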
  • vinbuck Member Posts: 785
    cisco_trooper wrote: »
    There will be two circuits, each from a different provider, with diverse entry points into the impacted facilities. I believe 10ms would be acceptable based on the throughput calculations I've performed against the highest known single-TCP-stream throughput requirement. I've put out an RFP asking providers to indicate the latency guarantees they can make.

    cisco_trooper wrote: »
    Not yet. I have some F5 GTM and LTM products but haven't looked at the WAN optimization modules yet. Raw bandwidth itself will not be an issue; I'm only concerned with latency at this point, and I'm not sure WAN optimization and acceleration can address that. I'm not intimately familiar with those technologies because I've yet to have a need, but I'll definitely be looking into them to see if there is anything I can benefit from implementing.

    The biggest difference is going to be the transport medium underneath the Ethernet (I'm assuming you're talking about Ethernet). Is it going to be native Ethernet, SONET, etc.? On native Ethernet, you should easily be sub-10ms for the distances you're talking about; if it is interworked through SONET or some other TDM long-haul circuit, you may be looking at 10-15ms, due more to the interworking than the distance.

    Interworking Layer 2 technologies is where most of your longer latency comes in, because many long-haul routes for the big boys are still ATM or SONET. Distance comes into play, but it's distance plus the amount of interworking that really determines the provider-side latency. This all assumes proper traffic engineering at Layer 3 and no suboptimal routing.
    Cisco was my first networking love, but my "other" router is a Mikrotik...
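
    Pulling the thread's numbers together, provider-side latency is roughly additive, which makes a simple budget model useful when comparing native Ethernet quotes against interworked ones (a sketch; the per-hop and interworking figures are illustrative assumptions, not provider specs):

        # Additive latency budget: propagation + serialization + hops + interworking.
        def latency_budget_ms(path_km: float, link_bps: float, frame_bytes: int = 1500,
                              hops: int = 4, per_hop_ms: float = 0.05,
                              interworking_ms: float = 0.0) -> float:
            prop = path_km * 5.0 / 1000.0              # ~5 us/km in fiber (assumed)
            ser = frame_bytes * 8 / link_bps * 1000.0  # clocking the frame onto the wire
            return prop + ser + hops * per_hop_ms + interworking_ms

        # ~30-mile (48km) metro path at 1Gbps, native vs. SONET-interworked (assumed 8ms):
        print(latency_budget_ms(48, 1e9))                       # ~0.5ms native Ethernet
        print(latency_budget_ms(48, 1e9, interworking_ms=8.0))  # ~8.5ms with interworking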