But good job spotting this. I bet most people, including myself, just gloss over it without really thinking it through.
So it's pointless, as well as bad practice, to cap BE at a certain rate, especially such a low rate of 5%.
However, I believe this slide is more meant to show CoS usage on the FE link between the two routers, where there is a lot more bandwidth available.
rossonieri#1 wrote: » Why do I think the opposite? The way I see it, the point of using the *exact* keyword for BE is so that BE won't take the other queues' bandwidth allocation, and hence doesn't overlap with the other queues?
rossonieri#1 wrote: » Good point. Now I'm thinking: IF, and that is IF, we use a 100 Mbps link, doesn't that 1 Mbps limit for the students become pointless? And here's the other thing that keeps me thinking: does that source/destination count globally, say on a per-subnet basis, or does it actually apply as a per-host limitation, like a /32?
rossonieri#1 wrote: » This is something I've run into on occasion: ping latency. In the professors & students case study above it's a common problem. Let's say the students like to ping their destination web site to test their connectivity. Now, will the students' pings go into the network-control queue, or will they be carried in the students' queue?
rossonieri#1 wrote: » My analysis is that IF we put the pings in the network-control queue, the pretty obvious result is that the students will see *lower* ping latency compared to putting their pings in their own queue, simply because network-control has high priority and *unlimited bandwidth*? Am I interpreting this correctly?
On a J-series router this means that the BE traffic would never take bandwidth away from other queues while those queues are using it, even if those other queues were out of profile. Hence there is no possible way BE traffic could starve out the other queues.
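To make the difference concrete, here is a rough sketch of the two BE scheduler variants being debated. The scheduler and map names, the fe-0/0/1 interface, and everything except the 5% figure are placeholders I made up for the example:

    class-of-service {
        schedulers {
            /* with "exact" the queue is hard-capped: BE never exceeds 5%, even on an idle link */
            be-capped {
                transmit-rate percent 5 exact;
                buffer-size percent 5;
                priority low;
            }
            /* without "exact" BE is guaranteed 5% but may borrow leftover bandwidth */
            be-shared {
                transmit-rate percent 5;
                buffer-size percent 5;
                priority low;
            }
        }
        scheduler-maps {
            fe-link-map {
                forwarding-class best-effort scheduler be-shared;
            }
        }
        interfaces {
            fe-0/0/1 {
                scheduler-map fe-link-map;
            }
        }
    }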
What do you mean by the "source/destination count as a global"? From the example it's any host that sends traffic to/from a certain subnet.
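In other words, the match is on the whole prefix, not on individual hosts. A minimal sketch, assuming a made-up 10.10.0.0/24 student subnet and a hypothetical "students" forwarding class: every host inside the /24 hits the same term, so they all land in the same queue and share the one transmit rate, rather than getting it per /32:

    class-of-service {
        forwarding-classes {
            /* hypothetical class for the student subnet */
            class students queue-num 1;
        }
        schedulers {
            student-sched {
                /* the 1 Mbps from the case study, shared by the whole /24 */
                transmit-rate 1m;
            }
        }
    }
    firewall {
        family inet {
            filter classify-students {
                /* any source inside 10.10.0.0/24 matches here */
                term students {
                    from {
                        source-address {
                            10.10.0.0/24;
                        }
                    }
                    then {
                        forwarding-class students;
                        accept;
                    }
                }
                term everything-else {
                    then accept;
                }
            }
        }
    }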
Yes, if the pings go into the NC queue, which normally has high priority, then the pings will have much lower latency. NC traffic is normally capped at around 5-10% of the bandwidth, but there isn't normally a lot of NC traffic anyway, so anything that goes in there will see very low latency.
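For reference, the NC scheduler in that kind of setup typically looks something like this (the name and the exact percentage are just illustrative): a small guaranteed share, but high priority so it gets serviced ahead of the bulk queues:

    class-of-service {
        schedulers {
            /* small share, but serviced before the low-priority queues */
            nc-sched {
                transmit-rate percent 5;
                buffer-size percent 5;
                priority high;
            }
        }
    }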
I'm sorry, but seriously though, I thought only the *exact* keyword would make a strict queue? And yes, there are differences between M-series and J-series queue priority transmission. Man, when I re-read those pages over and over, they really are different.
OK, so what do you think is the better way to get lower ping latency here? Do it with the queue priority scheduler, or would MPLS TE be the perfect tool to achieve the goal? Hmm, I think it depends on the overlay network, don't you think?
By strict queue I'm assuming that you mean that the bw is capped, right?
I would say that if you wanted low ping latency even though the traffic is coming from the student subnet, which might be prone to congestion, the way to do it would be to either throw the ping packets into the NC queue or create a new queue for this purpose.
Also, it would probably be best to only put these pings in the NC queue if they are coming from the student subnet, not going to it; that way you avoid a possible DoS attack.
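Something along these lines, reusing the made-up 10.10.0.0/24 student subnet from above and a hypothetical fe-0/0/0 student-facing interface. Because the filter is applied only on input there, it catches pings *from* the students but leaves traffic sent *to* them alone:

    firewall {
        family inet {
            filter student-ping-to-nc {
                /* ICMP sourced from the student subnet goes into the NC queue */
                term icmp-from-students {
                    from {
                        source-address {
                            10.10.0.0/24;
                        }
                        protocol icmp;
                    }
                    then {
                        forwarding-class network-control;
                        accept;
                    }
                }
                term everything-else {
                    then accept;
                }
            }
        }
    }
    interfaces {
        fe-0/0/0 {
            unit 0 {
                family inet {
                    filter {
                        input student-ping-to-nc;
                    }
                }
            }
        }
    }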
I would be a little wary about implementing MPLS; it might be a fun lab to try, but it would be adding another layer of complexity that is unnecessary. It sure would be fun to play with, though.
Why would I prefer to do it for both directions, both from and to the students' subnet? I mean, if we only put the echo-request in the NC queue without putting the echo-reply in that queue as well, won't that still create higher latency, since the reply will go through the capped students' bandwidth?