ask : CoS bandwidth allocation
rossonieri#1
Member Posts: 799 ■■■□□□□□□□
in Juniper
hi guys,
I've read the JNCIS-ER material, page 9-32, about the student/professor bandwidth allocation scenario, and I'm a bit confused - could I get some help here?
On the right side of the slide, my understanding is this:
Suppose R1 and R2 are connected by a T1 link (1.5 Mbps) - the breakdown would be:
network-control 5%
professors 50%, may use leftover BW
students 40%, may use leftover BW = approx. 600 kbps
and Best Effort gets 5% fixed BW, no leftover = 75 kbps
total 100%
On the left side of the slide,
the requirement is to keep the student traffic at no more than 1 Mbps - if it exceeds that, then put it under BE.
What I don't get is this:
If the professors only utilize 10% of their BW allocation, the students can still take 80% of the BW, hence a total of 1200 kbps = 200 kbps of excess.
The BE maximum is 75 kbps, so where will the remaining 125 kbps go?
Or will the students never get the professors' BW allocation? I don't think so, because I think that's the reason the requirement states *may use leftover BW*.
Or am I missing something here?
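For reference, this is roughly how I picture the right side of the slide as a config - just a sketch of my own understanding, and the scheduler names, forwarding-class names, queue numbers and interface are made up by me, not taken from the material:

class-of-service {
    forwarding-classes {
        /* queue numbers and class names are my own guesses */
        queue 0 best-effort;
        queue 1 students;
        queue 2 professors;
        queue 3 network-control;
    }
    schedulers {
        nc-sched {
            transmit-rate percent 5;
            buffer-size percent 5;
            priority high;
        }
        prof-sched {
            /* 50%, may use leftover BW */
            transmit-rate percent 50;
            buffer-size percent 50;
            priority low;
        }
        stud-sched {
            /* 40%, may use leftover BW (approx. 600 kbps on a T1) */
            transmit-rate percent 40;
            buffer-size percent 40;
            priority low;
        }
        be-sched {
            /* fixed 5%, no leftover, because of "exact" (approx. 75 kbps) */
            transmit-rate percent 5 exact;
            buffer-size percent 5;
            priority low;
        }
    }
    scheduler-maps {
        campus-map {
            forwarding-class network-control scheduler nc-sched;
            forwarding-class professors scheduler prof-sched;
            forwarding-class students scheduler stud-sched;
            forwarding-class best-effort scheduler be-sched;
        }
    }
    interfaces {
        t1-0/0/1 {
            scheduler-map campus-map;
        }
    }
}

The way I read it, it's the "exact" keyword on the BE scheduler that pins BE to its 75 kbps, while the other classes may borrow leftover BW.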
Any ideas would be appreciated,
thank you.
the More I know, that is more and More I dont know.
Comments
rossonieri#1 Member Posts: 799 ■■■□□□□□□□
OK, after reading the material pages backwards, I think I have found the answer on page 9-23: buffer size and RED (random early detection).
So, beyond that 75 kbps BE maximum limit (the fixed 5% of the total BW), the students' 125 kbps of excess will be silently dropped by the RED mechanism.
I need to do more reading on adjusting the RED parameters (custom drop profile).
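Something like this is what I have in mind for the custom drop profile - just a rough sketch on my part; the profile name, fill levels and drop probabilities are numbers I made up, not from the material:

class-of-service {
    drop-profiles {
        stud-red {
            /* start dropping student traffic early as the queue fills up */
            fill-level 70 drop-probability 25;
            fill-level 90 drop-probability 75;
            fill-level 100 drop-probability 100;
        }
    }
    schedulers {
        stud-sched {
            transmit-rate percent 40;
            buffer-size percent 40;
            priority low;
            /* attach the RED profile to the students scheduler */
            drop-profile-map loss-priority any protocol any drop-profile stud-red;
        }
    }
}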
Or am I still missing something here?
the More I know, that is more and More I dont know.
Aldur Member Posts: 1,460
Very keen observation there - the math doesn't exactly work out with the T1 links. In my opinion the example they use in the slides is a bad one. I have never seen anybody anywhere cap BE traffic. BE traffic is the traffic that you just don't really care about, so it gets serviced last. Being such low priority, BE traffic only uses link BW when no other traffic is on the line. So it's pointless, as well as bad practice, to cap BE at a certain rate, especially a rate as low as 5%.
RED might handle the excess traffic like you said, but RED only comes into play when the entire link reaches congestion, not just when one queue reaches congestion. And currently, with no drop profiles configured, the RED algorithm is running at 100% fill/100% drop, so basically only tail drops would be happening when the interface is saturated.
However, I believe that this slide is really meant to show CoS usage on the FE link between the two routers, where there is a lot more BW available.
But good job in spotting this - I bet most people, including myself, just glaze over it without really thinking it through.
"Bribe is such an ugly word. I prefer extortion. The X makes it sound cool."
-Bender
rossonieri#1 Member Posts: 799 ■■■□□□□□□□
aldur,
Aldur wrote: »
But good job in spotting this - I bet most people, including myself, just glaze over it without really thinking it through.
hahaha, looks like you've studied this section very well,
while I'm the opposite - that 100-page chapter reads like a thick book.
Aldur wrote: »
So it's pointless, as well as bad practice, to cap BE at a certain rate, especially a rate as low as 5%.
Why do I think the opposite?
I see the point of using the *exact* keyword for BE as making sure BE won't take the other classes' BW allocation, so it doesn't overlap with the other queues.
I'm still studying the buffer size correlation with RED, though.
Aldur wrote: »
However, I believe that this slide is really meant to show CoS usage on the FE link between the two routers, where there is a lot more BW available.
Good point.
Now I'm thinking - IF, that is IF, we use a 100 Mbps link, doesn't that 1 Mbps limit for the students become pointless? Or - and this is the one that keeps me thinking too - does that source/destination count as global, say on a per-subnet basis, or does it actually apply as a per-host limitation, like per /32?
This is something I've encountered on some occasions: ping latency.
From the professors & students case study above, it's a common problem - let's say the students like to ping their destination web site to test their connectivity. Now, will the students' ping go under the network-control queue, or will the ping be carried in the students queue?
My analysis is that IF we put the ping under the network-control queue, there will be a pretty obvious result: the students will have *lower* ping latency compared to putting their ping under their own queue, simply because network-control has high priority and *unlimited BW*. Am I correct in interpreting this?
the More I know, that is more and More I dont know.
Aldur Member Posts: 1,460
rossonieri#1 wrote: »
Why do I think the opposite?
I see the point of using the *exact* keyword for BE as making sure BE won't take the other classes' BW allocation, so it doesn't overlap with the other queues.
Say you have 4 forwarding classes, each mapped to a queue on a 1-to-1 basis, with the following priority settings for the schedulers and without the exact keyword on the BE scheduler:
EF - high
AF - medium-high
NC - high
BE - low
On a J router this means that the BE traffic would never use BW from other queues if those queues are using that BW, even if those other queues were out of profile. Hence there is no possible way BE traffic could starve out the other queues.
Take the same scenario and put it on an M or T router and the results differ slightly. It does matter now whether the queues go out of profile, but not as much as you may think. BE can only start taking BW from other queues once they are out of profile - using more bandwidth than they are configured for - and not before. So even at this point it's very unlikely, pretty much impossible, that BE traffic could starve out the other queues.
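Roughly, in config terms, that setup would look something like the sketch below - the scheduler names and transmit rates are just placeholders I'm making up for the example:

class-of-service {
    schedulers {
        ef-sched {
            transmit-rate percent 30;
            priority high;
        }
        af-sched {
            transmit-rate percent 30;
            priority medium-high;
        }
        nc-sched {
            transmit-rate percent 5;
            priority high;
        }
        be-sched {
            /* no "exact" here - BE can only use whatever BW is left over */
            transmit-rate percent 35;
            priority low;
        }
    }
}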
Now the higher-priority queues definitely have a strong tendency to starve the lower-priority queues, and some sort of shaping may be needed, through the use of exact, policers, and/or shaping-rates.
rossonieri#1 wrote: »
Good point.
Now I'm thinking - IF, that is IF, we use a 100 Mbps link, doesn't that 1 Mbps limit for the students become pointless? Or - and this is the one that keeps me thinking too - does that source/destination count as global, say on a per-subnet basis, or does it actually apply as a per-host limitation, like per /32?
Heh, well, now that I think about it my point sucks - the CoS would be pointless since we are only talking about 1 Mbps or so of traffic.
What do you mean by the "source/destination count as global"? From the example it's any host that sends traffic to/from a certain subnet.
rossonieri#1 wrote: »
This is something I've encountered on some occasions: ping latency.
From the professors & students case study above, it's a common problem - let's say the students like to ping their destination web site to test their connectivity. Now, will the students' ping go under the network-control queue, or will the ping be carried in the students queue?
If the ping is sourced from the students subnet then it will be placed in the students queue, not the NC queue.
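Just to make it concrete, here is a rough sketch of what the multifield classification might look like - the subnet addresses, filter and policer names are made up. Anything sourced from the student subnet, pings included, lands in the students queue, and the slide's 1 Mbps cap could be done with a policer that demotes the excess to best-effort:

firewall {
    policer students-1m {
        if-exceeding {
            bandwidth-limit 1m;
            burst-size-limit 15k;
        }
        /* out-of-profile student traffic gets pushed down into BE */
        then forwarding-class best-effort;
    }
    family inet {
        filter classify-campus {
            term students {
                from {
                    source-address 172.16.20.0/24;   /* made-up student subnet */
                }
                then {
                    forwarding-class students;
                    policer students-1m;
                    accept;
                }
            }
            term professors {
                from {
                    source-address 172.16.10.0/24;   /* made-up professor subnet */
                }
                then {
                    forwarding-class professors;
                    accept;
                }
            }
            term everything-else {
                /* anything unmatched stays in best-effort by default */
                then accept;
            }
        }
    }
}

The filter would then be applied as an input filter on the LAN-facing interface unit.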
rossonieri#1 wrote: »
My analysis is that IF we put the ping under the network-control queue, there will be a pretty obvious result: the students will have *lower* ping latency compared to putting their ping under their own queue, simply because network-control has high priority and *unlimited BW*. Am I correct in interpreting this?
Yes - if the pings go into the NC queue, which normally has high priority, then the pings will have much lower latency. NC traffic is normally "capped" around 5-10% of BW, but there isn't normally a lot of NC traffic anyhow, so anything that goes in there will have very low latency.
"Bribe is such an ugly word. I prefer extortion. The X makes it sound cool."
-Bender
rossonieri#1 Member Posts: 799 ■■■□□□□□□□
hi aldur,
thanks for the explanation btw.
Aldur wrote: »
On a J router this means that the BE traffic would never use BW from other queues if those queues are using that BW, even if those other queues were out of profile. Hence there is no possible way BE traffic could starve out the other queues.
I'm sorry, but seriously though - I thought only that *exact* keyword would make a strict queue? And yes, there are differences between M and J series queue priority transmission - man, when I re-read the pages over and over, they really are different.
Aldur wrote: »
What do you mean by the "source/destination count as global"? From the example it's any host that sends traffic to/from a certain subnet.
Ya, that's what I thought - my bad, sorry.
By counting as global traffic I meant all the students' accumulated traffic going under the student subnet.
Aldur wrote: »
Yes - if the pings go into the NC queue, which normally has high priority, then the pings will have much lower latency. NC traffic is normally "capped" around 5-10% of BW, but there isn't normally a lot of NC traffic anyhow, so anything that goes in there will have very low latency.
OK, so what do you think is better to do to get lower ping latency in this case? Do it with the queue priority scheduler, or would some MPLS TE be the perfect tool to achieve the goal?
Hmm, I think it depends on the overlay network, don't you think?
the More I know, that is more and More I dont know.
Aldur Member Posts: 1,460
rossonieri#1 wrote: »
I'm sorry, but seriously though - I thought only that *exact* keyword would make a strict queue? And yes, there are differences between M and J series queue priority transmission - man, when I re-read the pages over and over, they really are different.
Just remember that queues of higher priority will take precedence over queues with lower priority. This is especially true with J series routers - silly J's, why can't they just act like the other Juniper routers?
rossonieri#1 wrote: »
OK, so what do you think is better to do to get lower ping latency in this case? Do it with the queue priority scheduler, or would some MPLS TE be the perfect tool to achieve the goal?
Hmm, I think it depends on the overlay network, don't you think?
I would be a little wary about implementing MPLS - it might be a fun lab to try, but that might be adding another layer of complexity that is unnecessary. Sure would be fun to play with, though.
"Bribe is such an ugly word. I prefer extortion. The X makes it sound cool."
-Bender
rossonieri#1 Member Posts: 799 ■■■□□□□□□□
hi aldur,
Aldur wrote: »
By strict queue I'm assuming that you mean that the bw is capped, right?
Yup.
Aldur wrote: »
I would say that if you wanted to get low ping latency, even though the traffic might be coming from the student subnet, which might be prone to congestion, the way to do it would be to either throw the ping packets into the NC queue or create a new queue for this purpose.
Ya, that would probably work best - putting the pings under their own queue.
And this:
Aldur wrote: »
Also, it would probably be best to only put these pings in the NC queue if they are coming from the student subnet, not going to the student subnet, so as to avoid a possible DoS attack.
Why would I prefer to do it for both directions - both from and to the students subnet? I mean, if we only put the echo-request in the NC queue without putting the echo-reply in that specific queue, won't that also create higher latency, since the reply will go under the capped students BW?
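Just to show what I mean - a rough sketch only, with the subnet address and names made up by me: both the echo-request leaving the students and the echo-reply coming back to them get lifted into the NC queue:

firewall {
    family inet {
        filter ping-to-nc {
            term echo-request-from-students {
                from {
                    source-address 172.16.20.0/24;   /* made-up student subnet */
                    protocol icmp;
                    icmp-type echo-request;
                }
                then {
                    forwarding-class network-control;
                    accept;
                }
            }
            term echo-reply-to-students {
                from {
                    destination-address 172.16.20.0/24;
                    protocol icmp;
                    icmp-type echo-reply;
                }
                then {
                    forwarding-class network-control;
                    accept;
                }
            }
            term everything-else {
                /* in practice these terms would sit above the normal
                   student/professor classification terms in the main filter */
                then accept;
            }
        }
    }
}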
Aldur wrote: »
I would be a little wary about implementing MPLS - it might be a fun lab to try, but that might be adding another layer of complexity that is unnecessary. Sure would be fun to play with, though.
hehehe, yes - that would probably be overdoing it.
That was just a passing thought in my head - well, I guess I should start to focus on the subject.
Anyway, thank you for the explanations - it's a huge help really.
the More I know, that is more and More I dont know.
Aldur Member Posts: 1,460
rossonieri#1 wrote: »
Why would I prefer to do it for both directions - both from and to the students subnet? I mean, if we only put the echo-request in the NC queue without putting the echo-reply in that specific queue, won't that also create higher latency, since the reply will go under the capped students BW?
But definitely good thinking there - the echo-replies destined for the students' subnet can definitely go into the NC queue, and there won't be the possible DoS attack problem you might see otherwise.
"Bribe is such an ugly word. I prefer extortion. The X makes it sound cool."
-Bender
rossonieri#1 Member Posts: 799 ■■■□□□□□□□
Speaking of DoS,
after seeing you write that word over and over, another thought came to my head:
why did I leave that part of the topic untouched?
I mean, of course that is the real purpose of the CoS on the slide - to minimize DoS.
Man.
That is, if we put echo-requests from the outside world (that is, echo-requests going *to* the student subnet) into the same queue as the echo-replies, it will make the CoS pointless for providing DDoS protection - so any echo-request from the internet to the student subnet should go under the student queue, where the echo-requests get limited.
Nice point you have there aldur, thank you.
the More I know, that is more and More I dont know.