Rate-limiting BCOP?

adamv0025 at netconsultings.com
Sun May 31 14:36:55 UTC 2020


> Saku Ytti
> Sent: Friday, May 22, 2020 7:52 AM
> 
> On Thu, 21 May 2020 at 22:11, Bryan Holloway <bryan at shout.net> wrote:
> 
> > I've done all three on some level in my travels, but in the past it's
> > also been oftentimes vendor-centric which hindered a scalable or
> > "templateable" solution. (Some things police in only one direction, or
> > only well in one direction, etc.)
> 
> Further complication, let's assume you are all-tomahawk on ASR9k.
> Let's assume TenGigE0/1/2/3/4 as a whole is pushing 6Gbps of traffic across all
> VLANs, everything is in-contract, and nothing is being dropped for any VLAN in
> any class. Now VLAN 200 gets a DDoS attack of 20Gbps coming from a single
> backbone interface, i.e. we are offering that tengig interface 26Gbps of
> traffic. What will happen is that all VLANs start dropping packets QoS-unaware:
> 12.5Gbps is being dropped by the ingress NPU, which knows neither which VLAN
> the traffic is going to nor the QoS policy on the egress VLAN.
>
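
(Quick arithmetic on the numbers above, as a Python sketch; the fabric-overspeed explanation for the gap is my own guess, not something from Saku's mail:)

offered_gbps = 6.0 + 20.0                # in-contract traffic plus the attack = 26Gbps
dropped_at_ingress_gbps = 12.5           # Saku's figure
delivered_gbps = offered_gbps - dropped_at_ingress_gbps   # 13.5Gbps
# My assumption: the ~13.5Gbps that survives is roughly the 10G port rate
# plus some fabric overspeed; the ingress NPU blindly sheds the rest.
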
Hmm, is that so?
Shouldn't the egress FIA/NPU be issuing fabric grants (via the central arbiters) to the ingress FIA/NPU for any of the VOQs, right up to the egress NPU's processing capacity, i.e. for as long as the egress NPU can still cope with the overall pps rate (the rate from the fabric plus the rate from its "edge" interfaces), subject to ingress NPU fairness of course?
Or, in other words, shouldn't all or most of the 26Gbps end up on the egress NPU, since it most likely has the pps processing capacity to handle the packets at the rate they are arriving, and then decide for each packet, based on local classification and the egress queuing policy, whether to enqueue it or drop it?
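
Here's a toy model, in Python, of how I read that grant logic. The 60Gbps NPU capacity figure and the function names are purely illustrative assumptions, not vendor code:

EGRESS_NPU_CAPACITY_GBPS = 60.0   # assumed processing headroom of the egress NPU
offered_from_fabric_gbps = 26.0   # 6G of in-contract traffic + 20G of DDoS

def arbiter_grants(offered, npu_capacity):
    """Central arbiter grants fabric access up to the egress NPU's
    processing capacity; only the excess is back-pressured and held
    (or dropped) at the ingress VOQs."""
    granted = min(offered, npu_capacity)
    held_back = offered - granted
    return granted, held_back

granted, held = arbiter_grants(offered_from_fabric_gbps, EGRESS_NPU_CAPACITY_GBPS)
print(f"granted to egress NPU: {granted}Gbps, back-pressured at ingress: {held}Gbps")
# -> all 26Gbps reaches the egress NPU, which can then drop the excess
#    QoS-aware, per VLAN and per class.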

Looking at my notes (from discussions with Xander Thuijs and Aleksandar Vidakovic):
- Each 10G entity is represented by one VQI = 4 VOQs (one VOQ for each priority level).
- The trigger for the back-pressure is the utilisation level of the RFD buffers.
- RFD buffers hold the packets while the NP microcode is processing them; per BRKSPG-2904, the more feature processing a packet goes through, the longer it stays in the RFD buffers.
- RFD buffers are the from-fabric feeder queues; fabric-side back-pressure kicks in if the RFD queues are more than 60% full.

So, according to the above, if the egress NPU is powerful enough to deal with 26Gbps of traffic coming from the fabric in addition to whatever business-as-usual duties it's performing (i.e. RFD queue utilisation stays below 60%), then no drops should occur on the ingress NPU (the one the DDoS traffic enters on).
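
As a sketch of that reasoning in Python (the 60% threshold is from the notes above; the pps rates, buffer size, and averaging window are all assumptions of mine):

RFD_BACKPRESSURE_THRESHOLD = 0.60   # from the notes above

def rfd_utilisation(arrival_pps, service_pps, buffer_pkts, window_s):
    """Rough occupancy estimate: packets arriving faster than the NP
    microcode drains them accumulate in the RFD (from-fabric) buffers."""
    backlog_pkts = max(0.0, (arrival_pps - service_pps) * window_s)
    return min(1.0, backlog_pkts / buffer_pkts)

def fabric_backpressure(utilisation):
    return utilisation > RFD_BACKPRESSURE_THRESHOLD

# Egress NPU keeping up with the from-fabric rate: no back-pressure,
# hence no QoS-unaware drops at the ingress NPU.
print(fabric_backpressure(rfd_utilisation(20e6, 25e6, 8000, 0.001)))   # False
# Heavy feature processing slows the microcode down: RFD fills past 60%,
# the fabric back-pressures the ingress VOQs, and blind ingress drops begin.
print(fabric_backpressure(rfd_utilisation(20e6, 12e6, 8000, 0.001)))   # True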
      

> So VLAN100 starts to see NC, AF, BE, LE drops, even though the offered rate in
> VLAN100 remains in-contract in all classes.
> To mitigate this to a degree on the backbone side of the ASR9k you need to set
> the VoQ priority; you have 3 priorities. You could choose, for example, BE P2,
> NC+AF P1 and LE Pdefault. Then, if the attack traffic to VLAN200 is recognised
> and classified as LE, we will only see VLAN100's LE dropping (as well as every
> other VLAN's LE) instead of all the classes.
> 
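
To illustrate the effect of that mapping, a toy model in Python (the per-class rates and the 13.5Gbps deliverable figure are my assumptions; the NC+AF=P1, BE=P2, LE=Pdefault mapping is Saku's example):

VOQ_PRIORITY = {"NC": "P1", "AF": "P1", "BE": "P2", "LE": "Pdef"}

def deliver_under_backpressure(offered_gbps, deliverable_gbps):
    """Drain the VOQs highest-priority first into the deliverable budget;
    whatever doesn't fit is dropped, starting with the Pdefault VOQ."""
    delivered, remaining = {}, deliverable_gbps
    for prio in ("P1", "P2", "Pdef"):
        for cls, gbps in offered_gbps.items():
            if VOQ_PRIORITY[cls] == prio:
                delivered[cls] = min(gbps, max(0.0, remaining))
                remaining -= delivered[cls]
    return delivered

offered = {"NC": 0.5, "AF": 3.0, "BE": 2.5, "LE": 20.0}   # LE carries the DDoS
print(deliver_under_backpressure(offered, deliverable_gbps=13.5))
# -> NC, AF and BE get through untouched; only LE is trimmed (12.5Gbps
#    dropped), across every VLAN sharing the congested 10G VQI.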




