UDP clamped on service provider links
Tom Sands
tsands at rackspace.com
Fri Jul 31 02:44:37 UTC 2015
We have similar problems keeping IPsec tunnels up over Level3 due to loss on UDP 500 (IKE). It happens quite a bit, even when there are no signs of TCP or ICMP packet loss.
Sent from my iPhone
> On Jul 30, 2015, at 9:14 PM, Jason Baugher <jason at thebaughers.com> wrote:
>
> To bring this discussion to specifics, we've been fighting an issue where
> our customers are experiencing poor audio quality on SIP calls. The only
> carrier between our customers and the hosted VoIP provider is Level3. From
> multiple Wireshark captures, it appears that a certain percentage of UDP
> packets - in this case RTP - are getting lost somewhere in the Level3
> network. We've got a ticket open with Level3, but haven't gotten far yet.
> Has anyone else seen Level3 or other carriers rate-limiting UDP and
> breaking these legitimate services?
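[Editor's note: a minimal sketch of the loss measurement described above. RTP loss can be estimated from the 16-bit RTP sequence numbers in a capture (e.g. exported from Wireshark's RTP analysis). The function name and the sample sequence list are illustrative, not from the thread.]

```python
# Estimate RTP loss from a list of captured RTP sequence numbers.
# RTP sequence numbers are 16-bit and wrap around at 65536.

def rtp_loss_percent(seqs):
    """Return the percentage of RTP packets missing from a capture."""
    if len(seqs) < 2:
        return 0.0
    received = len(seqs)
    expected = 1  # count the first packet
    prev = seqs[0]
    for seq in seqs[1:]:
        # Modulo arithmetic handles the 16-bit wraparound in the gap.
        expected += (seq - prev) % 65536
        prev = seq
    lost = expected - received
    return 100.0 * lost / expected

# Example capture spanning a sequence-number wrap: seq 1 and 4 are missing.
print(rtp_loss_percent([65534, 65535, 0, 2, 3, 5]))  # prints 25.0 (2 of 8 lost)
```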
>
>> On Thu, Jul 30, 2015 at 3:45 PM, John Kristoff <jtk at cymru.com> wrote:
>>
>> On Mon, 27 Jul 2015 19:42:46 +0530
>> Glen Kent <glen.kent at gmail.com> wrote:
>>
>>> Is it true that UDP is often subjected to stiffer rate limits than
>>> TCP?
>>
>> Yes, although I'm not sure how widespread this is; probably not very
>> widely deployed today, but restrictions and limitations only seem to
>> expand rather than recede.
>>
>> I've done this, and not just for UDP, in a university environment. I
>> implemented this at the time the Slammer worm came out, on all the
>> ingress interfaces of user-facing subnets. It was meant as a more
>> general solution to "capacity collapse" rather than strictly as a
>> security measure, because we were also struggling with capacity-filling
>> apps like Napster at the time, but Slammer was the tipping point. To
>> summarize what we did for aggregate rates from host subnets (these were
>> generally 100 Mb/s IPv4 /24-/25 LANs):
>>
>> ICMP: 2 Mb/s
>> UDP: 10 Mb/s
>> MCAST: 10 Mb/s (separate UDP group)
>> IGMP: 2 Mb/s
>> IPSEC: 10 Mb/s (esp - can't ensure flow control of crypto traffic)
>> GRE: 10 Mb/s
>> Other: 10 Mb/s for everything else except for TCP
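[Editor's note: a minimal token-bucket sketch of the per-protocol aggregate limits in the table above. The class, rates, and protocol keys are illustrative; a real deployment would use hardware policers on the router, not host code. TCP is deliberately absent from the table, matching the poster's "no limits for TCP" policy.]

```python
# Token-bucket policer: forward traffic up to rate_mbps, drop the excess.

class TokenBucket:
    def __init__(self, rate_mbps):
        self.rate = rate_mbps * 1_000_000   # tokens (bits) per second
        self.burst = self.rate // 10        # allow ~100 ms of burst
        self.tokens = self.burst
        self.last = 0.0

    def allow(self, now, packet_bits):
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True   # forward
        return False      # drop

# One policer per protocol class, mirroring the table above.
policers = {
    "icmp": TokenBucket(2), "udp": TokenBucket(10), "mcast": TokenBucket(10),
    "igmp": TokenBucket(2), "ipsec": TokenBucket(10), "gre": TokenBucket(10),
    "other": TokenBucket(10),
}

def police(proto, now, packet_bits):
    """Apply the per-protocol limit; unlisted protocols (TCP) pass freely."""
    bucket = policers.get(proto)
    return True if bucket is None else bucket.allow(now, packet_bits)
```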
>>
>> If traffic was staying local within the campus network, limits did not
>> apply. There were no limits for TCP traffic. We generally did not
>> apply limits to well defined and generally well managed server subnets.
>> We were aware that certain measurement tools might produce misleading
>> results, a trade-off we were willing to accept.
>>
>> As far as I could tell, the limits generally worked well and helped
>> minimize Slammer and more general problems. If ISPs could implement a
>> similar mechanism, I think it could still be a reasonable approach
>> today - perhaps more necessary than ever. But a big part of the
>> problem is that the networks where you'd really want to see this sort
>> of thing implemented won't do it.
>>
>>> Is there a reason why this is often done so? Is this because UDP
>>> is stateless and any script kiddie could launch a DOS attack with a
>>> UDP stream?
>>
>> Statelessness, the lack of sender verification, and the fact that UDP
>> and most other commonly used protocols besides TCP do not generally
>> react to implicit congestion signals (usually drops).
>>
>>> Given the state of affairs these days how difficult is it going to be
>>> for somebody to launch a DOS attack with some other protocol?
>>
>> There have been ICMP-based attacks, and there are others, at least in
>> theory if not commonly in practice, such as IGMP-based attacks. There
>> have been numerous DoS (single D) attacks against TCP-based services
>> precisely because of weaknesses or difficulties in managing unexpected
>> TCP session behavior. The potential sending capacity of even a small
>> set of hosts from around the globe, whether UDP, TCP or another
>> protocol, could easily overwhelm many points of aggregation. All it
>> takes is for an attacker to coerce a sufficient subset of hosts into
>> sending the packets.
>>
>> John
>>
More information about the NANOG mailing list