D/DoS mitigation hardware/software needed.

George Bonser gbonser at seven.com
Sun Jan 10 06:29:33 UTC 2010

> Firewalls are not designed to mitigate large scale DDoS, 

Generally speaking, if it didn't bring the firewall to its knees, it
wasn't a DoS.  It was just sort of an annoying attempt at a DoS.

I think that is more or less the definition of a DoS: one that exploits
the resource limitations of the firewall to deny service to everything
behind it.  The ultimate DoS, though, is simply filling the pipe with
traffic from "legitimate" data transfer requests.  Nothing you do is
going to mitigate that, because the only way to stop it is to deny
service yourself.

Imagine thousands of requests per second from all around the internet
for a legitimate URL.  How do you use a firewall to separate the wheat
from the chaff?  Say you have some client software that you want
people to download.  Suddenly you are getting more download requests than
you can handle.  Nobody is flooding you with SYN or ICMP packets.  They
are sending a single packet (a legitimate URL request) that results in you
sending thousands of packets to real IP addresses that are simply
copying the traffic to what amounts to /dev/null.  Now when your
download server gets slow, things get worse because connections begin to
take longer to clear.  The kernel on the web server is able to handle
the TCP/IP setup fairly quickly, but getting the file actually shipped
out takes time.  As connections build up on the firewall, it finally
reaches a point where it runs out of RAM for storing all those NAT
translations and connection state.
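
The arithmetic behind that state buildup is roughly Little's law:
entries in the table at once = request rate x how long each connection
is held open.  A back-of-the-envelope sketch (all numbers here are
illustrative, not from any particular firewall):

```python
# Hedged sketch of why slow downloads exhaust a firewall's state table.
# By Little's law, steady-state table occupancy is approximately
# arrival_rate * holding_time.  All figures below are hypothetical.

def concurrent_state_entries(requests_per_sec, avg_download_secs):
    """Approximate steady-state connection-table occupancy."""
    return requests_per_sec * avg_download_secs

STATE_TABLE_CAPACITY = 1_000_000  # assumed firewall limit

# Healthy server: 2,000 req/s, each download clears in 30 seconds.
healthy = concurrent_state_entries(2_000, 30)        # 60,000 entries

# Overloaded server: same request rate, but transfers now take 10 minutes.
overloaded = concurrent_state_entries(2_000, 600)    # 1,200,000 entries

print(healthy, overloaded, overloaded > STATE_TABLE_CAPACITY)
```

Note that the request rate never changed; the table blows up purely
because the downloads got slower.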

Now you start noticing that services not under attack are starting to
slow down, because the firewall has to sort through an increasingly large
connection table when doing stateful inspection of traffic going to
other services.  All the while, there really isn't anything the firewall
can do to mitigate the traffic, because it is all correct and legitimate.
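
The collateral damage to unattacked services comes from sharing one
state table.  A toy illustration (this is a naive linear scan, not any
real firewall's data structure, and the names are made up):

```python
# Illustrative sketch: when every flow is matched against one shared
# connection table, the attack's entries tax EVERY lookup, including
# flows for services that are not under attack at all.

def lookup_cost(table, flow):
    """Comparisons a naive linear scan spends finding one flow's entry."""
    for i, entry in enumerate(table):
        if entry == flow:
            return i + 1
    return len(table)  # worst case: scanned everything, no match

# 500,000 connections from the download flood, plus one entry for an
# unrelated mail flow that happens to sit at the end of the table.
table = [("download", n) for n in range(500_000)] + [("mail", 1)]
print(lookup_cost(table, ("mail", 1)))  # 500001 comparisons
```

Real firewalls use hash tables rather than linear scans, but memory
pressure, cache misses, and table-maintenance overhead produce the same
qualitative effect: everyone behind the box pays for the attack's state.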

Basically you are being Slashdotted or experiencing the Drudge Effect
but in this case you are being botnetted.

If you have the server capacity to keep up, now your outbound pipe to
the Internet fills up and you start dropping packets.  TCP connections
back off, connections back up even more, and at some point the firewall
gives up and fails over to the secondary, which promptly fails back to
the primary.  You bounce back and forth in that state for a while until
finally it just gets hung someplace and the whole thing is stuck.  And
during the entire incident there was no "illegal traffic" that your
firewall could have done a thing to block.
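
The flapping happens because the HA pair shares the same overload: the
moment either unit becomes active, it inherits the same load and trips
the same failure condition.  A minimal sketch of that loop (hypothetical
thresholds, no real HA protocol modeled):

```python
# Hedged sketch of HA failover flapping under sustained overload.
# Whichever unit is active exceeds capacity and hands off to its peer,
# which immediately faces the identical load and hands back.

def simulate_failover(load, capacity, steps=6):
    active = "primary"
    history = []
    for _ in range(steps):
        history.append(active)
        if load > capacity:  # the active unit is overwhelmed, so it fails over
            active = "secondary" if active == "primary" else "primary"
    return history

print(simulate_failover(load=1_200_000, capacity=1_000_000))
# -> ['primary', 'secondary', 'primary', 'secondary', 'primary', 'secondary']
```

Failover only helps when the failure is in the box, not in the traffic.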

Oh, and rate limiting connections isn't going to fix things either,
unless you can do it on a per-URL basis.  The rate of requests for
/really-big-file.tgz that clogs your system may be very different from
the rate of requests for /somewhat-smaller-file.tgz or /index.html.
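
Per-URL limiting means keeping a separate budget for each path, e.g. a
token bucket per URL.  A rough sketch of the idea (the class, its limits,
and the unlimited-by-default policy are all my own illustration, not a
feature of any particular product):

```python
import time

# Hypothetical sketch of per-URL rate limiting with token buckets: each
# path refills at its own rate, so a cheap page and an expensive tarball
# get independent budgets instead of one global connection limit.

class PerUrlRateLimiter:
    def __init__(self):
        # url -> [tokens, last_refill, rate_per_sec, burst]
        self.buckets = {}

    def configure(self, url, rate_per_sec, burst):
        self.buckets[url] = [float(burst), time.monotonic(),
                             rate_per_sec, burst]

    def allow(self, url):
        if url not in self.buckets:
            return True  # unconfigured URLs are unlimited in this sketch
        bucket = self.buckets[url]
        now = time.monotonic()
        # Refill tokens for elapsed time, capped at the burst size.
        bucket[0] = min(bucket[3], bucket[0] + (now - bucket[1]) * bucket[2])
        bucket[1] = now
        if bucket[0] >= 1.0:
            bucket[0] -= 1.0
            return True
        return False

limiter = PerUrlRateLimiter()
limiter.configure("/really-big-file.tgz", rate_per_sec=5, burst=10)
limiter.configure("/index.html", rate_per_sec=500, burst=1000)
```

The catch, of course, is that this has to run somewhere that can parse
the URL, i.e. the web server or a proxy, not a stateful firewall that
only sees packets.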
