register.com down sev0?

Patrick W. Gilmore patrick at ianai.net
Thu Oct 26 04:59:30 UTC 2006


On Oct 26, 2006, at 12:14 AM, alex at pilosoft.com wrote:
> On 26 Oct 2006, Paul Vixie wrote:
>
>> i wonder if that's due to the spam they've been sending out?
> Paul, this isn't nanae. Let's not sling accusations like that wildly.

Accusations and objective facts are two separate things.


>> there is no zone anywhere, including COM, the root zone, or any other,
>> that is immune from worst-case DDoS.  anycast all you want.  diversify.
>> build a name service infrastructure larger than the earth's moon.  none
>> of that will matter as long as OPNs (the scourge of internet robustness)
>> still exist.
> This isn't 2001, and I will argue that it *is*, in fact, possible to be
> protected from a "worst case" ddos, and not at an obscene price.

You are mistaken.


> However, even if you argue that point, there's no excuse for not being
> prepared at all, and not following the BCP. While we all may be guilty
> of not having topologically/geographically diverse DNS - for someone
> whose core business is DNS, that's inexcusable.

We agree.


>>> Given that register.com is/was public (I think?) - I wonder what are
>>> their sarbox auditors saying about it now ;)
>>
>> that's an easy but catty criticism, and baseless.  i'm sure that some
>> way could be found to improve register.com's infrastructure, and i don't
>> just mean by stopping the spamming they've been doing.  but it's not
>> trivial and in the face of well-tuned worst-case DDoS, nothing will
>> help.
> Well, let's talk about "worst-case ddos". Let's say, 50Mpps (I have not
> heard of a ddos larger than that number). Let's say, you can sink/filter
> 100kpps on each box (not unreasonable on a higher-end box with nsd). That
> means, you should be able to filter this attack with ~500 servers,
> appropriately place. Say, because you don't know where the attack will
> come in, you need 4 times the estimated number of servers, that's
> 2000 servers. That's not an entirely unreasonable number for a large
> enough company.

Even assuming your numbers, which I do not grant, you are still mistaken.
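
Spelling out your own arithmetic in a quick Python sketch, taking the
50Mpps attack rate and the 100kpps-per-box filtering figure at face
value (they are your assumptions, not established facts):

    attack_pps     = 50 * 10**6   # assumed worst-case attack rate
    per_server_pps = 100 * 10**3  # assumed sink/filter capacity per box
    margin         = 4            # headroom because attack ingress is unknown

    servers = margin * attack_pps // per_server_pps
    print(servers)                # -> 2000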

There is no single "appropriately[sic] place" which can absorb 50Mpps.
If you meant "appropriately placed" (as in topologically dispersed
locations), a well-crafted attack could still guarantee _at least_ a
partial DoS from an end user's PoV.

It is essentially impossible to distinguish end-user requests from
(im)properly created DoS packets (especially until BCP38 is widely
adopted - i.e. probably never).  Since there is no single place - no
13 places - which can withstand a well-crafted DoS, you are guaranteed
that some users will not be able to reach any of your listed authorities.
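
For anyone who has not read it, BCP38 boils down to a check like the
toy Python sketch below at the customer-facing edge: drop anything
whose source address is not in the prefixes you actually route toward
that port.  (The prefixes and addresses here are documentation
examples, not anyone's real allocation.)

    import ipaddress

    # Prefixes actually assigned/routed to this customer port (example values).
    customer_prefixes = [ipaddress.ip_network("192.0.2.0/24"),
                         ipaddress.ip_network("198.51.100.0/25")]

    def bcp38_permit(src_ip):
        """Permit a packet only if its source lies in the customer's prefixes."""
        src = ipaddress.ip_address(src_ip)
        return any(src in net for net in customer_prefixes)

    print(bcp38_permit("192.0.2.17"))   # True  -- plausible source, forwarded
    print(bcp38_permit("203.0.113.9"))  # False -- spoofed source, dropped

Until something like that runs at essentially every edge, the authority
servers themselves have no way to tell a spoofed query from a real one.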

This is not speculation; it is fact.  All a good provider can do, even
with 1000s of servers, is minimize the impact of any DoS.

Oh, and putting 2K servers into the "right" places is not a trivial
expense, even for a large company.  Last time I checked, 10GE pipes were
not handed out for free.  And you can't just rack these things in a
mom-and-pop colo saying "well, it has a GigE on the motherboard" when
the colo has an OC3 to the 'Net.  The Cap- and Op-Ex involved in doing
what you suggest properly is probably prohibitive for a company like
register.com.
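
Some rough pipe math, assuming something on the order of 100 bytes on
the wire per minimal DNS query (the packet size is my guess; the 50Mpps
figure is yours):

    attack_pps    = 50 * 10**6       # the 50Mpps figure from above
    bytes_per_pkt = 100              # assumed minimal DNS query + headers
    attack_bps    = attack_pps * bytes_per_pkt * 8

    oc3_bps = 155.52 * 10**6         # OC3 line rate

    print(attack_bps / 10**9)        # ~40.0 Gbps of attack traffic
    print(attack_bps / oc3_bps)      # ~257 OC3s' worth at a single site

An attacker does not have to spread that evenly, either; concentrating
even a few percent of it on one site fills an OC3 many times over.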


> I know that the above was just a rough back-of-the-envelope, and things
> are far more complicated than that, but this discussion does not really
> belong on nanog-l.

We disagree.  Keeping large name servers running is _absolutely_ a
network operations topic.  Not only is the defense mostly network-based
(since the network is the most likely thing to break), but network
operators are also the people who get the phone calls when DNS does break.

-- 
TTFN,
patrick
