[arin-announce] ARIN Resource Certification Update

Christopher Morrow morrowc.lists at gmail.com
Tue Jan 25 06:25:49 UTC 2011


On Mon, Jan 24, 2011 at 11:52 PM, Roland Dobbins <rdobbins at arbor.net> wrote:
>
> On Jan 25, 2011, at 11:35 AM, Christopher Morrow wrote:
>
>> thinking of using DNS is tempting
>
>
> The main arguments I see against it are:
>
> 1.      Circular dependencies.

in the end though... if you depend upon something off-box to get you
going, you're screwed.

What makes that slightly better, in the case of the planned work so
far (the sidr work), is that the router feeds from an operator-decided
location (direct-link, pop-local, region-local, network-local,
neighbor-network). At initial boot time (and for a long time,
probably) having 'valid' routes is less important than having 'some
routes'. 'Get routing up' first, then 'secure-ify things': that, I
think, is the goal.

Operators can then ratchet the validation knob down on their own
schedule, once they decide 'validated only' is a good choice.
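For what it's worth, here's a minimal sketch of what that knob might
look like, assuming the three-state outcome the sidr origin-validation
work defines (valid / not-found / invalid); the policy names and the
code itself are mine, purely illustrative, not any vendor's actual
implementation:

# Illustrative only: a toy model of an origin-validation policy knob.
# The three states (VALID / NOT_FOUND / INVALID) follow the sidr/RPKI
# origin-validation model; the policy names are made up for this sketch.
from enum import Enum

class ValidationState(Enum):
    VALID = "valid"          # covered by a ROA, origin AS matches
    NOT_FOUND = "not_found"  # no covering ROA at all
    INVALID = "invalid"      # covered by a ROA, but origin/length mismatch

# Operator-chosen knob, loosest to strictest.
POLICIES = {
    "accept_all":     {ValidationState.VALID, ValidationState.NOT_FOUND,
                       ValidationState.INVALID},
    "drop_invalid":   {ValidationState.VALID, ValidationState.NOT_FOUND},
    "validated_only": {ValidationState.VALID},
}

def accept_route(state: ValidationState, policy: str = "accept_all") -> bool:
    """Return True if a route with this validation state should be kept.

    At initial boot (no cache reachable yet) everything tends to look
    NOT_FOUND, so anything stricter than 'drop_invalid' would leave you
    with no routes -- which is the 'some routes' point above.
    """
    return state in POLICIES[policy]

if __name__ == "__main__":
    # 'some routes' first, 'valid routes' later:
    print(accept_route(ValidationState.NOT_FOUND, "accept_all"))      # True
    print(accept_route(ValidationState.NOT_FOUND, "validated_only"))  # False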

> 2.      The generally creaky, fragile, brittle, non-scalable state of the overall DNS infrastructure in general.
>

this is getting better, no? I mean for the in-addr and larger folks,
anycast + lots of other things are making DNS much more reliable than
it was 10 years ago... or am I living in a fantasy world?

NOTE: I'm leaving out unprepared (or under-prepared) end-sites wrt
dos/ddos ... though I suppose, in last month's example, Mastercard.com
probably would have had (has?) their PTR records served from servers
at the same end-site as their attacked asset :( That's a failure mode
to keep in mind (and an extra thing for operators at those sites to
keep in mind as well).

> Routing and DNS, which are the two essential elements of the Internet control plane, are also its Achilles' heels.  It can be argued that making routing validation dependent upon the DNS would make this situation worse.
>

To some extent it would be; folks won't revert to /etc/hosts for
getting to publication points, cache servers, etc ... BUT the
timescales here are measured not in 'milliseconds' but in hours.
Small-scale outages aren't as damaging, and if your cache
infrastructure is planned so that, once you get the external data
in-house, you can feed all your regional/pop/etc caches from there,
hopefully things stay simpler.
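Roughly the shape I mean is something like this (a hand-wavy sketch;
the class names and sync intervals are invented for illustration, not
any real tool or protocol):

# Hand-wavy sketch of a tiered cache layout: one internal "gatherer"
# pulls from the external publication point on an hours-scale timer,
# and regional/pop caches only ever talk to the gatherer. Names and
# intervals are made up.
import time

EXTERNAL_SYNC_INTERVAL = 4 * 3600   # hours, not milliseconds
INTERNAL_SYNC_INTERVAL = 15 * 60    # regional caches refresh from inside

class Gatherer:
    """Single internal point that fetches from external publication points."""
    def __init__(self):
        self.data = {}
        self.last_external_sync = 0.0

    def sync_external(self, fetch_fn):
        # fetch_fn stands in for whatever actually pulls the published data.
        try:
            self.data = fetch_fn()
            self.last_external_sync = time.time()
        except OSError:
            # External outage: keep serving the last good copy internally.
            pass

class RegionalCache:
    """Pop/region-local cache that only talks to the internal gatherer."""
    def __init__(self, gatherer: Gatherer):
        self.gatherer = gatherer
        self.data = {}

    def sync_internal(self):
        self.data = dict(self.gatherer.data)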

> The main reasons for it are those Danny stated:
>
> 1.      DNS exists.
>
> 2.      DNSSEC is in the initial stages of deployment.
>
> 3.      There's additional relevant work going on which would make DNS more suitable for this application.
>
> 4.      Deployment inertia.
>

Yeah, but I see forking this into DNS as having to hack about in
something where it doesn't really fit well, and it may end up with
more hackery after the initial thought of: "Ah! just toss some new RR
foo in there, sign it with dnssec, win!"

Now we have:
  o oh, and don't keep all of your DNS servers on your own network, in
case of an outage.
  o don't forget about TTLs on records: how do you expire something?
(this is a perennial problem in dns... rough sketch below)
  o delegating subnets around to customers, on gear they operate (or don't??)

There are likely more things to keep in mind as well.
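On the TTL point, the tension is roughly this (a toy sketch; the
record layout, payload, and numbers are all made up, it's just the
DNS-TTL-vs-signature-lifetime interaction):

# Toy illustration of the TTL/expiry problem with "just toss an RR in
# and sign it": the cache TTL and the DNSSEC signature validity window
# are two separate clocks, and neither one revokes data a router has
# already acted on. Record fields and numbers are invented.
from dataclasses import dataclass

@dataclass
class SignedRecord:
    rdata: str              # hypothetical "route authorization" payload
    ttl: int                # seconds a resolver may cache it
    sig_inception: int      # RRSIG inception (unix time)
    sig_expiration: int     # RRSIG expiration (unix time)

def still_cacheable(rec: SignedRecord, fetched_at: int, now: int) -> bool:
    """Has the cached copy's TTL run out yet?"""
    return now < fetched_at + rec.ttl

def signature_valid(rec: SignedRecord, now: int) -> bool:
    """Is the DNSSEC signature still inside its validity window?"""
    return rec.sig_inception <= now <= rec.sig_expiration

# The awkward case: a record can sit in a long-TTL cache after its
# signature has expired (or after the publisher meant to pull it), so
# "expired" and "withdrawn" are hard to tell apart.
rec = SignedRecord("AS64496 may originate 192.0.2.0/24",
                   ttl=86400, sig_inception=0, sig_expiration=3600)
now, fetched_at = 7200, 0
print(still_cacheable(rec, fetched_at, now))  # True: TTL says keep it
print(signature_valid(rec, now))              # False: signature already expired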

-Chris



