NANOG36-NOTES 2006.02.13 talk4 DNS infrastructure distribution

Matthew Petach mpetach at netflight.com
Mon Feb 13 16:58:15 UTC 2006


2006.02.13 Steve Gibbard

DNS infrastructure Distribution
Steve Gibbard
Packet Clearing House
http://www.pch.net/
scg at pch.net

Introduction
Previous talk on importance of keeping critical
 infrastructure local
Without local infrastructure, local communications are
 subject to far-away outages, costs, and performance problems
Critical infrastructure includes DNS
If a domain is critical, so is everything above it in the
 hierarchy
Sri Lanka a case in point.

Previous talk was in Seattle last spring; it highlighted
an undersea cable being cut: even local DNS queries failed
since TLD servers couldn't be reached, even though
local connectivity still worked.  A ship dragging
anchor in the harbor cut the only undersea path out of the
country; international calling was down, and so was all of
the Internet.  But unlike the local telephone system,
even local network services failed to work without DNS.

Root server placement
Currently 110 root servers(?)
 Number is a moving target
Operated by 12 organizations
13 IP addresses
 at most 13 servers visible from any one place at any one
  time
 six are anycast
 four are anycasted in large numbers
All remaining unicast roots are in the Bay Area, LA,
 or Washington, DC
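
(Aside: one way to see which anycast instance of a root server
answers from a given vantage point is the hostname.bind CHAOS
query that most root operators support.  A minimal example; the
response format varies by operator:)

  # Ask the F root which of its anycast instances is answering locally.
  dig @f.root-servers.net hostname.bind chaos txt +short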

Distribution by continent
34 in NA
 8 each in Bay Area and DC, 5 in LA
 Only non-coastal roots in US are Chicago and Atlanta
 Canada, Monterrey (Mexico), some others
34 in Europe
 clusters of 4 each in London and Amsterdam, Europe's
  biggest exchanges
 fairly even distribution throughout the rest of Europe

Distribution by continent
26 in Asia (excluding middle east)
5 in Japan (4 Tokyo, 1 Kyoto)
3 each in India, Korea, and Singapore
2 each in Hong Kong, Jakarta, and Beijing
South Asia an area of rapid expansion
6 in Australia/New Zealand
 2 in Brisbane
 1 each in Auckland, Perth, Sydney, and Wellington

5 in Middle East
 1 each in Ankara, Tel Aviv, Doha, Dubai, Abu Dhabi
3 in Africa
 2 in Johannesburg
 1 in Nairobi, 1 more being shipped
 very little intercity or intercountry connectivity
2 in SA
 São Paulo
 Santiago de Chile

Other parts of the world not really served at all.
World map with blobs showing coverage.  Huge areas
not covered.
Overlaid fiber maps with the dots to get an idea of
redundant coverage; everyone else is one fiber
or satellite cut from being isolated and dark.
Pretty much follows the areas with money.

Root server expansion
4 of the 12 root server operators actively installing new roots
110 root servers big improvement over 13 from 3 years
 ago
two operators (Autonomica, ISC) (I and F) are installing
wherever they can get funding
 funding sources typically RIRs, local governments, or
  ISP associations
 Limitations in currently unserved areas are generally due
  to lack of money

Fs and Is
In large portions of the world, the several closest roots are
 Is and Fs
 At most 2 root IP addresses visible nearby; others far away
 Does this matter?
  gives poorly connected regions less ability to use
   BIND's failure-detection and closest-server selection
   mechanisms
  non-BIND implementations may default to far-away roots
 Should all 13 roots be anycasted evenly?
  CAIDA study from 2003 assumed a maximum of 13 locations;
   not really relevant anymore
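
(Aside: a rough way to see which of the 13 root letters are
"close" from a given vantage point is to compare query times to
each of them.  This is only a loose approximation of BIND's
internal RTT-based selection, and assumes dig is available:)

  # Query each root letter once and print dig's reported round-trip time.
  for letter in a b c d e f g h i j k l m; do
    printf '%s.root-servers.net: ' "$letter"
    dig +tries=1 +time=2 @"$letter".root-servers.net . NS \
      | grep 'Query time' || echo '(no response)'
  done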

Big Clusters
Lots of complaints about uneven distribution
Only really a concern if resources are finite
Large numbers in some places don't prevent growth in
 others
Bay Area and DC clusters seem a bit much, but sort of match
 topology
Western Europe's dense but relatively even distribution
 exactly right
Two per city perhaps a good goal for everywhere

TLD distribution
Like the root, locally used TLDs need to be served
  locally
Locally used TLDs: local ccTLD; any other TLDs commonly
 in use
Regions don't need ALL TLDs.

gTLD distribution: .com/.net
.com/.net
 well connected to the "internet core"
 servers in the big cities of US, Europe, Asia
 non-core location: Sydney.

Map of world with .com/.net overlaid with fiber maps
shows "well-served areas" again following the money,
with even less coverage outside NA/Europe/Asia.

gTLD dist: .org/.info/.coop
share same servers
locations considered confidential; data may be incomplete
significantly fewer publicly visible servers,
almost all in the internet core.
only one public location in each of Asia and Europe

Even worse coverage worldwide, though they do have
South Africa.

Do have some caching boxes next to caching resolvers
at the big ISPs; not sure if it increases coverage
or not.

Few other gTLDs, didn't map them.
.gov is US-centric
.edu is US, some Europe, some Asia
.int is California, Netherlands, UK
  (not very international!!)

Where should gTLDs be?
Presumably depends on their markets
If it's OK for large portions of the world to not use
  those gTLDs, then it's OK for them to not be hosted there.

ccTLD dist:
 answers to where ccTLDs should be are more straightforward
  working in their own regions a must
  working in the "core" could be a plus
just over 2/3 of ccTLDs are hosted in their own
  countries
(but a lot of those aren't ...)

Green map shows those countries that host their own
ccTLDs locally.  Most islands are red, in danger of
being cut off from their ccTLDs.

ccTLDs not slaved in the core
18 ccTLDs aren't slaved in the global core
if their regions are cut off, those ccTLDs won't be visible
 to the rest of the world
is that really an issue, if you can't get to the end site
  anyhow?
 violation of RFC 2182; unclear what actually results from that
 matters less if nobody from outside the region is trying
  to reach those domains anyway (a quick way to check a
  given ccTLD with dig is sketched after the list below)

.bb
.bd
.bh
.cn
.ec
.gf
.jm
.kg
.kw
.mp
.mq
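
(Aside: a quick, rough way to check where a given ccTLD's name
servers sit, sketched here with .ke purely as an example; actually
locating the resulting addresses still takes traceroutes or whois:)

  # List a ccTLD's name servers and resolve each to an address,
  # as a starting point for figuring out where they are hosted.
  tld=ke    # example ccTLD; substitute the one of interest
  for ns in $(dig +short NS "$tld."); do
    printf '%s %s\n' "$ns" "$(dig +short A "$ns" | head -1)"
  done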

Example countries
Kenya
 exchange point, root server, ccTLD server, all external
  connectivity by satellite
Pakistan
 root server, no exchange point, no TLDs locally
 (so how much use is the local root server?)

Kenya:
 local exchange in nairobi
 root server
 ccTLD server
 so even if external link goes down, country can stay
 mostly functional.

Pakistan:
 local root server (for at least one ISP)
 no TLDs
 .pk hosted entirely in US
 no local exchange to share local root server
 single fiber connection; when it breaks, nothing works.

Local peering caveat
local traffic has to be kept local before keeping DNS local
 is of much benefit.
 Requires either a strict monopoly or local exchange points
The examples above highlight that.

Methodology
Get name server addresses for TLDs
Assume everything in a /24 is in the same place or set of places.
(really down-and-dirty shell scripts)
  a bad assumption for UUNET's name servers; didn't find others.
  625 /24s contain name servers for TLDs
  135 host multiple TLDs; over 60 in RIPE's case
Figure out where those subnets are
 traceroutes / ask questions
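
(Aside: a minimal sketch of that approach, not the speaker's
actual scripts; the TLD list here is just an illustrative handful.
It groups TLD name server addresses by /24 and counts how many
TLDs each /24 serves:)

  # For each TLD, collect its name servers' IPv4 addresses,
  # bucket them by /24, and count how many TLDs each /24 serves.
  for tld in com net org uk de jp ke pk; do    # example TLDs only
    for ns in $(dig +short NS "$tld."); do
      for addr in $(dig +short A "$ns"); do
        echo "${addr%.*}.0/24 $tld"
      done
    done
  done | sort -u \
       | awk '{n[$1]++} END {for (p in n) print n[p], p}' \
       | sort -rn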

Subnets with 10+ TLDs--read it from the slides.  :D
193.0.12/24
192.36.125/24

Other sources
www.root-servers.org had root server data; assumed accurate.
UltraDNS considers its locations confidential
 Got info from Afilias's .net application; was told it missed some.
In general, most other TLD operators were very helpful.

Thanks!

http://www.pch.net/resources/papers/infrastructure-distribution/

Mark Kosters, Verisign: notes there are two other root
server groups also going anycast wherever people will
pay to host them.  K (with RIPE) is now going outside its
region, and Verisign (J?) is also talking about
serving in multiple regions.
Dealing with local customs when getting equipment into a
country tends to be the biggest challenge; PCH has seen
similar challenges getting into countries.

OK, break time now.