Sean M. Doran
smd at clock.org
Thu Sep 18 16:55:37 UTC 1997
"Jay R. Ashworth" <jra at scfn.thpl.lib.fl.us> writes:
> Are there any major potholes in this theory that I'm missing?
Well, you have two technical problems to solve: firstly,
the same numbering problem that anyone else has,
viz. addresses will change. Secondly, you have a traffic
attraction/traffic dispersion problem for non-local
connectivity. You also have to provide better
value-for-money than the classical hierarchy-of-providers
model your competitors will be using.
The "classical" approach is to renumber to solve the first
case and do the oh-so-fun BGP tricks Dennis Ferguson
described here a couple of incarnations ago.
A better approach to both problems is to use NAT to deal
with the renumbering issue, and large-scale NAT to deal
with your border problem (you not only want to reduce the
number of prefixes you advertise outbound, and use the DNS
to offer back different topological locators (i.e., IP
addresses) for the things connected to you, but you also
want to reduce the amount of information you take in from
the outside world).
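[A minimal sketch of the renumbering half of this, not from the original post; the class name, prefixes, and 1:1 mapping scheme are all illustrative assumptions:]

```python
import ipaddress

class BorderNAT:
    """Static 1:1 NAT sketch: inside hosts keep stable addresses;
    only the outside (provider-assigned) prefix changes at the border."""
    def __init__(self, inside_prefix, outside_prefix):
        self.inside = ipaddress.ip_network(inside_prefix)
        self.outside = ipaddress.ip_network(outside_prefix)

    def renumber(self, new_outside_prefix):
        # Switching providers is a one-line change at the border,
        # not a host-by-host renumbering exercise inside the site.
        self.outside = ipaddress.ip_network(new_outside_prefix)

    def translate_out(self, inside_addr):
        # Deterministic mapping: preserve the host offset within the prefix.
        offset = (int(ipaddress.ip_address(inside_addr))
                  - int(self.inside.network_address))
        return str(ipaddress.ip_address(
            int(self.outside.network_address) + offset))

nat = BorderNAT("10.1.0.0/16", "192.0.2.0/24")
print(nat.translate_out("10.1.0.7"))   # -> 192.0.2.7
nat.renumber("198.51.100.0/24")
print(nat.translate_out("10.1.0.7"))   # -> 198.51.100.7
```

[The DNS side would then hand out whichever outside locator is current for a given inside host, which is where the "different topological locators" come from.]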
To deal with connectivity failures outside the NATs
themselves you build tunnels through working inside or
outside infrastructure between your NATs. This is
straightforward and is what is done now.
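[The failover logic amounts to an ordered preference among the direct path and the tunnels; a toy sketch, with all path names invented for illustration:]

```python
# Pick the first usable path between two NAT border boxes, preferring
# the direct path and falling back to tunnels through whatever
# infrastructure (inside or outside) is still working.
def pick_path(paths, is_up):
    """Return the first path in preference order for which is_up(path)."""
    for path in paths:
        if is_up(path):
            return path
    raise RuntimeError("no working path between NATs")

preference = ["direct", "tunnel-via-inside", "tunnel-via-outside"]
status = {"direct": False, "tunnel-via-inside": True,
          "tunnel-via-outside": True}
print(pick_path(preference, status.get))  # -> tunnel-via-inside
```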
Dealing with the failures of the NATs themselves requires
synchronized or deterministic address mappings,
NAT-friendly higher-layer protocols, and a simple IGP.
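[One way to read "deterministic address mappings": make the inside-to-outside mapping a pure function of the flow, so a backup NAT recomputes the same translation after a failure without any state exchange. A sketch under that assumption; pool, hash choice, and function name are mine, and a real design would also have to handle collisions:]

```python
import hashlib
import ipaddress

POOL = ipaddress.ip_network("192.0.2.0/24")  # illustrative outside pool

def deterministic_map(inside_ip, inside_port):
    """Derive the outside (ip, port) from a hash of the inside pair,
    so primary and backup NATs agree without synchronizing state."""
    h = hashlib.sha256(f"{inside_ip}:{inside_port}".encode()).digest()
    host = int.from_bytes(h[:4], "big") % POOL.num_addresses
    port = 1024 + int.from_bytes(h[4:6], "big") % (65535 - 1024)
    ip = ipaddress.ip_address(int(POOL.network_address) + host)
    return str(ip), port

# Any box running this function computes the same answer for a flow:
assert deterministic_map("10.1.0.7", 5000) == deterministic_map("10.1.0.7", 5000)
```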
With some performance-affecting trade-offs you can deal
with many NAT-unfriendly higher-layer protocols in various
ways too, mostly by sharing state information among your NATs.