What is the limit? (was RE: multi-homing fixes)

Roeland Meyer rmeyer at mhsc.com
Tue Aug 28 23:54:43 UTC 2001


|> From: Martin, Christian [mailto:cmartin at gnilink.net]
|> Sent: Tuesday, August 28, 2001 3:22 PM
|> 
|> This is the umpteenth time that this type of thread and its 
|> spawn have been religiously fought out on NANOG.

I didn't notice any wildly religious views this time around. Maybe, with
practice, we are getting better at it?

|> What we don't see is any real data that indicates who
|> is right and who is wrong.  The only empirical data
|> shows a comparison between routing table growth and Moore's 
|> law, which doesn't amount to a whole hill of beans
|> if there isn't a reasonable frame of reference to which
|> we can make comparisons.

This is a serious problem. Can we not issue a writ of habeas corpus? Where
is the body of evidence for these claims? Yes, I see a theoretical problem
and, yes, there is sufficient argument that someone should start work on it
(IETF?). However, is anyone working on it?

In the meanwhile, I see no reason to bring the Internet to a screeching
halt, based on the evidence thus far presented.
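
For what it's worth, here is a minimal sketch of the kind of comparison that
keeps getting cited (table growth versus Moore's law). The growth rates below
are placeholder assumptions for illustration, not measured data:

import math

def doubling_time(annual_growth_rate):
    """Years for a quantity to double at the given annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Illustrative assumptions only, not measured data.
bgp_annual_growth = 0.40      # assume the BGP table grows ~40% per year
moore_annual_growth = 0.59    # ~doubling every 18 months

print("BGP table doubling time:   %.1f years" % doubling_time(bgp_annual_growth))
print("Moore's-law doubling time: %.1f years" % doubling_time(moore_annual_growth))

# Project both curves from a common baseline to see whether the table
# would outpace the hardware, given these assumptions.
table, capacity = 100000.0, 100000.0
for year in range(1, 11):
    table *= 1 + bgp_annual_growth
    capacity *= 1 + moore_annual_growth
    print("year %2d: table=%12.0f  capacity=%12.0f" % (year, table, capacity))

Under those assumed rates the hardware keeps up comfortably; the whole
argument turns on which rates you plug in, which is exactly the missing
frame of reference.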

|> To offer an analogy, what we are doing is standing in front
|> of a bridge that says "Maximum Weight - Not Too Much Or It 
|> Might Collapse".

This reminds me very much of the arguments presented by D'Crock wrt the
imminent collapse if only a few more TLDs were introduced, which, in light of
the volume in COM alone, was absurd. We, the people involved in ICANN, are
adding a good deal more than a few, with no software changes, and collapse is
not expected anytime soon. Those of us running the independent root-zones
have done the tests (which D'Crock was trying to forestall with exactly such
arguments) and found the claims baseless, nothing more than FUD. The proof
is in the new ICANN TLD roll-outs. The existence proof was actually there all
along, since NSI was running both root servers and gTLD servers on the same
boxen.

I am now seeing the same sort of argumentation, only this time the
objective is to keep folks from being able to multi-home or otherwise
achieve independence. Jeez, in both cases the upper-level goal seemed to be
the same. Why is that? BTW, I have direct evidence for the above, going all
the way back to the newdom list. Also, the FUD did delay things by a *bunch*
of years. Does this sound familiar?

|> I think that what we need to do is have a fourth group, call 
|> them Internet Engineers for lack of a better word,
|> come in and determine what the sign should read.

Now *that* would indeed be novel. However, I suspect that the answer would
be as elusive as it was for the DNS. The bottom line is that the system
appears to be bandwidth- and hardware-limited, just like the DNS, and the
architecture appears to be open-ended. I must confess that I know the DNS
much better than I know BGP. The projected worst-case scenario *appears* to
be artificial, given the evidence at hand. It doesn't follow that what *can*
happen *will* happen. The Sun could go nova tomorrow, but I'm betting that
it won't.

|> So I plead with you all, lets end this war of baseless 
|> claims and move toward a solution, by first determining
|> at what point we need one, and then determining what it is.

Realistic projected usage scenarios might be a good place to start. It's
amazing how real requirements have a way of making a problem concrete. Let's
get the proper horse/cart relationship first.

|> TTFN,
|> -chris (who happens to be a pragmatist)

-roeland (who realizes that business needs drive IT, not the other way
around)
