Important IPv6 Policy Issue -- Your Input Requested

Iljitsch van Beijnum iljitsch at muada.com
Tue Nov 9 08:17:30 UTC 2004


On 8-nov-04, at 23:15, Leo Bicknell wrote:

>> Well, if they can manage to interconnect all those networks a tiny
>> amount of coordination isn't too much to ask for. Also, with the
>> proper hashing this shouldn't be much of a problem even without
>> coordination. Yes, no coordination and bad hashing won't work, but
>> guess what: don't do that.

> It is too much to ask for, because you assume it's one company from
> day one.  What happens when AOL and Time Warner merge?  There was no
> chance of coordination before that.  Or how about Cisco?  They buy
> what, 100-200 companies a year?

If both companies use either registered globally unique space (which
also has the important property that you get to know who the packets
come from when they show up in the wrong places) or the unregistered
variant with proper hashing, the chance of collisions is negligible.

So it IS possible to make sure bad things don't happen in advance.
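
To make this concrete, here's roughly what that hashing looks like - a
quick Python sketch of how I read the draft (SHA-1 over a timestamp
plus an EUI-64, keep the low 40 bits, put them behind the local /8).
The exact inputs and the fd00::/8 prefix are my reading of the draft,
not normative text:

    # Sketch of a hash-derived local /48: hash a timestamp and an
    # interface identifier, keep the least significant 40 bits as the
    # global ID. Details are assumptions, not a reference implementation.
    import hashlib, os, time

    def local_prefix(eui64=None):
        if eui64 is None:
            eui64 = os.urandom(8)          # stand-in for a real EUI-64
        stamp = int(time.time()).to_bytes(8, 'big')
        digest = hashlib.sha1(stamp + eui64).digest()
        gid = digest[-5:]                  # low 40 bits of the hash
        return 'fd%02x:%02x%02x:%02x%02x::/48' % tuple(gid)

    print(local_prefix())                  # e.g. fd3a:91c2:7e55::/48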

I suspect most people who don't bother with the hashing are ones who  
don't expect to interconnect with anyone else using those addresses.  
Obviously some will be wrong about that.

And unlike with IPv4, it's easy to give all hosts more than one
address, and renumbering the hosts themselves is fairly
straightforward.

> My problem is that even with good hashing it doesn't take long for
> there to be a collision.  And once there is a single collision the
> whole system is suspect.  It's the promise of "if you do this extra
> work you'll never have to renumber" without delivering.

Disagree. You know going in that the hashed prefixes aren't guaranteed
to be 100% unique. And they don't need to be: they only need to be
unique among the parties who use the new type of site local addresses
to communicate with each other. IIRC there are 40 bits, so when a
million or so of these prefixes are in use you're going to start seeing
some collisions - if you look globally. But the chance of two given
networks colliding is still about one in a trillion (with good hashing)
and the chance of a collision among a thousand networks is almost zero.
That's actually smaller than the chance of two ethernet cards having
the same MAC address. (Once in a blue moon a batch of NICs with the
same MAC address gets built, and if you buy several at once the chance
of finding two that clash is relatively large, because they were likely
manufactured right after each other.)
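
For those who want to check those numbers: it's the standard birthday
approximation over a 40-bit space, worked out in a few lines of Python
(my arithmetic, not anything from the draft):

    # P(at least one collision among n prefixes in a 40-bit space)
    # ~= 1 - exp(-n*(n-1) / 2^41), assuming good (uniform) hashing.
    import math

    def p_collision(n, bits=40):
        return 1 - math.exp(-n * (n - 1) / 2.0 ** (bits + 1))

    print(p_collision(2))        # ~9e-13: about one in a trillion
    print(p_collision(1000))     # ~4.5e-7: still essentially zero
    print(p_collision(10**6))    # ~0.37: collisions do show up globally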

And there's still the registered variant.

> No, my argument is that it only takes a few stupid people to make
> this entire system not work at all.

I don't see this.

> If this draft had a chance of working then there would be no need
> to create a central registry to guarantee unique addresses.  The
> very existence of that draft shows some people realize this method
> will not work.

With registered space you have the additional benefit that when packets  
leak, they can be traced back to the originator, and it's possible to  
delegate name service.

> In this system anyone can get
> something for free anytime they want.  "Lose" your address block?
> Make it unusable for some purpose (eg, blacklisted)?  Just want a
> second (third, fourth, millionth) block, just go get it.  Get a block,
> then die?  Well, no one else can ever use your personal block.

I agree in principle. However, I think this means the administrative
procedure must be changed, not that this type of address space is a
bad idea.

I think all of this can be done in much the same vein as registering
domains. Since there is plenty of competition there, this will probably
be cheap enough that even organizations in less developed countries can
afford it.

>> That's nice. But it simply can't be done for any significant number
>> of PI prefixes. That's why we're going through so much trouble to
>> create a multihoming mechanism that doesn't kill the routing system.

> Bah, hand-waving that makes no sense.

It's starting to, although there is still a lot of work to do. (Yes,  
long overdue...)

> There are 33,000 allocated ASNs today.  Give each one a PI prefix
> (however they might get it).  That's 33,000 routes.  Given that my
> routers are fine with 140,000 now, and are being tested in labs to
> well over 1 million, I fail to see the issue.

Well, I can't _guarantee_ that routers are going to explode when people
start doing PI in IPv6, but I think they will, eventually. The big
difference with IPv4 is that in IPv4 there is still a significant
hurdle to multihoming, as you need at least a /24. In IPv6 _everyone_
gets to have a /48. And once so many important services sit in /48s
that you can't filter them individually anymore, you need to allow all
/48s in your routing tables, and then you're at the mercy of how
popular multihoming is going to be. It could easily end well
(multihoming isn't that popular today) but the risk of it going very
badly is just too big IMO.
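
What I mean by filtering here is plain prefix-length filtering. A toy
sketch in Python of the choice you end up with (the /32 boundary and
the example prefixes are made up for illustration; real filters of
course live in router configs):

    # Once you decide to accept /48s at all, you accept all of them;
    # there is no practical way to pick individual "important" /48s.
    import ipaddress

    announcements = ['2001:db8::/32',        # PA aggregate
                     '2001:db8:1234::/48',   # a multihomer's more specific
                     '2001:db8:beef::/48']   # another /48

    def accepted(prefixes, allow_48s=False):
        limit = 48 if allow_48s else 32
        return [p for p in prefixes
                if ipaddress.ip_network(p).prefixlen <= limit]

    print(accepted(announcements))                  # only the aggregate
    print(accepted(announcements, allow_48s=True))  # grows with every /48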

Also, remember that BGP convergence scales as O(n log n). (You need to
go through the existing routing table, which is at best a log(n)
operation, for each new route.) This means routing table growth needs
to stay well below Moore's law or there's trouble.
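
Rough numbers, using the O(n log n) figure above (just my arithmetic):

    # Compare per-update work at today's ~140,000 routes with a
    # 1,000,000-route table, assuming work grows as n*log2(n).
    import math

    def work(n):
        return n * math.log(n, 2)

    print(work(1000000) / work(140000))  # ~8.3x the work for ~7.1x the routes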

> More to the point, if most network admins have the choice of running a
> full overlay network and updating software on every end host to be more
> complex to make it understand the overlay networks or putting a few more
> prefixes in the routing table and upgrading your router I bet they will
> all pick the latter.

The trouble is that straight PI means painting ourselves into a corner.
Sure, the room may be big and there may not be so much paint that it
becomes a problem, but once you start there is no going back. I thought
we could have our cake and eat it too by aggregating PI space
geographically
(http://www.muada.com/drafts/draft-van-beijnum-multi6-isp-int-aggr-01.txt
for those who care) but the G word is taboo in the IETF. Oh well.

> If those groups used this space even only internally
> at first between each other (after all, the purpose is to allow
> routing between organizations, just not to the global internet)
> eventually there will be great pressure to add them to the global
> table.  It will be phrased as "UUNet won't accept prefixes from all
> of Asia" or similar.  Then we end up having to accept them with
> none of the controls the RIR system puts in place for setting policy
> or anything else.  Prefixes will instead be randomly assigned
> worldwide out of a single /7.

There are enough stubborn network operators to guarantee that this
space will never be globally reachable the same way PA space is. So
even if people get to use their PI prefixes for lots of things, they'll
still need PA addresses too for global reachability. I actually think
that would be a very good result, as it allows for a dynamic tradeoff
between routing table size and usability, while either extreme has the
potential to be harmful. (Sure, it sucks to have to try more than one
address, but that's quickly becoming a reality as we move to IPv4+IPv6
dual stack networks anyway.)
