IPv6 - real vs theoretical problems

Tony Hain alh-ietf at tndh.net
Mon Jan 10 19:46:23 UTC 2011


... yes I know you understand operational issues.

While managed networks can 'reverse the damage', there is no way to fix that
for consumer unmanaged networks. Whatever gets deployed now is what the
routers will be built to deal with, and it will be virtually impossible to
change later due to the 'installed base' and the lack of knowledgeable
management. 

It is hard enough getting the product teams to accept that it is possible to
build a self-configuring home network without having that effort crippled by
braindead conservation. The worst possible value I can see for delegation to
the home is /56, yet that is the most popular value, because people have
their heads so far into the dark void of conservation that they can't accept
that the space will be 'wasted sitting on the shelf at IANA when somebody
comes along with a better idea in the next 500 years'. 
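For scale, the conservation math is easy to check. A quick sketch (my own
illustration, not from the thread; the only assumption is that global
unicast allocations come out of 2000::/3):

```python
# Back-of-the-envelope arithmetic on IPv6 delegation sizes. Assumes
# global unicast space is drawn from 2000::/3 (an assumption of this
# sketch, not something stated in the thread).
GLOBAL_UNICAST_PREFIX_LEN = 3  # 2000::/3

def count_prefixes(prefix_len: int) -> int:
    """How many prefixes of the given length fit inside 2000::/3."""
    return 2 ** (prefix_len - GLOBAL_UNICAST_PREFIX_LEN)

print(f"/48 sites available: {count_prefixes(48):,}")  # ~35 trillion
print(f"/56 homes available: {count_prefixes(56):,}")  # ~9 quadrillion
print(f"subnets in a /56:    {2 ** (64 - 56)}")        # 256
print(f"subnets in a /48:    {2 ** (64 - 48)}")        # 65,536
```

Even handing every home a /48 leaves tens of trillions of sites' worth of
space, which is the point about conservation being a non-problem.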

I understand the desire to 'do it like we do with IPv4', because that
reduces the learning curve, but it also artificially restricts IPv6, ensures
that the work is doubled when the restraints have to be removed later, and
makes it even harder to show value in the short term because 'it is just
like IPv4 with a different bit pattern'. "IPv6 is not just IPv4 with bigger
addresses," no matter what the popular mantra is. The only way you can even
get close to that kind of argument is if you are totally myopic about BGP,
and even then there are differences. 

Bottom line, just fix the tools to deal with the reality of IPv6, and move
on. 
Tony


> -----Original Message-----
> From: Deepak Jain [mailto:deepak at ai.net]
> Sent: Thursday, January 06, 2011 2:01 PM
> To: NANOG list
> Subject: IPv6 - real vs theoretical problems
> 
> 
> Please, before you flame out, recognize I know a bit of what I am
> talking about. You can verify this by doing a search on NANOG archives.
> My point is to actually engage in an operational discussion on this and
> not insult (or be insulted).
> 
> While I understand the theoretical advantages of /64s and /56s and /48s
> for all kinds of purposes, *TODAY* there are very few folks that are
> actually using any of them. No typical customer knows what to do (for
> the most part) with their own /48, and other than autoconfiguration,
> there is no particular advantage to a /64 block for a single server --
> yet. People and routers are, I think, reasonably comfortable with the
> left side of the prefix; it's the "host" side that presents the most
> challenge.
> 
> My interest is principally in servers and high-availability equipment
> (routers, etc.) and other things that live in POPs and datacenters, so
> autoconfiguration doesn't even remotely appeal to me for anything. In a
> datacenter, many of the conditions behind these concerns about routers
> falling over exist (high-bandwidth links, high-power equipment trying
> to do as many things as it can, etc.).
> 
> Wouldn't a number of problems go away if we just, for now, followed the
> IPv4 lessons/practices -- allocating the number of addresses a customer
> needs, say /122s or /120s that current router architectures know how to
> handle -- to these boxes/interfaces today, while reserving a /64 or /56
> for each of them for whenever the magic day comes along when they can
> be used safely?
> 
> As far as I can tell, this "crippling" of the address space is
> completely reversible, it's a reasonable step forward, and the only
> "operational" loss is that you can't do all the address jumping and
> obfuscation people like to talk about... which I'm not sure is a loss.
> 
> In your enterprise, behind your firewall, wherever you want autoconfig
> to work and have some way of dealing with all of the dead space, more
> power to you. But operationally, is *anything* gained today by giving
> every host a /64 to screw around in that isn't accomplished by a /120
> or so?
> 
> Thanks,
> 
> DJ
> 
> 
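The reserve-a-/64-but-configure-a-/120 scheme Deepak describes can be
expressed with Python's `ipaddress` module. This is an illustrative sketch
only: the documentation prefix 2001:db8::/32 and the function name `carve`
are my inventions, not anything proposed in the thread.

```python
import ipaddress

# Sketch of the scheme: reserve a full /64 per customer interface, but
# configure only the first /120 out of it today. Widening later means
# reconfiguring the mask, not renumbering.
def carve(reserved_64: str) -> ipaddress.IPv6Network:
    """Return the first /120 of a reserved /64 for immediate use."""
    reserved = ipaddress.ip_network(reserved_64)
    if reserved.prefixlen != 64:
        raise ValueError("expected a /64 reservation")
    # First /120 subnet of the /64; the rest stays on the shelf.
    return next(reserved.subnets(new_prefix=120))

reserved = ipaddress.ip_network("2001:db8:0:42::/64")
active = carve("2001:db8:0:42::/64")
print(active)                      # 2001:db8:0:42::/120
print(active.num_addresses)        # 256 addresses to assign today
print(active.subnet_of(reserved))  # True: the /64 is still intact
```

Because the active /120 sits at the bottom of the reserved /64, going to
the full /64 later changes only the prefix length on the interface, which
is the "completely reversible" property the post claims.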




