Waste will kill IPv6 too

Owen DeLong owen at delong.com
Thu Dec 28 21:31:20 UTC 2017


> On Dec 28, 2017, at 11:14, bzs at theworld.com wrote:
> 
> 
> Just an interjection but the problem with this "waste" issue often
> comes down to those who see 128 bits of address vs those who see 2^128
> addresses. It's not like there were ever anything close to 4 billion
> (2^32) usable addresses with IPv4.

Part of this is correct (the last sentence). There were closer to 3.2 billion
usable IPv4 unicast addresses.
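
To put rough numbers on that (a back-of-the-envelope sketch in Python; subtracting just the IANA special-purpose blocks leaves about 3.7 billion, and how far below that “usable” really lands depends on what else you deduct, like per-subnet network/broadcast addresses and stranded legacy space):

    # Rough arithmetic behind "usable" IPv4 space (a sketch, not a
    # definitive accounting). Subtracting only the IANA special-purpose
    # blocks leaves ~3.7 billion; further practical deductions pull the
    # effectively usable figure lower.
    import ipaddress

    special = [
        "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
        "169.254.0.0/16", "172.16.0.0/12", "192.0.0.0/24", "192.0.2.0/24",
        "192.88.99.0/24", "192.168.0.0/16", "198.18.0.0/15",
        "198.51.100.0/24", "203.0.113.0/24", "224.0.0.0/4", "240.0.0.0/4",
    ]
    reserved = sum(ipaddress.ip_network(n).num_addresses for n in special)
    print(f"all of 2^32:       {2 ** 32:,}")
    print(f"special-purpose:   {reserved:,}")
    print(f"remaining unicast: {2 ** 32 - reserved:,}")  # ~3.7 billion
    print(f"one /8:            {2 ** 24:,}")             # ~16.7 million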

> We have entire /8s which are sparsely populated so even if they're 24M
> addrs that's of no use to everyone else. Plus other dedicated uses
> like multicast.

Well, a /8 is actually only 16.7 million, not 24M, but I’m not sure that
matters much to your argument.

> So the problem is segmentation of that 128 bits which makes it look a
> lot scarier because 128 is easy to think about, policy-wise, while
> 2^128 isn’t.

Sure, but that’s by design in IPv6. There’s really no need to think
beyond 2^64, because the intent is that a /64 is a single subnet no
matter how many or how few machines you want to put on it.

Before anyone rolls out the argument about the waste of a /64 for a point
to point link with two hosts on it, please consider that the relative
difference in waste between a /64 with 10,000 hosts on it and a /64 with
2 hosts on it is less than the rounding error in claiming that a /64 is
roughly 18 quintillion addresses. In fact, it’s orders of magnitude less.
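
Worked out, since that claim sounds extravagant (a quick sketch of the arithmetic):

    # The rounding-error comparison, made concrete (a sketch). Calling a
    # /64 "roughly 18 quintillion" understates it by ~4.5e17 addresses;
    # the extra "waste" of a 2-host /64 versus a 10,000-host /64 is a
    # mere 9,998 addresses, about 13 orders of magnitude smaller.
    subnet = 2 ** 64                          # addresses in one /64
    rounding_error = subnet - 18 * 10 ** 18   # vs "roughly 18 quintillion"
    extra_waste = (subnet - 2) - (subnet - 10_000)
    print(f"/64 size:       {subnet:,}")
    print(f"rounding error: {rounding_error:,}")
    print(f"extra waste:    {extra_waste:,}")
    print(f"ratio:          {rounding_error / extra_waste:.1e}")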

> My wild guess is if we'd just waited a little bit longer to formalize
> IPng we'd've more seriously considered variable length addressing with
> a byte indicating how many octets in the address even if only 2
> lengths were immediately implemented (4 and 16.) And some scheme to
> store those addresses in the packet header, possibly IPv4 backwards
> compatible (I know, I know, but here we are!)

Unlikely. Variable-length addressing in fast switching hardware is “difficult”
at best. Further, if you only use an octet (which is what I presume you meant
by byte) for the length field, the maximum address length is fixed at either
255 or 256 octets, depending on whether you interpret 0 as 256 or as invalid,
unless you create other reserved values for that byte.
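
To illustrate the ambiguity, here’s a sketch of a parser for such a hypothetical format (nothing like this was ever standardized, and the names are mine):

    # Hypothetical variable-length address parser (illustrative only).
    # One length octet caps the address at 255 octets, or 256 if you
    # spend the value 0 on "means 256" instead of "invalid".
    def parse_address(buf: bytes, zero_means_256: bool = False) -> bytes:
        length = buf[0]
        if length == 0:
            if not zero_means_256:
                raise ValueError("length 0 is invalid in this interpretation")
            length = 256
        if len(buf) < 1 + length:
            raise ValueError("truncated address")
        return buf[1:1 + length]

    # A 4-octet (IPv4-sized) and a 16-octet (IPv6-sized) address:
    print(parse_address(bytes([4, 192, 0, 2, 1])).hex())
    print(parse_address(bytes([16]) + bytes(16)).hex())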

I think that 256-octet addressing would be pretty unworkable in modern hardware,
so you’d have to find some way of defining, and then changing over time, the
maximum allowed value in that field.

Now you’ve got all kinds of tables and data structures in all kinds of software
that either need to pre-configure for the maximum size or somehow dynamically
allocate memory on the fly for each session, and possibly more frequently than
that.
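
Quick arithmetic on the pre-configure option shows why neither choice is attractive (a sketch):

    # Back-of-the-envelope table sizing (a sketch). Pre-configuring
    # every slot for the 256-octet maximum balloons even a modest
    # table; sizing per entry means allocating on the fly per session.
    entries = 10_000
    for addr_octets in (4, 16, 256):
        kib = entries * addr_octets / 1024
        print(f"{addr_octets:3d}-octet slots: {kib:8.1f} KiB "
              f"for {entries:,} entries")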

You don’t have to dig very deep into the implementation details of
variable-length addressing to see that, even today, 20 years after the
decision was made, it’s not a particularly useful answer.

> And we'd've been all set, up to 256 bytes (2K bits) of address.

Not really. There’s a lot of implementation detail in there, and I don’t think
you’re going to handle 2 Kbit addresses very well on a machine with 32K of RAM
and 2MB of flash (e.g., ESP8266-based devices and many other IoT platforms).
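
A quick sketch of that arithmetic, taking the 32K RAM figure above:

    # Why 2-Kbit addresses hurt on constrained devices (a sketch,
    # assuming 32 KiB of RAM as above). Even a tiny neighbor cache of
    # 64 entries at 256 octets each eats half the RAM before buffers,
    # stack, or application state get a byte.
    ram = 32 * 1024        # bytes of RAM
    cache_entries = 64
    addr_octets = 256      # one 2-Kbit address
    used = cache_entries * addr_octets
    print(f"{used:,} of {ram:,} bytes ({used / ram:.0%}) just for addresses")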

> If wishes were horses...but I think what I'm saying here will be said
> again and again.

Not likely… At least not by anyone with credibility.

> Too many people answering every concern with "do you have any idea how
> many addresses 2^N is?!?!" while drowning out "do you have any idea
> how small that N is?

We may, someday, wish we had gone to some value of N larger than 128,
but I seriously doubt it will occur in my lifetime.

Owen



