Waste will kill IPv6 too

bzs at theworld.com
Thu Dec 28 23:35:09 UTC 2017


On December 28, 2017 at 13:31 owen at delong.com (Owen DeLong) wrote:
 > 
 > > On Dec 28, 2017, at 11:14 , bzs at theworld.com wrote:
 > > 
 > > 
 > > Just an interjection but the problem with this "waste" issue often
 > > comes down to those who see 128 bits of address vs those who see 2^128
 > > addresses. It's not like there was ever anything close to 4 billion
 > > (2^32) usable addresses with IPv4.
 > 
 > Part of this is correct (the last sentence). There were closer to 3.2 billion
 > usable IPv4 unicast addresses.

Maybe "usable" doesn't quite capture the problem. If someone grabs a
/8 and makes little use of it (as I believe has happened several
times), the issue isn't just whether those addresses are usable but
whether they're available to anyone other than the /8 holder.

In any case, I'm not sure where that goes other than to note that
address space allocations are often sparsely utilized.

 > 
 > > We have entire /8s which are sparsely populated so even if they're 24M
 > > addrs that's of no use to everyone else. Plus other dedicated uses
 > > like multicast.
 > 
 > Well, a /8 is actually only 16.7 million, not 24M, but I’m not sure that
 > matters much to your argument.

Oops, right, 24 bits, 16M.
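
For the record, the arithmetic is easy to check in Python:

    # A /8 leaves 32 - 8 = 24 host bits:
    print(2 ** 24)   # 16777216 -- about 16.8M addresses, not 24M
    print(2 ** 32)   # 4294967296 -- the full 32-bit space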

 > 
 > > So the problem is segmentation of that 128 bits which makes it look a
 > > lot scarier because 128 is easy to think about, policy-wise, while
 > > 2^128 isn’t.
 > 
 > Sure, but that’s intended in the design of IPv6. There’s really no need
 > to think beyond 2^64 because the intent is that a /64 is a single subnet
 > no matter how many or how few machines you want to put on it.
 > 
 > Before anyone rolls out the argument about the waste of a /64 for a point
 > to point link with two hosts on it, please consider that the relative
 > difference in waste between a /64 with 10,000 hosts on it and a /64 with
 > 2 hosts on it is less than the rounding error in claiming that a /64 is
 > roughly 18 quintillion addresses. In fact, it’s orders of magnitude less.
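
For concreteness, Owen's rounding-error point checks out; a quick
Python back-of-envelope (my numbers, just restating the quoted claim):

    subnet = 2 ** 64                  # addresses in one /64
    slop = subnet - 18 * 10 ** 18     # error in "roughly 18 quintillion"
    waste_delta = 10_000 - 2          # extra addresses the busy /64 uses
    print(format(slop, ".2e"))        # 4.47e+17
    print(slop // waste_delta)        # ~4.5e13: 13 orders of magnitude apart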

My worry is pieces of the /64 space getting allocated for some
specific use, or withheld from allocation entirely. For example: hey,
ITU, here's half our /64s, it's only fair...and then those allocations
aren't generally available (e.g., they go only to national-level
providers, as is the ITU's mission).

So the problem isn't for someone who already holds a /64, any more
than it is for people who are OK with whatever IPv4 space they
currently hold.

It's how one eventually runs out of new /64s to allocate, just as we
have with, say, IPv4 /16s. If you hold one or more /16s and that's
enough space for your operation, then there's no problem. If you need
a new /16 (again, IPv4) right now, that's likely a problem.

That's where 128 bits starts to feel smaller, and 2^128 addresses a
little beside the point, if you can't get a prefix of the size you
need.

 > 
 > > My wild guess is if we'd just waited a little bit longer to formalize
 > > IPng we'd've more seriously considered variable length addressing with
 > > a byte indicating how many octets in the address even if only 2
 > > lengths were immediately implemented (4 and 16.) And some scheme to
 > > store those addresses in the packet header, possibly IPv4 backwards
 > > compatible (I know, I know, but here we are!)
 > 
 > Unlikely. Variable length addressing in fast switching hardware is “difficult”
 > at best. Further, if you only use an octet (which is what I presume you meant
 > by byte) to set the length of the variable-length address, you have a fixed
 > maximum address length of 255 or 256 octets, unless you create other reserved
 > values for that byte, and depending on whether you interpret 0 to mean 256 or
 > to be invalid.

I was thinking up to 256 octets of address (probably 255 or 254 at
most), not bits.

One could effect addresses that aren't a whole number of octets (e.g.,
63 bits) in other ways when needed, as we do now, by carrying them
inside the next larger address field (anything bigger than 63 bits
fits in 128).
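
To sketch the length-octet scheme concretely (a purely hypothetical
wire encoding, not any existing protocol): one length octet followed
by that many address octets, so 4- and 16-octet addresses coexist
today and longer ones remain representable later. In Python:

    def encode_addr(addr: bytes) -> bytes:
        # One length octet, then the address itself.
        if not 1 <= len(addr) <= 255:
            raise ValueError("address must be 1..255 octets")
        return bytes([len(addr)]) + addr

    def decode_addr(buf: bytes) -> tuple[bytes, bytes]:
        # Returns (address, rest of buffer).
        n = buf[0]
        if n == 0 or len(buf) < 1 + n:
            raise ValueError("malformed address field")
        return buf[1:1 + n], buf[1 + n:]

    v4 = encode_addr(bytes([192, 0, 2, 1]))   # 4 octets -> 5 on the wire
    v6 = encode_addr(bytes(16))               # 16 octets -> 17 on the wire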

 > 
 > I think that 256 octet addressing would be pretty unworkable in modern hardware,
 > so you’d have to find some way of defining and then over time changing what the
 > maximum allowed value in that field could be.

Yes, it would be room to grow. For now we might say that a core
router is only obliged to route 4- or 16-octet addresses.

But if the day came when we needed 32 octets, it wouldn't require a
packet redesign, only throwing some "switch" that says OK, we're now
routing 4/16/32-octet addresses.

Probably a single router command or two on a capable router.
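
In other words (a hypothetical sketch, not any vendor's actual CLI or
forwarding code), the "switch" amounts to widening a set of permitted
lengths:

    # Hypothetical policy knob in a router's forwarding plane.
    ALLOWED_LENGTHS = {4, 16}      # today: IPv4- and IPv6-sized only

    def routable(addr: bytes) -> bool:
        return len(addr) in ALLOWED_LENGTHS

    ALLOWED_LENGTHS.add(32)        # "throw the switch" when 32 octets arrive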

 > 
 > Now you’ve got all kinds of tables and data structures in all kinds of software
 > that either need to pre-configure for the maximum size or somehow dynamically
 > allocate memory on the fly for each session and possibly more frequently than
 > that.

That's life in the fast lane! What can I say? The other choice is
that we run out of address space. One would hope there'd be some
lead-in time to any expansion, probably years of warning that it's
coming.

Or we have to implement IPvN (N > 6) with a new packet design, which
is almost certainly even more painful.

At least that variable-length field would be a standing reminder that
one day the address might be larger than 16 octets, and adapting won't
take 20+ years next time.

 > 
 > You don’t have to dig very deep into the implementation details of variable
 > length addressing to see that, even today, 20 years after the decision was
 > made, it’s still not a particularly useful answer.

It's only important if one tends to agree that the day may come in the
foreseeable future when 16 octets is not sufficient.

One only gets choices, not ideals: (a) run out of address space;
(b) redesign the packet format entirely; or (c) use a variable-length
address that might well be sufficient for 100 years.

Each has its trade-offs.

 > 
 > > And we'd've been all set, up to 256 bytes (2K bits) of address.
 > 
 > Not really. There’s a lot of implementation detail in there and I don’t think
 > you’re going to handle 2Kbit addresses very well on a machine with 32K of RAM
 > and 2MB of flash (e.g., ESP8266-based devices and many other IoT platforms).

Today's smartphones are roughly as powerful as the multi-million-dollar
supercomputers of ~20 years ago. No one thought that would happen
either.
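
To be fair, the constraint Owen points at is real; a quick
back-of-envelope using his numbers:

    ADDR_BYTES = 2048 // 8          # a 2Kbit address is 256 bytes
    HEADER = 2 * ADDR_BYTES         # 512 bytes per packet, src + dst alone
    RAM = 32 * 1024                 # the 32K-of-RAM device in question
    print(32 * ADDR_BYTES / RAM)    # 0.25: a 32-entry neighbor cache is 1/4 of RAM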

But as I said it comes down to choices. Running out of address space
is not very attractive either as we have seen.

 > 
 > > If wishes were horses...but I think what I'm saying here will be said
 > > again and again.
 > 
 > Not likely… At least not by anyone with credibility.
 > 
 > > Too many people answering every concern with "do you have any idea how
 > > many addresses 2^N is?!?!" while drowning out "do you have any idea
 > how small that N is?"
 > 
 > We may, someday, wish we had gone to some value of N larger than 128,
 > but I seriously doubt it will occur in my lifetime.

I'll hang onto that comment :-)

 > 
 > Owen
 > 

-- 
        -Barry Shein

Software Tool & Die    | bzs at TheWorld.com             | http://www.TheWorld.com
Purveyors to the Trade | Voice: +1 617-STD-WRLD       | 800-THE-WRLD
The World: Since 1989  | A Public Information Utility | *oo*


