Waste will kill IPv6 too

Lyndon Nerenberg lyndon at orthanc.ca
Fri Dec 29 00:57:57 UTC 2017


> On Dec 28, 2017, at 3:28 PM, Brock Tice <brock at bmwl.co> wrote:
> 
> We are currently handing out /52s to customers. Based on a reasonable
> sparse allocation scheme that would account for future growth that
> seemed like the best option.

Could you detail the reasoning behind your allocation scheme?  I.e., what assumptions are you making about the hardware your customers will deploy?  How will those devices need to be isolated from one another?  What data fed the model you used to come up with those numbers?

I ask because I have seen many ISPs advocate for customer allocations smaller than a /48, but I haven't seen anyone present the model they used to come up with those numbers.  I really am curious to know the assumptions and rationale behind the various allocation schemes ISPs are coming up with.

> I can't really see how /52 is too small for a residential customer. I
> know originally it was supposed to be /48 but after doing a bit of
> reading I think many people have admitted there is room for nuance.

What reading?  Can you provide pointers to the documents you were reading?  Again, I'm curious to understand how and why ISPs are making these decisions.

Also, the fact that you "can't see it" doesn't mean they (or someone else) can't or won't.  An ISP's job is to shovel packets around.  No more, no less.

> Do you think I could go to ARIN and say, well, we haven't used hardly
> any of this but based on such-and-such allocation scheme, it would be
> much better if you gave us a /32 instead of a /36?

Hardly used any of what?  Are you talking about the density of customer hosts inside each of these /64 subnets?  This is where I think the biggest misunderstanding of the IPv6 allocation strategy comes from.

Ask yourself this: do you think the intention was to have 2^64 hosts on a single LAN segment?  Can you imagine any practical switch fabric that could handle that?  (I'd be curious to know the size of the largest 10-Gig Ethernet LAN, measured in number of hosts, that anyone has deployed.)  The number of hosts per /64 will always be limited by the associated switch hardware.  This will be true until the universe collapses, I suspect.
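To put rough numbers on that (the MAC-table figure below is just an illustrative assumption about a large L2 switch, not anything from this thread), a quick check in Python:

    # Scale check: a single /64 can number 2^64 hosts.
    # The MAC table size is a hypothetical figure for a big L2 switch,
    # used only to show how absurd the ratio is.
    hosts_per_64 = 2 ** 64                        # 18,446,744,073,709,551,616
    assumed_mac_table_entries = 128 * 1024        # hypothetical 128K-entry table
    print(hosts_per_64 // assumed_mac_table_entries)   # ~1.4e14 tables' worth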

> Also, does anyone know whether ARIN is using sparse allocation, such
> that if we go back later and ask for more they will just increase the
> size of our allocation starting from the same point?

You could just ask them.  But the policies for ISP allocations (last time I read them) make it pretty straightforward for you to get a block that fits your growth needs for the foreseeable future.†

But really, if you are worried about having to advertise, say, eight IPv6 prefixes to the DFZ for all your allocations, haven't you just argued against the fragmented /52 allocations to your downstream customers?

You need to treat IPv6 addresses as being 64 bits long.  Those extra 64 bits on the right are just noise; ignore them.  Instead, think about how we can carve up the 2^61 /64 networks in the currently active /3 of global unicast space (2000::/3) among 2^32 people (roughly Earth's current population), each having 2^16 devices that each need their own network.  That works out to a densely allocated /48 for each person on the planet.  (Coincidence?)  But when we get to the point of filling up that /3, we still have five more /3s to work with.
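A quick back-of-the-envelope check of that arithmetic, sketched in Python (the figures are just the ones used above, not anything from ARIN policy):

    # /64 networks in a /3, treating an address as 64 network bits.
    networks_in_slash3 = 2 ** (64 - 3)        # 2^61
    people = 2 ** 32                          # population figure used above
    networks_per_person = 2 ** 16             # one /48 = 2^16 /64 subnets
    needed = people * networks_per_person     # 2^48 /64 networks in total
    print(needed <= networks_in_slash3)       # True
    print(64 - 16)                            # 48: 2^16 /64s per person is a /48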

Now think about scaling.  If the population doubles, we're down to four spare /3s.  If that doubled population then doubles its number of devices, we're down to two spare /3s.  If the population doubles again, there will be no civilization left, let alone an Internet.  Etc.  So realistically, the current address space allocation policies can handle a doubling of the planet's population, with each person having a quarter of a million addressable nodes, each node having its own /64 to address individual endpoints within whatever that 'node' represents.  Just think: 2^64 port-443 HTTPS servers per "thing."  Isn't this the utopia we've been seeking?
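That scaling argument, sketched as a loop in Python (this takes the paragraph's implicit premise that the base case, allocation overhead included, consumes roughly one full /3; that premise is mine, added only to make the steps explicit):

    # Demand measured in units of "base-case /3s": the base case fills one /3,
    # leaving five spare, and each doubling of demand consumes accordingly.
    demand_in_slash3s = 1
    for step in ("population doubles", "devices per person double"):
        demand_in_slash3s *= 2
        print(step, "->", 6 - demand_in_slash3s, "spare /3s")
    # population doubles -> 4 spare /3s
    # devices per person double -> 2 spare /3s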

I'm pretty confident IPv6 as a protocol (and, really, IP as a networking concept) will be dead *long* before we run out of address space.  Not because we'll run out of bits allocated to hosts, or subnets, or ports, but because the current topology of routed networks won't fit what we want or need to do in the future.  (My prediction is that everything will move to ad hoc meshes, with no control planes at all.  But that's completely out of scope for this discussion.)


--lyndon

† https://www.arin.net/resources/ipv6_planning.html states that ISP allocations larger than a /32 are sized based on a /48-per-customer-site allocation policy.




