Companies using public IP space owned by others for internal routing

Owen DeLong owen at delong.com
Thu Dec 21 15:54:04 UTC 2017


> On Dec 18, 2017, at 15:09, William Herrin <bill at herrin.us> wrote:
> 
> On Sun, Dec 17, 2017 at 11:31 PM, Eric Kuhnke <eric.kuhnke at gmail.com> wrote:
> 
>> Some fun examples of the size of IPv6:
>> 
>> https://samsclass.info/ipv6/exhaustion-2016.htm
>> 
>> https://www.reddit.com/r/theydidthemath/comments/2qxgxw/self_just_how_big_is_ipv6/
> 
> 
> Hi Eric,
> 
> Lies, damn lies and statistics. Both projections assume that IPv6 addresses
> are assigned the same way we assign IPv4 addresses. They are not.
> 
> There are several practices which consume IPv6 at a drastically higher rate
> than IPv4. The most notable is the assignment of a /64 to every LAN. Your
> /26 LAN that used to consume 2^6 IP addresses? Now it consumes 2^64. Used to
> consume RFC1918 addresses? Now it takes 2^64 of the global IPv6 addresses.
> 
> Why did we need a /64 for each LAN? So we could incorporate the Ethernet
> MAC address into the IP address. Only we can't actually do that because it
> turns out to be crazy insecure. Nevertheless, the 3 computers in your
> basement will still consume 2^64 IPv6 addresses between them. But hey,
> what's 20 orders of magnitude between friends?
> 
> We have ISPs that have received allocations of entire /19s. A /19 in IPv6
> is exactly the same percentage of the total address space as a /19 in IPv4.
> Before considering reserved addresses, it's 1/2^19th of the total address
> space. For a single ISP. Think about it.

Sure… However, in IPv6 we have only a few very large ISPs that have
received /19s (~1/524,288 of the total space each), unlike IPv4, where
thousands upon thousands of ISPs, and even some end users, hold more
than a /19.

Do you really think we’re anywhere near likely to have 500 ISPs
worldwide that get /19s, let alone 524,288?

An IPv6 /19 provides enough addressing to give out 2^29, or roughly
536 million, customer /48s. Even if we allow for pretty large overhead,
there’s still plenty of address space there for ~100 million customer
/48s. Let’s round the current population of ~7 billion up to 10 billion
for the sake of conservatism; even then, we only need roughly 100 ISPs
that size to handle everyone on the planet. If we allow for some
overlap and then quadruple the requirement to cover content-side needs
in addition to eyeballs, that’s still fewer than 1,000 /19-sized
allocations needed. So, even at that rate, we’ve still got 523,288
/19s left.
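
Back-of-the-envelope, in Python (a sketch; the ~100 million customer
/48s per /19 after overhead is my working assumption from above):

    total_19s = 2 ** 19            # /19s in the entire IPv6 space: 524,288
    sites_per_19 = 2 ** (48 - 19)  # /48s in one /19: 2^29 = 536,870,912
    per_isp = 100_000_000          # customer /48s per /19 after overhead (assumed)
    population = 10_000_000_000    # ~7 billion, rounded up for conservatism
    eyeball_isps = population // per_isp  # 100 /19-sized eyeball ISPs
    total_needed = 1_000           # with overlap and 4x content-side, rounded up
    print(total_19s - total_needed)       # 523,288 /19s still left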

> Meanwhile the IETF has learned nothing from the gargantuan waste that is
> 224.0.0.0/4 ($2 billion at current prices). They went and assigned FC00::/7.
> /7!! Almost 1% of the IPv6 address space gone in a single RFC.

As much as I consider the fc00::/7 reservation to be ridiculous, the reality
is that RFC-1918 wasn’t that much short of the 1% mark in IPv4, either.
The reservation of fc00::/7 is much more analogous to RFC-1918 than to
224.0.0.0/4. In fact, ff00::/8 (multicast) is the analog of 224.0.0.0/4,
and I think that use makes sense in both cases. While multicast didn’t
take off or work out well in IPv4, a smaller address space would only
have made that worse.
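
Putting numbers on that comparison (a quick sketch, using the block
sizes from RFC 1918 and RFC 4193):

    rfc1918 = 2**24 + 2**20 + 2**16  # 10/8 + 172.16/12 + 192.168/16
    print(rfc1918 / 2**32)           # ~0.0042, i.e. ~0.42% of IPv4
    print(1 / 2**7)                  # fc00::/7: ~0.78% of IPv6
    print(1 / 2**4)                  # 224.0.0.0/4: 6.25% of IPv4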

I think that your real target was 240.0.0.0/4, which was set aside for
experimental and other undefined purposes and never really got used for
much of anything.

Unfortunately, if you want to claim the IETF didn’t learn the lesson
there, that’s a harder case to make, since there’s no such reservation
in IPv6.

> I haven't attempted to compute the actual rate of IPv6 consumption but it's
> not inconceivable that we could exhaust them by the end of the century
> through sheer ineptitude.

Well, to the best of my knowledge, no RIR has yet requested a second /12.
I’ll be surprised if we burn through the first /3 using current allocation
practices within my lifetime. If we do, then I’ll join your quest to burden
IPv6 with more restrictive allocation practices for the 5 virgin /3s
and the remaining space in the other 2.
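
For scale (a sketch; as far as I know, each RIR still holds just the
single /12 it received from IANA):

    twelves_per_3 = 2 ** (12 - 3)  # /12s in 2000::/3: 512
    rirs = 5                       # one /12 each from the initial IANA allocations
    print(twelves_per_3 - rirs)    # 507 /12s of the first /3 still unallocated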

> On the plus side, we're mostly only screwing around with 2000::/3 right
> now. After we burn through that in the next 20 years, we can if we so
> desire change the rules for how (and how quickly) we use 4000::/3.

However, even if your prediction somehow comes true and we don’t change
the rules, 20 years per /3 gives us roughly a 120-year timeframe for
running out of the existing /3 plus the remaining 5 /3s that have not
been touched, without even invading ::/3 or e000::/3 (the first and
last /3s, which each have some carve-outs but remain mostly free space
as well).
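
The arithmetic there is simple (a sketch, taking your 20-years-per-/3
burn rate as a given):

    years_per_3 = 20    # your estimate for burning through 2000::/3
    usable_3s = 1 + 5   # the current /3 plus the five untouched ones
    print(years_per_3 * usable_3s)  # 120 years, before touching ::/3 or e000::/3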

If, within the next 120 years, we don’t end up needing to fix other
things and replace the codebase with something that would allow us to
redo the address space anyway, I’ll be quite surprised.

Owen



