Dual stack IPv6 for IPv4 depletion

Mel Beckman mel at beckman.org
Thu Jul 9 02:34:58 UTC 2015


None of those applications benefit from bit-mapping the address plan. They can be done with IPv6 as it stands today. This is where the "more addresses than atoms" argument you don't want us to make comes in. :)

-mel via cell

> On Jul 8, 2015, at 7:27 PM, Israel G. Lugo <israel.lugo at lugosys.com> wrote:
> 
> I'm sorry Mel, I only now saw your email.
> 
> I'll quote from my reply to Owen, for the motivation behind my question:
> 
>> Speaking of IPv6's full potential: we're considering 32 subscriptions
>> per client. I've read people thinking of things like IPv6-aware soda
>> cans. Refrigerators. Wearables. Cars and their internal components...
>> You could have the on-board computer talking to the suspension via IPv6,
>> and reporting back to the manufacturer or whatnot.
>> 
>> Personally, I'm not particularly fond of the whole "refrigerators
>> ordering milk bottles" craze, but hey, it may very well become a thing.
>> And other stuff we haven't thought of yet.
>> 
>> My point is: we're changing to a brand new protocol, and only now
>> beginning to scratch its full potential. Yes, everything seems very big
>> right now. Yes, 128 bits can be enough. Even 64 bits could be more than
>> enough. But why limit ourselves? Someone decided (correctly) that 64
>> would be too limiting.
>> 
>> Please don't fall into the usual "you've got more addresses than
>> atoms"... I've heard that, and am not disputing it. I'm not just talking
>> about individual addresses (or /48's).
>> 
>> What I am proposing here, as food for thought, is: what if we had e.g.
>> 192 bits, or 256? For one, we could have much sparser allocations. Heck,
>> we could even go as far as having a bit for each day of the month. What
>> would this be good for? I don't know. Perhaps someone may come up with a
>> use for it.
> 
> Regards,
> Israel
> 
> 
> 
>> On 07/09/2015 02:46 AM, Mel Beckman wrote:
>> Israel,
>> 
>> A better question is: why bit-map your allocation plan at all? That seems ill-advised, since you must arbitrarily allocate huge swaths of IP space equally between category classes when it's rarely efficient to do so. For example, two bits for network infrastructure is wasteful, because infrastructure addresses are likely far fewer than any customer class. Similarly, three bits for geographic region on the /38 boundary illogically assumes all geographic regions are the same size.
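>>
>> A rough back-of-the-envelope sketch (Python, purely to illustrate the arithmetic; the 2-bit network-type field is the one from the plan quoted below):
>>
>>   # Carving a fixed 2-bit category field out of a /32 hands every
>>   # category exactly one quarter of the space (a /34), whether it is
>>   # infrastructure (tiny) or a customer class (huge).
>>   isp_prefix = 32
>>   type_bits = 2
>>   per_category = isp_prefix + type_bits           # /34 per category
>>   sites_per_category = 2 ** (48 - per_category)   # /48 end-sites in each
>>   print(per_category, sites_per_category)         # 34 16384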
>> 
>> There isn't a good routing reason for a bitwise address structure, since nobody routes that way. The only other rationale I can think of is human mnemonic value, but 128-bit addresses are not very amenable to such mnemonics (::DEAD:BEEF notwithstanding :)
>> 
>> -mel beckman
>> 
>>> On Jul 8, 2015, at 6:32 PM, Owen DeLong <owen at delong.com> wrote:
>>> 
>>> 
>>>> Let's say I'm a national ISP, using 2001:db8::/32. I divide it like so:
>>>> 
>>>> - I reserve 1 bit for future allocation schemes, leaving me a /33;
>>>> - 2 bits for network type (infrastructure, residential, business, LTE): /35
>>>> - 3 bits for geographic region, state, whatever: /38
>>>> - 5 bits for PoP, or city: /43
>>>> 
>>>> This leaves me 5 bits for end sites: no joy.
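>>>>
>>>> A minimal sketch of that arithmetic (Python, purely illustrative):
>>>>
>>>>   # Top-down split of 2001:db8::/32 as described above.
>>>>   prefix = 32
>>>>   prefix += 1   # future allocation schemes -> /33
>>>>   prefix += 2   # network type              -> /35
>>>>   prefix += 3   # geographic region         -> /38
>>>>   prefix += 5   # PoP / city                -> /43
>>>>   end_site_bits = 48 - prefix
>>>>   print(prefix, end_site_bits, 2 ** end_site_bits)   # 43 5 32
>>>>   # Only 32 /48 end-sites left per PoP.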
>>> Here’s the problem… You started at the wrong end and worked in the wrong direction in your planning.
>>> 
>>> Let’s say you’re a national ISP. Let’s say you want to support 4 levels of aggregation.
>>> Let’s say that at the lowest level (POP/City) you serve 50,000 end-sites in your largest POP/City. (16 bits)
>>> Let’s say you plan to max out at 32 POPs/Cities per region (your number from above) (5 bits)
>>> Let’s say you plan to divide the country into 8 regions (3 bits)
>>> Let’s say for some reason you want to break your aggregation along the lines of service class (infrastructure, residential, business)
>>>   as your top level division (rarely a good idea, but I’ll go with it for now) and that you have 4 service classes (2 bits)
>>> Further, let’s say you decide to set aside half your address space for “future allocation schemes”.
>>> 
>>> Each POP needs a /32.
>>> We can combine the Region/POP number into an 8-bit field — you need a /24 to cover all your Regions and POPs.
>>> You need 3 additional bits for your higher level sub-divisions. Let’s round to a nibble boundary and give you a /20.
>>> 
>>> With that /20, you can support up to 67 million end-sites in your first plan, still leaving 3/4 of your address space fallow.
>>> 
>>> (That’s at /48 per end-site, by the way).
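>>>
>>> The same bottom-up arithmetic as a quick Python sketch (illustrative only; the bit counts are the ones above):
>>>
>>>   # Work upward from the end-site toward the ISP allocation.
>>>   end_site = 48                        # /48 per end-site
>>>   pop = end_site - 16                  # 50,000 sites -> 16 bits -> /32 per POP
>>>   region = pop - 5                     # 32 POPs per region       -> /27 per region
>>>   all_regions = region - 3             # 8 regions                -> /24
>>>   isp = all_regions - 3                # service classes + future -> /21
>>>   isp -= isp % 4                       # round to a nibble        -> /20
>>>   plan_sites = 2 ** (2 + 3 + 5 + 16)   # end-sites the plan actually uses
>>>   print(pop, region, all_regions, isp, plan_sites)   # 32 27 24 20 67108864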
>>> 
>>> Now, let’s consider: 7 billion people, each of whom represents 32 different end-sites — 224 billion end-sites worldwide.
>>> 224,000,000,000 / 67,000,000 = 3,344 (rounded up) total ISPs requiring /20s to serve every possible end-site on the
>>> planet.
>>> 
>>> 
>>> There are 1,048,576 /20s total, so after allocating all the ISPs in the world /20s, we still have 1,045,232 remaining.
>>> 
>>> Let’s assume that every end-site goes with dual-address multi-homing (an IPv6 prefix from each provider).
>>> 
>>> We are now left with only 1,041,888 /20s. You still haven’t put a dent in it.
>>> 
>>> Even if we divide by 8 and just consider the current /3 being allocated as global unicast, you still have 130,236 free /20s
>>> left.
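>>>
>>> The free-pool arithmetic, as a short Python sketch (using the same rounded 67-million figure as above):
>>>
>>>   # How many /20-sized ISP blocks does the whole planet need?
>>>   end_sites = 7_000_000_000 * 32        # 224 billion end-sites
>>>   isps = -(-end_sites // 67_000_000)    # ceiling division -> 3,344 ISPs
>>>   total_20s = 2 ** 20                   # 1,048,576 /20s in all of IPv6
>>>   left_single = total_20s - isps        # 1,045,232 left
>>>   left_dual = total_20s - 2 * isps      # every site dual-homed -> 1,041,888
>>>   print(isps, left_single, left_dual, left_dual // 8)
>>>   # 3344 1045232 1041888 130236  (dividing by 8 restricts to the current /3)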
>>> 
>>>> Granted, this is just a silly example, and I don't have to divide my
>>>> address space like this. In fact, I really can't, if I ever want to have
>>>> more than 32 customers per city. But I don't think it's a very
>>>> far-fetched example.
>>> It’s not… It’s a great example of how not to plan your address space in IPv6.
>>> 
>>> However, if we repeat the same exercise in the correct direction, not only does each of your end-sites get a /48, you get the /20 you need in order to properly deploy your network. You get lots of space left over, and we still don’t make a dent in the IPv6 free pool. Everyone wins.
>>> 
>>>> Perhaps I'm missing something obvious here, but it seems to me that it
>>>> would've been nice to have these kinds of possibilities, and more. It
>>>> seems counterintuitive, especially given the "IPv6 way of thinking"
>>>> which is normally encouraged: "stop counting beans, this isn't IPv4".
>>> The problem is that you not only stopped counting beans, you also stopped counting the bean piles, and you lost track of just how big the pile you are making the smaller piles from really is.
>>> 
>>> I hope that this will show you a better way.
>>> 
>>> Owen
> 


