Credit to Digital Ocean for IPv6 offering

Owen DeLong owen at delong.com
Tue Jun 17 21:13:56 UTC 2014


On Jun 17, 2014, at 13:36 , Grzegorz Janoszka <Grzegorz at Janoszka.pl> wrote:

> On 2014-06-17 22:13, David Conrad wrote:
>> On Jun 17, 2014, at 12:55 PM, Grzegorz Janoszka <Grzegorz at Janoszka.pl> wrote:
>>> There are still applications that break with a subnet smaller than a /64, so all VPS providers probably have to use /64 addressing.
>> 
>> Wouldn't that argue for /64s?
> 
> A /64 netmask, but not a /64 per customer. There are applications which break if provided with a /80 or a /120, but I am not aware of an application requesting a /64 for itself.
> 
>>> /64 for one customer seems to be too much,
>> 
>> In what way? What are you trying to protect against? It can't be address exhaustion (there are 2,305,843,009,213,693,952 possible /64s in the currently used format specifier. If there are 1,000,000,000 customer assignments every day of the year, the current format specifier will last over 6 million years).
> 
> Too much hassle, such as an overly large router config. If you have 1000 customers, each with their own /64, you would have to have 1000 separate gateway IPs on your router interface plus 1000 local /64 routes.
> 
> -- 
> Grzegorz Janoszka
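
(As an aside, David's arithmetic above checks out if "the currently used format specifier" is taken to mean the 2000::/3 global unicast range, which holds a 61-bit pool of /64s; a quick shell sanity check:)

    $ echo $((2**61))                    # /64s available in 2000::/3
    2305843009213693952
    $ echo $(( 2**61 / 10**9 / 365 ))    # years at 10^9 assignments per day
    6317378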

Handling per-customer /64s is actually pretty easy. If I were structuring a VPS environment, I'd put a /56 or possibly a /52 on each physical server, depending on the number of virtual machines expected on it. Then, for each customer who got a VPS on that server, I'd create a bridge interface with a /64 assigned to that customer. Every VPS on that physical server belonging to the same customer would be put on the same /64.
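
As a rough sketch, on a Linux hypervisor that might look like the following (the 2001:db8: documentation prefix and all interface names are illustrative, not from any real deployment):

    # Say the physical server holds 2001:db8:0::/56 and customer 1
    # gets the first /64 out of it, on its own bridge:
    ip link add br-cust1 type bridge
    ip link set br-cust1 up
    ip addr add 2001:db8:0:1::1/64 dev br-cust1   # gateway in customer 1's /64

    # Every VPS belonging to customer 1 attaches to the same bridge:
    ip link set vnet0 master br-cust1             # customer 1, VPS A
    ip link set vnet1 master br-cust1             # customer 1, VPS B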

The router would route the /56 or /52 to the physical server. The hypervisor would have connected routes for the subordinate /64s and would send Router Advertisements (RAs) to give each VPS its default route.
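
Continuing the same illustrative sketch, the routing and RA side might look like this (radvd is just one common RA daemon; the prefixes and next-hop here are made up):

    # Upstream router: a single static route for the whole server, e.g.
    #   ipv6 route 2001:db8:0::/56 2001:db8:ff::2
    # where 2001:db8:ff::2 is the hypervisor's address on the transit link.

    # Hypervisor: enable IPv6 forwarding...
    sysctl -w net.ipv6.conf.all.forwarding=1

    # ...and advertise a default router on each customer bridge
    # (/etc/radvd.conf):
    interface br-cust1 {
        AdvSendAdvert on;            # VPSs on this bridge learn their default from RAs
        prefix 2001:db8:0:1::/64 {   # customer 1's /64
            AdvOnLink on;
            AdvAutonomous on;        # SLAAC within the customer's /64
        };
    };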

Very low maintenance, pretty straightforward and simple.

Why would you ever put multiple customers in the same subnet in IPv6? That's just asking for trouble if you ask me.

Owen



