Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

Owen DeLong owen at delong.com
Tue Feb 24 17:38:37 UTC 2015


As one of the authors involved in what eventually became RFC6598, this isn’t entirely accurate.

100.64/10 is intended as space for service providers to use in situations where additional shared address space is required but must be kept distinct from the private address space in use by their customers. Stacked NAT in a CGN scenario is merely the most common example of such a situation.

The application described is another example, though if the application provider's customers are behind ISPs that start doing CGN using RFC6598 space, some difficulties could arise that the application provider would have to be prepared to cope with.
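
For instance, the provider can at least detect when a peer's apparent source address falls inside the shared block and handle that case deliberately. A minimal sketch with Python's ipaddress module (the sample addresses are made up):

    import ipaddress

    SHARED = ipaddress.ip_network("100.64.0.0/10")   # RFC 6598 shared address space

    def maybe_behind_cgn(peer_ip: str) -> bool:
        """True if the peer's apparent address sits inside 100.64/10."""
        return ipaddress.ip_address(peer_ip) in SHARED

    print(maybe_behind_cgn("100.72.13.5"))   # True  - could be a CGN inside address
    print(maybe_behind_cgn("203.0.113.9"))   # False - ordinary global unicast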

Owen

> On Feb 23, 2015, at 10:52 , Benson Schliesser <bensons at queuefull.net> wrote:
> 
> Hi, Eric -
> 
> Bill already described the salient points. The "transition" space is meant to be used for cases where there are multiple stacked NATs, such as CGN combined with CPE-based NAT. In theory, if the NAT implementations support it, one could use it repeatedly by stacking NAT on top of NAT ad nauseam, but the wisdom of doing so is questionable. If one uses it like additional RFC1918 space then routing could become more difficult, specifically in the case where hosts (e.g. VPC servers) are numbered with it. This is true because, in theory, the transition space doesn't need to be routed on the "internal" network, which avoids having NAT devices hold conflicting routes, etc. Even if the edge NAT devices don't currently see conflicting routes to 100.64/10, if that changes in the future then client hosts may find themselves unable to reach the VPC hosts at that time.
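> 
> A rough way to watch for that is to compare whatever 100.64/10 block the VPC hosts are numbered from against the routes the edge NAT devices carry, along the lines of the Python sketch below (prefixes invented for the example):
> 
>     import ipaddress
> 
>     vpc_blocks  = [ipaddress.ip_network("100.64.0.0/18")]           # space the VPC hosts use
>     edge_routes = [ipaddress.ip_network(p) for p in                 # routes on the edge/NAT device
>                    ("10.0.0.0/8", "100.64.32.0/19")]
> 
>     for vpc in vpc_blocks:
>         for route in edge_routes:
>             if vpc.overlaps(route):
>                 print(f"conflict: {vpc} overlaps {route}")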
> 
> That being said, if you understand the risks that I described above, then it may work well for a "community of interest" type of inter-network that hosts non-global resources. From your description it sounds like that might be the situation you find yourself in. To be clear, it's not unwise to do so, but it does carry risk that needs to be evaluated (and documented).
> 
> Cheers,
> -Benson
> 
> 
>> William Herrin <bill at herrin.us>
>> February 23, 2015 at 12:58 PM
>> 
>> Hi Eric,
>> 
>> The main risk is more or less as you summarized it. Customer has no
>> firewall or originates the VPN directly from their firewall. Customer
>> buys a non-hosting commodity Internet product that uses carrier NAT to
>> conserve IP addresses. The customer's assigned address AND NETMASK
>> combined overlap some of the hosts you're trying to publish to them.
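>> 
>> To make that concrete: the failure case is a published host landing inside the
>> subnet the customer's interface believes is on-link. Roughly (Python sketch,
>> all numbers made up):
>> 
>>     import ipaddress
>> 
>>     # Address + netmask handed to the customer by a CGN-using carrier.
>>     customer_iface  = ipaddress.ip_interface("100.64.5.20/22")
>>     published_hosts = ["100.64.4.10", "100.80.1.25"]    # hosts you publish to them
>> 
>>     for host in published_hosts:
>>         if ipaddress.ip_address(host) in customer_iface.network:
>>             print(f"{host} looks on-link to the customer; it never reaches the tunnel")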
>> 
>> 
>> 
>> Mitigations for that risk:
>> 
>> Can you insist that the customer originate connections from inside
>> their firewall (on RFC1918 space)?
>> 
>> Most service providers using 100.64/10 either permit customers to opt
>> out (getting dynamic globally routable addresses) or offer customers
>> the opportunity to purchase static global addresses for a nominal fee.
>> Are you comfortable telling impacted customers that they have to do
>> so?
>> 
>> 
>> A secondary risk comes into play where a customer may wish to
>> interact with another service provider doing the same thing as you.
>> That essentially lands you back in the same problem you're having now
>> with RFC1918.
>> 
>> 
>> One more question you should consider: what is the nature of your
>> customers' networks? Big corps that tend to stretch through 10/8 won't
>> let their users originate VPN connections in the first place. They
>> also don't touch 100.64/10 except where someone is publishing a
>> service like yours. Meanwhile, home and SOHO users who are at liberty
>> to originate VPNs might currently hold a 100.64/10 address. But they
>> just about never use the off-bit /16s in 10/8. By off-bit I mean the
>> ones with 4 or 5 random 1-bits in the second octet.
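>> 
>> If it helps, those candidate blocks are easy to enumerate mechanically;
>> a quick Python one-off:
>> 
>>     # /16s in 10/8 whose second octet has 4 or 5 one-bits set - the blocks
>>     # home and SOHO gear essentially never lands on.
>>     candidates = [f"10.{o}.0.0/16" for o in range(256) if bin(o).count("1") in (4, 5)]
>>     print(len(candidates), candidates[:4])   # 126 of them; starts 10.15.0.0/16, 10.23.0.0/16, ...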
>> 
>> 
>> My opinion: The likelihood of collisions in 100.64/10 increases
>> significantly if you use those addresses on servers. I would confine my use to
>> client machines and try to put servers providing service to multiple
>> organizations on globally unique IPs. Confining 100.64/10 to client
>> machines, you're unlikely to encounter a problem you can't readily
>> solve.
>> 
>> Regards,
>> Bill Herrin
>> 
>> 
>> Eric Germann <ekgermann at cctec.com>
>> February 23, 2015 at 10:02 AM
>> Currently engaged on a project where they’re building out a VPC infrastructure for hosted applications.
>> 
>> Users access apps in the VPC, not the other direction.
>> 
>> The issue I'm trying to get around is that the customers who need to connect have multiple overlapping RFC1918 ranges (including ranges that overlap what was proposed for the VPC networks). Finding a hole that is big enough and not in use by someone else is nearly impossible, AND the customers could go through mergers which force them to renumber into yet more overlapping 1918 space.
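>> 
>> (The search involved looks roughly like the Python below, with example prefixes; with enough overlapping customers it simply stops finding anything:)
>> 
>>     import ipaddress
>> 
>>     # Prefixes already in use across the connecting customers (examples only).
>>     in_use = [ipaddress.ip_network(p) for p in
>>               ("10.0.0.0/12", "10.16.0.0/12", "172.16.0.0/14", "192.168.0.0/16")]
>> 
>>     def first_free(space="10.0.0.0/8", size=24):
>>         for candidate in ipaddress.ip_network(space).subnets(new_prefix=size):
>>             if not any(candidate.overlaps(used) for used in in_use):
>>                 return candidate
>>         return None   # no hole left at that size
>> 
>>     print(first_free())   # first /24 in 10/8 nobody is using, or None once it's all taken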
>> 
>> Initially, I was looking at doing something like this (example IPs):
>> 
>> 
>> Customer A (172.28.0.0/24) <—> NAT to 100.127.0.0/28 <——> VPN to DC <——> NAT from 100.64.0.0/18 <——> VPC Space (was 172.28.0.0/24)
>> 
>> Classic overlapping subnets on both ends with allocations out of 100.64.0.0/10 to NAT in both directions. Each sees the other end in 100.64 space, but the mappings can get tricky and hard to keep track of (especially if you’re not a network engineer).
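>> 
>> (Roughly, each direction needs its own translation table per customer, which is where the bookkeeping pain comes from. A sketch with the example addresses above; the 100.64.0.0/28 pool is picked arbitrarily out of the /18:)
>> 
>>     import ipaddress
>> 
>>     # Customer -> DC direction: customer hosts hidden behind 100.127.0.0/28.
>>     cust_hosts   = ["172.28.0.10", "172.28.0.11", "172.28.0.12"]
>>     cust_pool    = ipaddress.ip_network("100.127.0.0/28").hosts()
>>     cust_mapping = {real: str(nat) for real, nat in zip(cust_hosts, cust_pool)}
>> 
>>     # DC -> customer direction: VPC hosts presented out of 100.64.0.0/18.
>>     vpc_hosts   = ["172.28.0.10", "172.28.0.50"]   # the very same 172.28.0.0/24 again
>>     vpc_pool    = ipaddress.ip_network("100.64.0.0/28").hosts()
>>     vpc_mapping = {real: str(nat) for real, nat in zip(vpc_hosts, vpc_pool)}
>> 
>>     print(cust_mapping)   # one table per direction, per customer
>>     print(vpc_mapping)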
>> 
>> 
>> In spitballing this, the boat hasn't sailed so far that we can't ask, "Why not use 100.64/10 in the VPC itself?"
>> 
>> Then, the customer would be allocated a /28 or larger (depending on need) to NAT to on their side, and the translation happens only once. After that, there is no more NAT for the VPC and it boils down to firewall rules. Their device needs to NAT outbound before it fires traffic down the tunnel, which pfSense and ASAs appear to be able to do.
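>> 
>> (Keeping those per-customer /28s collision-free is easy if they all come out of one block reserved for the purpose; a sketch, parent block chosen purely as an example:)
>> 
>>     import ipaddress
>> 
>>     # Hand each customer the next /28 NAT pool out of a block reserved for it.
>>     parent  = ipaddress.ip_network("100.127.0.0/17")
>>     subnets = parent.subnets(new_prefix=28)
>> 
>>     allocations = {customer: next(subnets) for customer in ("cust-a", "cust-b", "cust-c")}
>>     print(allocations)   # cust-a: 100.127.0.0/28, cust-b: 100.127.0.16/28, cust-c: 100.127.0.32/28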
>> 
>> I prototyped this up over the weekend with multiple VPCs in multiple regions and it "appears" to work fine.
>> 
>> From the operator community, what are the downsides?
>> 
>> Customers are businesses on dedicated business services vs. consumer cable modems (although there are a few on business class cable). Others are on MPLS and I’m hashing that out.
>> 
>> The only downside I can see is if the customer's service provider puts their external interface in 100.64 space. However, this approach would give a more-specific route in that space, so traffic for their allocated customer block (/28) should fire down the tunnel rather than out their external side.
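>> 
>> (That's just longest-prefix match doing its job; a toy illustration with invented prefix lengths:)
>> 
>>     import ipaddress
>> 
>>     routes = {
>>         ipaddress.ip_network("100.64.8.0/22"):  "carrier-facing interface",  # CGN-assigned on-link block
>>         ipaddress.ip_network("100.64.8.16/28"): "VPN tunnel",                # allocated /28 for this site
>>     }
>> 
>>     def chosen_path(dest: str) -> str:
>>         dst     = ipaddress.ip_address(dest)
>>         matches = [net for net in routes if dst in net]
>>         return routes[max(matches, key=lambda net: net.prefixlen)]   # longest prefix wins
>> 
>>     print(chosen_path("100.64.8.20"))   # "VPN tunnel" - the /28 beats the broader on-link block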
>> 
>> Thoughts and thanks in advance.
>> 
>> Eric
>> 
>> 



