AWS Elastic IP architecture

Tony Hain alh-ietf at tndh.net
Mon Jun 1 17:18:11 UTC 2015


>>> snip

> > What I read in your line of comments to Owen is that the service only does
> > a header swap once and expects the application on the VM to compensate.
> > In that case there is an impact on the cost of deployment and overall utility.
> 
> 'compensate' ? do you mean 'get some extra information about the real
> source address for further policy-type questions to be answered' ?

Yes. Since that is not a required step on a native machine, there would be development / extra configuration required. While people who are interested in IPv6 deployment would likely do the extra work, those who "just want it to work" would delay IPv6 services until someone created the magic. Unfortunately that describes most of the people who use hosted services, so external proxy / NAT approaches really do nothing to further any use of IPv6.
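
For the common HTTP case, that "extra configuration" might be as small as telling the server to trust the proxy's header. A rough sketch for Apache 2.4 using mod_remoteip, assuming the front end inserts X-Forwarded-For, with 203.0.113.10 standing in for whatever the proxy's address actually is:

  # sketch only -- recover the real client address from the proxy-supplied header
  LoadModule remoteip_module modules/mod_remoteip.so
  RemoteIPHeader X-Forwarded-For
  RemoteIPInternalProxy 203.0.113.10

Trivial for someone who cares, but it is exactly the kind of per-box tweak the "just want it to work" crowd never gets around to.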

> 
> I would hope that in the 'header swap' service there's as little overhead
> applied to the end system as possible... I'd like my apache server to answer
> v6 requests without having a v6 address-listening-port on my machine. For
> 'web' stuff 'X-forwarded-for' seems simple, but breaks for https :(

So to avoid the exceedingly simple config change of "Listen 80" rather than "Listen x.x.x.x:80" you would rather not open the IPv6 port? If the service's internal transport is really transparent, https would work for free. I don't have any data to base it on, but I always thought that scaling an e-commerce site was the primary utility in using a hosted VM service. If that is true, it makes absolutely no sense to do a proxy VIP thingy for IPv6 port 80 to fill the cart, then fail the connection when trying to check out. As IPv4 becomes more fragile with the additional layering of NATs, the likelihood of that situation goes up, causing even more people to want to turn off the IPv6 VIP. It is better for the service to appear to be down at the start than to have customers spend time and then fail at the point of gratification, because they are much more likely to forget about an apparent service outage than to forgive wasting their time.
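
To put that "exceedingly simple config change" in concrete terms (sketch only, with the documentation address 192.0.2.10 standing in for a real one):

  # typical v4-only binding shipped in a lot of images
  Listen 192.0.2.10:80

  # protocol-agnostic binding; on a dual-stack host this answers both
  # v4 and v6 (platform v6-mapped socket behavior permitting)
  Listen 80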

> 
> Oh, so what if the 'header swap' service simply shoveled the v6 into a gre
> (or equivalent) tunnel and dropped that on your doorstep?
> potentially with an 'apt-get install aws-tunnelservice'  ? I would bet in the
> 'vm network' you could solve a bunch of this easily enough, and provide a v6
> address inside the tunnel on the vm providing the services.
> 
> loadbalancing is a bit rougher (more state management) but .. is doable.

I think tunneling would be more efficient and manageable overall. I have not thought through the trade-offs between terminating it on the host vs. inside the VM, but gut feel says that for the end-user / application it might be better inside the VM so there is a clean interface, while for service manageability it would be better on the host, even though some information might get lost in the interface translation. As long as the IP header that the VM stack presents to the application is the same as the one presented to the VIP (applies outbound as well), the rest is a design detail that is best left to each organization.
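
Something like the following inside the VM is roughly what I have in mind, assuming the provider terminates the other end of the tunnel and delegates a prefix (all addresses below are documentation placeholders, and the provider side is the part the hypothetical 'apt-get install aws-tunnelservice' would automate):

  # GRE tunnel to the (placeholder) provider endpoint, sourced from the VM's private v4 address
  ip tunnel add aws6 mode gre remote 203.0.113.1 local 10.0.0.5 ttl 255
  ip link set aws6 up

  # put the delegated (placeholder) prefix on the tunnel and send v6 out through it
  ip -6 addr add 2001:db8:0:1::2/64 dev aws6
  ip -6 route add default via 2001:db8:0:1::1 dev aws6

Terminating the same thing on the host instead just moves those few commands into the hypervisor's network config and hands the VM a plain v6 interface, which is exactly where the clean-interface vs. manageability trade-off shows up.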

Tony




