AWS Elastic IP architecture

Christopher Morrow morrowc.lists at gmail.com
Mon Jun 1 15:52:15 UTC 2015


On Mon, Jun 1, 2015 at 11:41 AM, Tony Hain <alh-ietf at tndh.net> wrote:
>
>
>> -----Original Message-----
>> From: NANOG [mailto:nanog-bounces at nanog.org] On Behalf Of
>> Christopher Morrow
>> Sent: Monday, June 01, 2015 7:24 AM
>> To: Matt Palmer
>> Cc: nanog list
>> Subject: Re: AWS Elastic IP architecture
>>
>> On Mon, Jun 1, 2015 at 1:19 AM, Matt Palmer <mpalmer at hezmatt.org>
>> wrote:
>> > On Sun, May 31, 2015 at 10:46:02PM -0400, Christopher Morrow wrote:
>> >> So... ok. What does it mean, for a customer of a cloud service, to be
>> >> ipv6 enabled?
>> >
>> > IPv6 feature-parity with IPv4.
>> >
>> > My must-haves, sorted in order of importance (most to least):
>> >
>> >> o Is it most important to be able to terminate ipv6 connections (or
>> >> datagrams) on a VM service for the public to use?
>> >>
>>
>> and would a headerswapping 'proxy' be ok? there's (today) a 'header
>> swapping proxy' doing 'nat' (sort of?) for you, so I imagine that whether the
>> 'headerswapping' is v4 to v4 or v6 to v4 you get the same end effect:
>> "People can see your kitten gifs".
>>
>> >> o Is it most important to be able to address every VM you create with
>> >> an ipv6 address?
>>
>> why is this bit important, though? I think I see folk get hung up on
>> this, but I can't figure out WHY it's as important as folk seem to want it to be?
>>
>> all the vms have names, you end up using the names not the ips... and thus
>> the underlying ip protocol isn't really important? Today those names
>> translate to v4 public ips, which get 'headerswapped' into v4 private
>> addresses on the way through the firehedge at AWS. Tomorrow they may
>> get swapped from v6 to v4... or there may be v6 endpoints.
>>
>> >> o Is it most important to be able to talk to backend services
>> >> (perhaps at your prem) over ipv6?
>> >
>> > If, by "backend services", you mean things like RDS, S3, etc, this is
>> > in the right place.
>> >
>>
>> I meant 'your oracle financials installation at $HOMEBASE'. Things like
>> 'internal amazon services' to me are a named endpoint and:
>>   1) the name you use could be resolving to something different than the
>> external view
>>   2) it's a name not an ip version... provided you have the inny and it's an
>> outy, I'm not sure that what ip protocol you use on the RESTful request
>> matters a bunch.
>>
>> >> o Is it most important that administrative interfaces to the VM
>> >> systems (either REST/etc interfaces for managing vms or 'ssh'/etc) be
>> >> ipv6 reachable?
>> >>
>> >> I don't see, especially if the vm networking is unique to each
>> >> customer, that 'ipv6 address on vm' is hugely important as a
>> >> first/important goal. I DO see that landing publicly available
>> >> services on an ipv6 endpoint is super helpful.
>> >
>> > Being able to address VMs over IPv6 (and have VMs talk to the outside
>> > world over IPv6) is *really* useful.  Takes away the need to NAT anything.
>>
>> but the nat isn't really your concern right (it all happens magically for you)?
>> presuming you can talk to 'backend services' and $HOMEBASE over ipv6
>> you'd also be able to make connections to other v6 endpoints as well.
>> there's little difference REALLY between v4 and v6 ... and jabbing a
>> connection through a proxy to get v6 endpoints would work 'just fine'.
>> (albeit protocol limitations at the higher levels could be interesting if the
>> connection wasn't just 'swapping headers')
>>
>> >> Would AWS (or any other cloud provider that's not currently up on the
>> >> v6 bandwagon) enabling a loadbalanced ipv6 vip for your public
>> >> service (perhaps not just http/s services even?) be enough to relieve
>> >> some of the pressure on other parties and move the ball forward
>> >> meaningfully enough for the cloud providers and their customers?
>> >
>> > No.  I'm currently building an infrastructure which is entirely
>> > v6-native internally; the only parts which are IPv4 are public-facing
>> > incoming service endpoints, and outgoing connections to other parts of
>> > the Internet, which are proxied.  Everything else is talking amongst
>> > themselves entirely over IPv6.
>>
>> that's great, but I'm not sure that 'all v6 internally!' matters a whole bunch? I
>> look at aws/etc as "bunch of goo doing
>> computation/calculation/storage/etc" with some public VIP (v4, v6,
>> etc) that are well defined and which are tailored to your userbase's
>> needs/abilities.
>>
>> You don't actually ssh to 'ipv6 literal' or 'ipv4 literal', you ssh to
>> 'superawesome.vm.mine.com' and provide http/s (or whatever) services via
>> 'external-service-name.com'. Whether the 1200 vms in your private network
>> cloud are ipv4 or ipv6 isn't important (really) since they also talk to
>> each other via names, not literal ip numbers. There isn't NAT that you care
>> about there either, the name/ip translation does the right thing (or should)
>> such that 'superawesome.vm.availzone1.com' and
>> 'superawesome.vm.availzone2.com' can chat freely by name without
>> concerns for underlying ip version numbers used (and even without caring
>> that 'chrissawesome.vm.availzone1.com' is 10.0.0.1 as well).
>
> Look at the problem in the other direction, and you will see that addresses often matter. What if you want to deny ssh connections from a particular address range?

this sounds like a manage-your-vm question... keep that on v4 only
until native v6 gets to your vm. (short-term vs long-term solution
space)
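
For concreteness, the kind of policy Tony means is trivial when the
socket sees the real source, and meaningless behind a header-swapping
proxy. A minimal sketch in Python (the deny range is made up, nothing
AWS-specific here):

    # Range-based deny list: only works when the socket sees the real
    # source address. Behind a header-swapping proxy, client_address
    # is the proxy, not the client, so this check goes blind.
    import ipaddress, socket, socketserver

    DENIED = [ipaddress.ip_network("2001:db8:bad::/48")]  # invented range

    class Handler(socketserver.BaseRequestHandler):
        def handle(self):
            peer = ipaddress.ip_address(self.client_address[0])
            if any(peer in net for net in DENIED):
                return  # silently drop the connection
            self.request.sendall(b"hello\n")

    class V6Server(socketserver.TCPServer):
        address_family = socket.AF_INET6

    V6Server(("::", 2222), Handler).serve_forever()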


>The source isn't going to tell you the name it is coming from. What if you want to deploy an MTA on your VM and block connections from SPAM-A-LOT data centers? How do you do that when the header-swap function presents useless crap from an external proxy mapping function?
>

yup, 'not http' services are harder to deal with in a 'swap headers' world.
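
(For what it's worth, the usual escape hatch for non-http protocols
behind a TCP proxy is something like HAProxy's PROXY protocol: the
proxy prepends one text line carrying the original addresses before
the application payload. A rough sketch of the receiving side, v1
text form, assuming the proxy is actually configured to send it:

    # Read a PROXY protocol v1 preamble to recover the real client
    # address behind a TCP proxy. Sketch only; a real parser needs
    # proper error handling.
    def read_proxy_preamble(sock):
        line = b""
        while not line.endswith(b"\r\n") and len(line) < 108:
            line += sock.recv(1)
        # e.g. b"PROXY TCP6 2001:db8::1 192.0.2.10 54321 25\r\n"
        parts = line.decode("ascii").split()
        if not parts or parts[0] != "PROXY":
            raise ValueError("no PROXY preamble")
        _, proto, src, dst, sport, dport = parts
        return src, int(sport)

The app still has to be taught to call that, which is exactly the
"cost of deployment" Tony is pointing at.)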

> That said, if you double nat from the vip to the stack in a way that masks the internal transport of the service (so that a native stack on the VM behaves as if it is directly attached to the outside world), then it doesn't matter what the service internal transport is.
>

I was mostly envisioning this, but the header-swap seems easy for a
bunch of things (to me at least).

> What I read in your line of comments to Owen is that the service only does a header swap once and expects the application on the VM to compensate. In that case there is an impact on the cost of deployment and overall utility.

'compensate'? do you mean 'get some extra information about the real
source address so that further policy-type questions can be answered'?

I would hope that the 'header swap' service puts as little overhead
on the end system as possible... I'd like my apache server to answer
v6 requests without having a v6 address/listening-port on my machine.
For 'web' stuff 'X-Forwarded-For' seems simple, but it breaks for
https, because the proxy can't inject a header into an encrypted
stream without terminating the TLS session itself :(
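
(In the plain-http case the backend's side of that bargain is about
one line of header parsing. Toy sketch, trusting the proxy; the
header name is the conventional one, not anything AWS promises:

    # Recover the (possibly v6) client address from X-Forwarded-For.
    # Only sane if the nearest proxy is trusted to set/append it.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        fwd = environ.get("HTTP_X_FORWARDED_FOR", environ["REMOTE_ADDR"])
        client = fwd.split(",")[-1].strip()  # rightmost = set by nearest proxy
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [("client was %s\n" % client).encode()]

    make_server("", 8080, app).serve_forever()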

Oh, so what if the 'header swap' service simply shoveled the v6 into
a GRE (or equivalent) tunnel and dropped that on your doorstep?
potentially with an 'apt-get install aws-tunnelservice'? I would bet
that in the 'vm network' you could solve a bunch of this easily
enough, and provide a v6 address inside the tunnel on the vm
providing the services.
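
Something like this on the VM side, say. The endpoint addresses are
invented and 'aws-tunnelservice' is hypothetical; the ip(8)
invocations are the standard Linux ones:

    # Sketch of the tunnel idea: v6 delivered over a GRE tunnel whose
    # v4 endpoints are the VM and the provider's gateway. All values
    # here are made up for illustration.
    import subprocess

    def v6_over_gre(local_v4, remote_v4, inner_v6):
        for cmd in (
            "ip tunnel add gre-v6 mode gre local %s remote %s ttl 255"
                % (local_v4, remote_v4),
            "ip link set gre-v6 up",
            "ip -6 addr add %s dev gre-v6" % inner_v6,
            "ip -6 route add default dev gre-v6",
        ):
            subprocess.check_call(cmd.split())

    v6_over_gre("10.0.0.5", "198.51.100.1", "2001:db8:1::2/64")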

Load balancing is a bit rougher (more state management), but it's doable.
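
(The 'more state' is roughly this: the balancer has to pin each
translated flow to one backend for the flow's lifetime. Toy
illustration only:

    # Per-flow state a v6-fronting load balancer carries: pin each
    # 5-tuple to a backend and remember the choice.
    import hashlib

    BACKENDS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]
    flows = {}  # (src, sport, dst, dport, proto) -> backend

    def backend_for(flow):
        if flow not in flows:
            h = hashlib.sha1(repr(flow).encode()).digest()
            flows[flow] = BACKENDS[h[0] % len(BACKENDS)]
        return flows[flow]

plus expiring entries when flows die, which is where the real work
is.)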

-chris


