[180413] in North American Network Operators' Group
Re: AWS Elastic IP architecture
daemon@ATHENA.MIT.EDU (Owen DeLong)
Tue Jun 2 05:14:51 2015
X-Original-To: nanog@nanog.org
From: Owen DeLong <owen@delong.com>
In-Reply-To: <556C9B15.7010403@matthew.at>
Date: Tue, 2 Jun 2015 10:12:27 +0100
To: Matthew Kaufman <matthew@matthew.at>
Cc: nanog@nanog.org
Errors-To: nanog-bounces@nanog.org
> On Jun 1, 2015, at 6:49 PM, Matthew Kaufman <matthew@matthew.at> wrote:
>
> On 6/1/2015 12:06 AM, Owen DeLong wrote:
>> ... Here's the thing… In order to land IPv6 services without IPv6 support on the VM, you're creating an environment where...
>
> Let's hypothetically say that it is much easier for the cloud provider if they provide just a single choice within their network, but allow both v4 and v6 access from the outside via a translator (to whichever one isn't native internally).
>
> Would you rather have:
> 1) An all-IPv6 network inside, so the hosts can all talk to each other over IPv6 without using (potentially overlapping copies of) RFC1918 space... but where very little of the open-source software you build your services on works at all, because it either doesn't support IPv6, or it has some IPv6 support that is always lagging behind and whose bugs don't get fixed in a timely manner. Or,
Yes.
For one thing, if AWS did this, the open source software would very quickly catch up, and IPv4 would be the stack no longer getting primary maintenance in that software.
Additionally, it's easy for me to hide an IPv4 address in a translated IPv6 packet header. Not so much the other way around.
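The asymmetry here is the basis of NAT64: RFC 6052 defines IPv4-embedded IPv6 addresses, with the well-known prefix 64:ff9b::/96 carrying the full IPv4 address in the low 32 bits, while IPv6's 128 bits obviously cannot be squeezed into an IPv4 header. A minimal sketch with Python's stdlib `ipaddress` module (addresses are documentation examples, not anything from this thread):

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix: 64:ff9b::/96.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def embed_ipv4(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix."""
    v4_int = int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | v4_int)

def extract_ipv4(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address from the low 32 bits."""
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

mapped = embed_ipv4("192.0.2.33")   # documentation address (RFC 5737)
print(mapped)                        # 64:ff9b::c000:221
print(extract_ipv4(mapped))          # 192.0.2.33
```

The round trip is lossless in this direction, which is exactly why a v6-internal network can still reach (and be reached from) the v4 Internet through a translator.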
> 2) An all-IPv4 network inside, with the annoying (but well-known) use of RFC1918 IPv4 space and all your software stacks just work as they always have, only now the fraction of users who have IPv6 can reach them over IPv6 if they so choose (despite the connectivity often being worse than the IPv4 path) and the 2 people who are on IPv6-only networks can reach your services too.
There are a lot more than 2 people on IPv6-only networks at this point. Most T-Mobile Droid users, for example, are on an IPv6-only network at this time. True, T-Mo provides 464XLAT service for those users (which is why T-Mo doesn't work for iOS users), but that's still an IPv6-only network.
> Until all of the common stacks that people build upon, including distributed databases, cache layers, web accelerators, etc. all work *better* when the native environment is IPv6, everyone will be choosing #2.
To the best of my knowledge, PostgreSQL, MySQL, Oracle, and the popular NoSQL databases all support IPv6.
Squid and several other caches have full IPv6 support.
We don't even need better… We just need at least equal.
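"At least equal" is largely a question of how servers open their listeners: on most dual-stack hosts, a single AF_INET6 socket with IPV6_V6ONLY cleared serves both families, with IPv4 clients appearing as ::ffff:a.b.c.d mapped addresses. A sketch under the assumption of a Linux-style stack where the option can be cleared (not anything specific to the software named above):

```python
import socket

# One AF_INET6 listener, IPV6_V6ONLY cleared, bound to the wildcard
# address: it accepts native IPv6 connections and IPv4 connections
# presented as IPv4-mapped IPv6 addresses.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))              # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

# An IPv4-only client connects to the very same listener.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
conn, peer = srv.accept()
print(peer[0])                   # typically ::ffff:127.0.0.1
cli.close(); conn.close(); srv.close()
```

This is the sense in which dual-stack support in a codebase can be "equal": one code path, no separate v4 listener to maintain.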
However, if, instead of taking a proactive approach and deploying IPv6 in a useful way prior to calamity, you would rather wait until the layers and layers of hacks that are increasingly necessary to keep IPv4 alive reach such staggering proportions that, no matter how bad the IPv6 environment may be, IPv4 is worse, I suppose that's one way we can handle the transition.
Personally, I'd rather opt for #1 early and use it to drive the necessary improvements to the codebases you mentioned, and probably some others you didn't.
> And both #1 and #2 are cheaper and easier to manage than full dual-stack to every single host (because you pay all the cost of supporting v6 everywhere with none of the savings of not having to deal with the ever-increasing complexity of continuing to use v4).
Sure, sort of. You actually do get some savings with dual-stack, because you reduce the cost, and the rate at which the IPv4 complexity continues to increase. You also reduce the time before you'll be able to fully deprecate IPv4, and the amount of additional capital that has to be expended hacking, cobbling, and otherwise creating cruft for the sole purpose of keeping IPv4 alive and somewhat functional.
Owen