
RE: Interested in input on tunnels as an IPv6 transition technology


From: "Tony Hain" <alh-ietf@tndh.net>
To: "'Karl Auer'" <kauer@biplane.com.au>,
	"'NANOG List'" <nanog@nanog.org>
In-Reply-To: <1305265953.18376.1032.camel@karl>
Date: Fri, 13 May 2011 10:56:11 -0700

Fundamentally tunneling allows you to introduce the new technology while
you work through budgeting / amortization-of-legacy / resistance-to-change
issues. The Internet as we know it was built as a tunnel overlay to the
voice system, and the underlying operators of that time said the overlay
couldn't work or was sub-optimal, just like we hear today as a new
generation of overlay is deployed.

There is a serious stalemate between the transit/access system which
won't/can't invest in something new without a rapid ROI as demonstrated by
a wildly popular application, and the application/content world that
won't/can't invest in new applications without a delivery mechanism in
place. Tunnels help break that stalemate.

That said, at scale configuring and debugging an array of tunnels is an
operational pain which will quickly outstrip the cost of just deploying
the new technology natively. In the tunnel-over-voice model, hubs were set
up and the end user was expected to configure their end of the tunnel to
find the closest hub. That works well enough, and tunnel brokers are doing
the same for IPv6 over IPv4 today. The issue is that we left the dial
model behind and people just expect their magic CPE thing to take care of
it automatically and stay connected at all times. Enter the automated
tunnels...

People on this list will complain all day about how bad an idea automated
tunnels are, but at the end of the day the point of automated tunnels is
to get the technology deployed to places where those who are complaining
have not done so yet, and to do that at scale without changing end-user
expectations of magic in the CPE. To put it a different way, the IPv4
operators have become the lethargic core that they complained about so
vehemently 15 years ago, so now the $50 CPE has to find a way past them to
stay always-connected no matter what rate their ISP rolls the IPv4 DHCP
pool, or how many layers of NAT get put in the way.

For the specific complaint about MSFT putting tunneling in the host, you
can complain to me because I drove that model (I have not worked there for
over 10 years, but I was the PM for IPv6 in XP). The entire point was to
break the stalemate and get apps deployed. People on this list generally
are network operators and frequently get myopic about what problems need
to be solved. For the specific problem of app development, there has to be
trust that it is worthwhile starting the development, so the API has to
work no matter what impediments there are in the network. Given that you
have to have an app deployed to demonstrate the need for the network
operators to make the investment to improve its performance, making the
API work means the host has to originate the tunnel, at least initially.
That said, the original XP implementation deferred to any native service,
and that is the way all implementations of automated tunneling should work
(I make the point because the rewrite of the stack for Vista/W7 broke this
deferral for the ISATAP tunnel type, and as of the last time I checked it
had not been fixed). Making the API work also means that multiple
approaches to tunneling are required to deal with the various network
topologies the end system will find itself in. From the app developer's
perspective, none of that should matter, as the OS stack should take care
of masking whatever contortions it has to do for bit delivery. That almost
works, except for RTT and MTU issues which cause performance degradation
relative to the underlying legacy protocol.
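
To illustrate the API point, here is a rough Python sketch (a hypothetical
helper, not code from any stack or app) of what the application side looks
like: the app resolves and connects, and whether the IPv6 underneath is
native, ISATAP, 6to4, or Teredo is entirely the stack's problem.

    import socket

    def connect_any(host, port):
        # Illustrative sketch only. The app neither knows nor cares whether
        # IPv6 reachability is native or tunneled; the stack picks the
        # address family and handles any encapsulation underneath.
        last_err = None
        for family, stype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            s = socket.socket(family, stype, proto)
            try:
                s.connect(sockaddr)
                return s
            except OSError as err:
                s.close()
                last_err = err
        raise last_err or OSError("no usable address")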

I can already hear the responses about how people are demonstrating the
failure modes of automated tunnels. My response is that those who protest
are taking the religious position of 'my content is available via the
RIR-allocated address and you will come to me', which forces traffic
through a 3rd-party transit intermediary when they could just as easily
add an address for direct support of the automated tunneling world, and
all of the 3rd-party issues they complain about as failures would
vaporize. There would still be firewall-induced failures, but the entire
point of firewalls is to cause a failure for services the firewall
operator is not prepared to deal with. There is a simple enough workaround
for that by daisy-chaining automated tunnel technologies with an
intermediate firewall, so it doesn't have to be the crisis it is being
made out to be.

While IPv4 creates a nice global NBMA network (to pass around what from an
IPv6 perspective are logically layer-2 frames), tunnels create a challenge
for diagnostics because there may be a divergence between the tunnel and
the underlying path. This is amplified when the underlying path is
asymmetric, and even further when 3rd parties are introduced in one or
both directions. This issue also exposes the disparity between what the
applications are trying to do vs. what the native network is capable of.
When your job is the underlying network, exposing this difference is going
to cause grief, and in turn complaints about the thing that is shining
light on the situation.

As to specific technologies:

Tunnel brokers
The modern equivalent of modem pools. Requires explicit action by the end
user, and some degree of technical skill to get configured correctly (more
so than a dial modem, which used the widely understood paradigm of placing
a phone call). Also requires some degree of adaptability by the hub
operator to deal with dynamics in the underlying network addressing over
time. Primary application issue is reduced MTU.
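
The arithmetic behind the MTU hit, as a quick sketch (assuming a plain
1500-byte Ethernet path; the exact numbers vary with the underlying link):

    # Each layer of encapsulation comes out of the payload budget.
    # Assumes a 1500-byte underlying path for illustration.
    ETHERNET_MTU = 1500
    IPV4_HEADER = 20   # protocol-41 tunnels: broker 6in4, 6to4, 6rd, ISATAP
    UDP_HEADER = 8     # Teredo adds a UDP layer on top of IPv4

    print("6in4 tunnel MTU:  ", ETHERNET_MTU - IPV4_HEADER)               # 1480
    print("Teredo tunnel MTU:", ETHERNET_MTU - IPV4_HEADER - UDP_HEADER)  # 1472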

ISATAP
Designed as an intra-enterprise tool for hosts to reach across an
infrastructure that is still being amortized. Use of an IPv4 DNS A record
allows network managers to explicitly distribute clients across a set of
tunnel termination routers, while the alternative use of an IPv4 anycast
address allows clients to reach the topologically nearest router. Can be
used in conjunction with native external access, or with 6to4/6rd by
re-encapsulating the packet. The triplet of a 6to4/6rd router, native IPv6
firewall, and ISATAP router allows IPv6 application support between the
enterprise end system and the public Internet with the same firewall
policies as IPv4, and without the need for deep packet inspection to parse
the tunnel packets.
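
The address mapping itself is mechanical; a rough sketch of the RFC 5214
format in Python (using a private IPv4 address, so the interface
identifier is the plain 0:5efe:a.b.c.d form without the u bit set):

    import ipaddress

    def isatap_address(ipv6_prefix, host_ipv4):
        # ISATAP interface identifier: 0000:5EFE followed by the 32-bit
        # IPv4 address, appended to the /64 prefix learned from the
        # ISATAP router (or fe80::/64 for the link-local address).
        prefix = ipaddress.IPv6Network(ipv6_prefix)
        v4 = int(ipaddress.IPv4Address(host_ipv4))
        iid = (0x00005EFE << 32) | v4
        return ipaddress.IPv6Address(int(prefix.network_address) | iid)

    print(isatap_address("fe80::/64", "192.168.1.10"))  # fe80::5efe:c0a8:10a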


6to4
Designed as a border router to tunnel to other 6to4 routers over public
infrastructure that was not ready for native support. Relays to transit
between the 6to4-addressed environment and the native environment were an
add-on that merged the two. The IPv4 anycast address to further simplify
deployment was another add-on, which reduces configuration on the
client-side router. The problematic issue with 6to4 is the misguided
one-liner in RFC 3056 section 5.2, bullet 3, that restricts advertisements
into the IPv6 BGP mesh to the entire 2002::/16. This leads to unwanted
traffic patterns, so the relays are not deployed as widely as they
could/should be to mitigate latency and asymmetry issues. If the CPE is
capable of dealing with the encapsulation, end hosts are unable to
distinguish this prefix from any native service that might have been
offered to the CPE. When no adjacent router is offering native IPv6
service, an end system with a public IPv4 address might attempt to use
this technology; if encapsulated packets are blocked by a firewall it will
fail, but without the firewall the applications will see what appears to
be an IPv6 network.
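
For reference, the prefix derivation is trivial; a sketch of just the
RFC 3056 mapping, nothing specific to any implementation:

    import ipaddress

    def sixto4_prefix(public_ipv4):
        # A 6to4 site's /48 is 2002::/16 with the router's public IPv4
        # address in the next 32 bits.
        v4 = int(ipaddress.IPv4Address(public_ipv4))
        return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

    print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48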

6rd
Designed as a derivative of 6to4 to explicitly remove the /16 restriction
in IPv6 BGP advertisements, because changing that one line would have
taken longer to get agreement on than an entirely new design,
implementation, and deployment sequence (note that this will result in
just as many IPv6 prefix announcements into BGP as fragmenting 2002::
would have, so the arguments about modifying 3056 are moot). Requires that
each provider have a large enough allocation to embed the uniquely
identifying parts of their IPv4 space into each 6rd address block, which
is a challenge with existing IPv6 allocation policies. It does align the
tunnel endpoints with the access infrastructure being overlaid, at least
at the organizational level. Does require a CPE capable of sorting out
which set of IPv4 address bits should be concatenated with the IPv6 PoP
prefix to create the ultimate IPv6 prefix for that CPE.
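
A rough sketch of that concatenation, following the RFC 5969 scheme (the
provider prefix and mask length here are made up for illustration):

    import ipaddress

    def sixrd_prefix(sp_6rd_prefix, ipv4_masklen, cpe_ipv4):
        # The delegated prefix is the provider's 6rd prefix followed by
        # the CPE's IPv4 address bits left over after dropping the common
        # high-order (masked) bits shared by all subscribers.
        sp = ipaddress.IPv6Network(sp_6rd_prefix)
        v4 = int(ipaddress.IPv4Address(cpe_ipv4))
        v4_bits = 32 - ipv4_masklen
        suffix = v4 & ((1 << v4_bits) - 1)
        plen = sp.prefixlen + v4_bits
        addr = int(sp.network_address) | (suffix << (128 - plen))
        return ipaddress.IPv6Network((addr, plen))

    # Hypothetical provider prefix, full IPv4 address embedded (masklen 0)
    print(sixrd_prefix("2001:db8::/32", 0, "192.0.2.1"))  # 2001:db8:c000:201::/64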


Teredo / Miredo
Designed to deal with the reality that most end systems find themselves
behind a NAT and without an adjacent router offering IPv6, so the above
approaches won't work. Has additional overhead compared with the above,
further reducing the MTU, and requires a lengthy call-setup sequence.
Works well enough for bulk bit delivery when responsiveness expectations
are low and NAT traversal is the primary goal.
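
Part of that machinery is visible in the address itself; a sketch of the
RFC 4380 layout (the address below is just a well-formed example, which
decodes to a server at 65.54.227.120, mapped port 40000, mapped client
192.0.2.45):

    import ipaddress

    def parse_teredo(addr):
        # 2001:0::/32, then the server IPv4, 16 bits of flags, and the
        # client's NAT-mapped UDP port and IPv4 address stored inverted.
        a = int(ipaddress.IPv6Address(addr))
        assert a >> 96 == 0x20010000, "not a Teredo address"
        server = ipaddress.IPv4Address((a >> 64) & 0xFFFFFFFF)
        flags = (a >> 48) & 0xFFFF
        port = ((a >> 32) & 0xFFFF) ^ 0xFFFF
        client = ipaddress.IPv4Address((a & 0xFFFFFFFF) ^ 0xFFFFFFFF)
        return server, flags, port, client

    print(parse_teredo("2001:0:4136:e378:8000:63bf:3fff:fdd2"))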


Tony



> -----Original Message-----
> From: Karl Auer [mailto:kauer@biplane.com.au]
> Sent: Thursday, May 12, 2011 10:53 PM
> To: NANOG List
> Subject: Interested in input on tunnels as an IPv6 transition
> technology
>
> Hullo all.
>
> I'm working on a talk, and would be interested to know what people
> think is good about tunnels as an IPv6 transition technology, and what
> people think is bad about tunnels.
>
> It would probably be best to let me know off-list :-) but I'm happy to
> summarise back to the list. Any references you have to useful papers,
> articles, discussions etc would also be appreciated.
>
> I'm looking for both general problems and advantages, but also
> advantages and disadvantages specific to particular tunnel types. It
> would also be helpful to know from what perspective particular things
> are good or bad, in so far as it isn't obvious. A carrier has a
> different perspective than, say, a home user, who will have a different
> perspective again to an enterprise user.
>
> Many thanks in advance for your input.
>
> Regards, K.
>
> PS: Note the all-important words "as an IPv6 transition technology"...
>
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Karl Auer (kauer@biplane.com.au)                   +61-2-64957160 (h)
> http://www.biplane.com.au/kauer/                   +61-428-957160 (mob)
>
> GPG fingerprint: DA41 51B1 1481 16E1 F7E2 B2E9 3007 14ED 5736 F687 Old
> fingerprint: B386 7819 B227 2961 8301 C5A9 2EBC 754B CD97 0156


