[181599] in North American Network Operators' Group
Re: CDNs for carriers
daemon@ATHENA.MIT.EDU (Jared Mauch)
Mon Jun 29 10:21:08 2015
X-Original-To: nanog@nanog.org
From: Jared Mauch <jared@puck.nether.net>
In-Reply-To: <1045686261.870.1435586395880.JavaMail.mhammett@ThunderFuck>
Date: Mon, 29 Jun 2015 10:21:05 -0400
To: Mike Hammett <nanog@ics-il.net>
Cc: nanog list <nanog@nanog.org>
Errors-To: nanog-bounces@nanog.org
> On Jun 29, 2015, at 9:59 AM, Mike Hammett <nanog@ics-il.net> wrote:
> 
> Simple flows wouldn't necessarily tell you if you're pulling a bunch
> from a Netflix caching box on your upstream somewhere. You'd think you
> had a huge amount going to your current upstream because technically you
> do, but a local cache or peer could alter that significantly. As we've
> been starting up our IX, we're finding that we can send lists of ASNs
> and prefixes and the various CDNs will tell us how much traffic they see
> going to our customers. Combine that with what flows tell you and I
> think you've got a good approach.
> 
> What are some good approaches to determining traffic levels to not
> only ASNs, but also that ASN's downstream ASNs? You may have ASNs A, B,
> C, D and E in your flows. Say none of them represent more than 5% of
> your traffic by themselves. If B, C, D and E all purchase transit from A
> and you can reasonably peer with A, you actually can move 25% of your
> traffic over to a peer. Maybe there is no good approach to doing that
> without a bunch of manual work or paying someone else to do it.
> 
> Looking at some stats from one of our customers that is also going
> through Equinix Chicago, for their average inbound ~37% of traffic was
> Netflix, Google was 34% and the next highest was Apple at 5%. Note that
> Akamai had left Chicago Equinix by this point, so they wouldn't be
> reflected in those numbers. Those percentages are percent of all traffic
> they send to Equinix. I believe about 2/3 of their total transit went
> to Equinix when that got turned up. Their total traffic went up once
> joining the Equinix IX, presumably because they were now bypassing some
> congestion somewhere.
> 
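The downstream-ASN question above can be approximated from flow exports that carry BGP AS paths: instead of grouping bytes only by origin AS, also group them by the first AS on the path (your direct upstream or potential peer). A minimal sketch, with hypothetical flow records and AS names matching the A/B/C/D/E example:

```python
from collections import defaultdict

# Each flow record: (as_path, bytes). The AS path is as learned from BGP
# for the flow's destination prefix; index 0 is the directly connected
# upstream/peer, the last element is the origin AS. Sample data only.
flows = [
    (["A", "B"], 500),   # B buys transit from A
    (["A", "C"], 400),
    (["A", "D"], 300),
    (["A", "E"], 300),
    (["A"], 1000),       # traffic destined to A itself
    (["X", "Y"], 600),
]

def bytes_by_origin(flows):
    """Traffic per origin AS -- each may look small on its own."""
    totals = defaultdict(int)
    for path, nbytes in flows:
        totals[path[-1]] += nbytes
    return dict(totals)

def bytes_by_next_hop_as(flows):
    """Traffic per first AS on the path -- roughly what peering with
    that AS (and so reaching its downstream customers) could move."""
    totals = defaultdict(int)
    for path, nbytes in flows:
        totals[path[0]] += nbytes
    return dict(totals)

print(bytes_by_origin(flows))       # no single origin dominates
print(bytes_by_next_hop_as(flows))  # A aggregates B, C, D and E
```

This only counts the transit customers you actually see in your own flows, so it understates what a new peering could attract; the CDN-provided per-prefix numbers mentioned above help fill that gap.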
Sure. There are a lot of dynamics to consider. It’s fairly easy to look at TCP speeds and retransmissions to determine the link speed involved. I’ve seen many CDNs quickly identify congested or uncongested paths and engage in some adaptive behaviors.
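The adaptive behavior described here can be sketched crudely: flag paths whose TCP retransmission ratio suggests loss rather than receiver limits is capping throughput. The counter names, paths, and the ~1% threshold below are illustrative assumptions, not any particular CDN's method:

```python
# Hypothetical per-path counters, as a TCP stack or packet capture
# might expose them: path -> (segments sent, segments retransmitted).
SAMPLES = {
    "via-peer-X": (100_000, 120),
    "via-transit-Y": (100_000, 4_800),
}

def retrans_ratio(sent, retrans):
    """Fraction of segments retransmitted on a path."""
    return retrans / sent if sent else 0.0

def congested_paths(samples, threshold=0.01):
    """Paths whose retransmission ratio exceeds ~1% -- a rough signal
    that loss, not the receiver window, is limiting TCP throughput."""
    return [path for path, (sent, retrans) in samples.items()
            if retrans_ratio(sent, retrans) > threshold]

print(congested_paths(SAMPLES))  # -> ['via-transit-Y']
```

A real system would also weigh RTT, since loss-based TCP throughput falls roughly with MSS / (RTT * sqrt(loss)), but the ratio alone is enough to separate a clean peering path from a congested transit path.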
This being said, there is not a single solution to everything. Chris mentioned using DNS, which is a nice method assuming you see all the queries within your traffic cone.
- Jared