

Re: Network end users to pull down 2 gigabytes a day, continuously?

daemon@ATHENA.MIT.EDU (Patrick W. Gilmore)
Sun Jan 7 11:13:31 2007

In-Reply-To: <200701071517.PAA10001@sunf10.rd.bbc.co.uk>
Cc: "Patrick W. Gilmore" <patrick@ianai.net>
From: "Patrick W. Gilmore" <patrick@ianai.net>
Date: Sun, 7 Jan 2007 10:54:03 -0500
To: nanog@merit.edu
Errors-To: owner-nanog@merit.edu


On Jan 7, 2007, at 3:17 PM, Brandon Butterworth wrote:

>> The real problem with P2P networks is that they don't
>> generally make download decisions based on network
>> architecture.
>
> Indeed, that's what I said.  Until then, ISPs can only fix it with
> P2P-aware caches; if the protocols did it, then they wouldn't need
> the caches, though P2P efficiency may go down.
>
> It'll be interesting to see how Akamai & co. counter this trend.  At
> the moment they can say it's better to use a local Akamai cluster
> than have P2P taking content from anywhere on the planet.  Once it's
> mostly local traffic then it's pretty much equivalent to Akamai.
> It's still moving routing/TE up the stack, though, so it will affect
> the ISPs' network ops.

ISPs don't pay Akamai, content owners do.

Content owners are usually not concerned with the same things an
ISP's "network ops" are.  (I'm not saying that's a good thing, I'm
just saying that is reality.  Life might be much better all around if
the two groups interacted more, although one could say that Akamai
fills that gap as well. :)

Anyway, a content provider is going to do what's best for their
content, not what's best for the ISP.  It's a difficult argument to
make to a content provider that they should put their content on
millions of end-user HDs and depend on grandma to provide good
quality streaming to Joe Smith down the street.  At least in my
experience.
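
As an aside, the kind of topology-aware peer selection Brandon is
talking about doesn't have to be fancy.  Something like the following
(a rough sketch in Python; the Peer type, AS numbers, and the /16
heuristic are made up for illustration, not any real tracker's code)
would already bias a swarm toward on-net traffic:

# Rough sketch of locality-aware peer selection (hypothetical).
# Given a requesting peer and the candidate swarm, prefer peers in the
# same AS, then peers in the same /16, before handing out remote peers.

import random
from dataclasses import dataclass
from ipaddress import ip_address

@dataclass
class Peer:
    ip: str
    asn: int   # origin AS the tracker has mapped this address to

def locality_rank(requester: Peer, candidate: Peer) -> int:
    """Lower rank means 'closer' in the network sense."""
    if candidate.asn == requester.asn:
        return 0   # same ISP / AS: traffic stays on-net
    if ip_address(candidate.ip).packed[:2] == ip_address(requester.ip).packed[:2]:
        return 1   # crude proxy for "nearby": same /16
    return 2       # everything else

def select_peers(requester: Peer, candidates: list[Peer], want: int = 20) -> list[Peer]:
    pool = list(candidates)
    random.shuffle(pool)                                # break ties randomly
    pool.sort(key=lambda c: locality_rank(requester, c))
    return pool[:want]

if __name__ == "__main__":
    me = Peer("192.0.2.10", asn=64501)
    swarm = [Peer("192.0.2.200", asn=64501),   # same AS
             Peer("192.0.99.7", asn=64502),    # same /16, different AS
             Peer("203.0.113.9", asn=64510)]   # remote
    for p in select_peers(me, swarm, want=3):
        print(p.ip, "AS", p.asn)

If the protocol (or tracker) did something like this itself, the
P2P-aware caches Brandon mentions would matter a lot less.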

-- 
TTFN,
patrick

