
Re: Network end users to pull down 2 gigabytes a day, continuously?


In-Reply-To: <a2b2d0480701210414m124f01ddq398a1d48a9ca5dca@mail.gmail.com>
Cc: "Stephen Sprunk" <stephen@sprunk.org>,
	"Dave Israel" <davei@otd.com>, "Bora Akyol" <bora@broadcom.com>,
	"North American Noise and Off-topic Gripes" <nanog@merit.edu>
From: Joe Abley <jabley@ca.afilias.info>
Date: Sun, 21 Jan 2007 11:54:44 -0500
To: Alexander Harrowell <a.harrowell@gmail.com>
Errors-To: owner-nanog@merit.edu



On 21-Jan-2007, at 07:14, Alexander Harrowell wrote:

> Regarding your first point, it's really surprising that existing
> P2P applications don't include topology awareness. After all, the
> underlying TCP already has mechanisms to perceive the relative
> nearness of a network entity - counting hops or round-trip latency.
> Imagine a BT-like client that searches for available torrents and
> records the round-trip time to each host it contacts. It places
> these in a lookup table and picks the fastest responders to
> initiate the data transfer. Those are likely to be the closest, if
> not in distance then topologically, and the ones with the most
> bandwidth. Further, imagine that it caches the search, so when you
> next seek a file it checks first on the hosts nearest to it in its
> "routing table", stepping down progressively if the file isn't
> there. It's a form of local-pref.
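
A minimal sketch of that idea in Python (the names and the
connect-time probe are illustrative only; a real client would measure
RTT on the connections it already makes rather than probing
separately):

    import socket
    import time

    class PeerTable:
        """Cache of candidate peers keyed by address, ranked by RTT."""

        def __init__(self):
            self.rtt = {}  # (host, port) -> round-trip time in seconds

        def probe(self, host, port, timeout=2.0):
            """Use TCP connect time as a crude round-trip estimate."""
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    self.rtt[(host, port)] = time.monotonic() - start
            except OSError:
                self.rtt.pop((host, port), None)  # unreachable; forget it

        def nearest(self, n=5):
            """The n fastest responders -- the 'local-pref' candidates."""
            return sorted(self.rtt, key=self.rtt.get)[:n]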

Remember though that the dynamics of the system need to assume that
individual clients will be selfish: even though it might be in the
interests of the network as a whole to choose local peers, if a
client can get faster *throughput* (not round-trip response) from a
remote peer, it's a necessary assumption that the client will use it.

Protocols need to be designed such that a client is rewarded with
faster downloads for uploading in a fashion that best benefits the
swarm.
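
This is the spirit of BitTorrent's tit-for-tat choking. A simplified
sketch of one unchoke round (slot count and names are made up), in
which upload credit buys download slots:

    import random

    def choose_unchoked(peers, upload_rate, slots=4):
        """Unchoke the peers that have uploaded to us fastest, plus
        one random 'optimistic' slot so newcomers can earn a place."""
        by_rate = sorted(peers, key=lambda p: upload_rate.get(p, 0.0),
                         reverse=True)
        unchoked = by_rate[:slots]
        rest = [p for p in peers if p not in unchoked]
        if rest:
            unchoked.append(random.choice(rest))  # optimistic unchoke
        return unchoked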

> The third step is for content producers to add their torrents
> directly to the ISP peers before releasing them to the public.
> This gets "official" content pre-positioned for efficient
> distribution, making it perform better (from a user's perspective)
> than pirated content.

If there were a big fast server in every ISP with a monstrous pile of
disk, which retrieved torrents automatically from a selection of
popular RSS feeds, which kept seeding torrents for as long as there
was interest and/or disk, and which had some rate shaping installed
on the host such that traffic that wasn't on-net (i.e. to/from
customers) or free (i.e. to/from peers) was rate-crippled, how far
would that go toward emulating this behaviour with existing live
torrents? Speaking from a technical perspective only, and ignoring
the legal minefield.
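
For what it's worth, a rough Python sketch of the feed-polling half
of such a box (the feed URL, spool path and disk threshold are
placeholders; the actual seeding would be handed off to a real
BitTorrent client watching the spool directory, and the off-net
rate-crippling would live in the kernel's traffic shaper, not in this
script):

    import shutil
    import time
    import urllib.request
    import xml.etree.ElementTree as ET

    FEEDS = ["http://example.net/popular.rss"]  # placeholder feed list
    MIN_FREE = 50 * 2**30          # stop fetching below 50 GB free
    SPOOL = "/var/spool/torrents"  # directory watched by the BT client

    def torrent_urls(feed_url):
        """Yield .torrent enclosure URLs found in one RSS feed."""
        with urllib.request.urlopen(feed_url) as resp:
            tree = ET.parse(resp)
        for enc in tree.iter("enclosure"):
            if enc.get("type") == "application/x-bittorrent":
                yield enc.get("url")

    def poll_once():
        """Fetch new torrents unless the spool disk is nearly full."""
        if shutil.disk_usage(SPOOL).free < MIN_FREE:
            return  # keep seeding what we have; take on nothing new
        for feed in FEEDS:
            for url in torrent_urls(feed):
                name = url.rsplit("/", 1)[-1]
                urllib.request.urlretrieve(url, f"{SPOOL}/{name}")

    if __name__ == "__main__":
        while True:
            poll_once()
            time.sleep(900)  # poll every fifteen minutes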

If anybody has tried this, I'd be interested to hear whether on-net  
clients actually take advantage of the local monster seed, or whether  
they persist in pulling data from elsewhere.


Joe

