[45770] in North American Network Operators' Group
Re: Reducing Usenet Bandwidth
daemon@ATHENA.MIT.EDU (Eliot Lear)
Sun Feb 17 12:31:44 2002
Message-Id: <5.1.0.14.2.20020217092634.02ae3dd8@lint.cisco.com>
Date: Sun, 17 Feb 2002 09:30:03 -0800
To: Iljitsch van Beijnum <iljitsch@muada.com>
From: Eliot Lear <lear@cisco.com>
Cc: Stephen Stuart <stuart@tech.org>,
David Schwartz <davids@webmaster.com>, <nanog@merit.edu>
In-Reply-To: <20020217133854.W64519-100000@sequoia.muada.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; format=flowed
Errors-To: owner-nanog-outgoing@merit.edu
This is the art of content delivery and caching. The nice thing is that,
depending on which technology you use, the party who wants the material
closer to the end user is the one who pays. If that's the end user, use a
cache with WCCP. If that's the content owner, use a cache with either an
HTTP redirect or (Paul, forgive me) a DNS hack, either of which can be tied
to the routing system. In either case there is, perhaps, a more explicit
economic model than netnews has. That's not to say there *isn't* an
economic model with netnews; it's just that it doesn't make as much sense
as it once did (see smb's earlier comment).
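A minimal sketch of the HTTP-redirect flavor, in Python, assuming a
hypothetical table that maps client prefixes to nearby caches; a real
deployment would drive that lookup from the routing system:

    # Sketch of an HTTP-redirect front end for content delivery.
    # The mirror table and the prefix match are illustrative only.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import ipaddress

    # Hypothetical mapping of client prefixes to "nearby" caches.
    MIRRORS = {
        ipaddress.ip_network("192.0.2.0/24"): "http://cache-west.example.net",
        ipaddress.ip_network("198.51.100.0/24"): "http://cache-east.example.net",
    }
    DEFAULT = "http://origin.example.net"

    class Redirector(BaseHTTPRequestHandler):
        def do_GET(self):
            client = ipaddress.ip_address(self.client_address[0])
            target = next((m for net, m in MIRRORS.items() if client in net),
                          DEFAULT)
            self.send_response(302)  # send the client to the chosen cache
            self.send_header("Location", target + self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), Redirector).serve_forever()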
Eliot
At 01:44 PM 2/17/2002 +0100, Iljitsch van Beijnum wrote:
>On Fri, 8 Feb 2002, Stephen Stuart wrote:
>
> > The topic being discussed is how to reduce USENET bandwidth. One
> > way to do that is to pass pointers around instead of complete
> > articles. If the USENET distribution system passed pointers to
> > articles around instead of the actual articles themselves, sites could
> > then "self-tune" their spools to the content that their readers (the
> > "users") found interesting (fetch articles from a source that offered
> > to actually spool them), either by pre-fetching or fetching on-demand,
> > but still have access to the "total accumulated wisdom" of USENET -
> > and maybe it wouldn't need to be reposted every week, because sites
> > could also offer value to their "users" on the publishing side by
> > offering to publish their content longer.
>
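A minimal sketch of the pointer-passing idea above (the names and the
URL transport are hypothetical): the flood carries only (message-id,
source) pointers, and a site fetches and spools the article body the
first time one of its readers asks for it.

    # Pointer-based distribution: flood cheap pointers, fetch bodies
    # on demand. receive_pointer() and the transport are assumptions.
    import urllib.request

    pointers = {}  # message-id -> URL of a site offering to spool the body
    spool = {}     # message-id -> article body, filled on first read

    def receive_pointer(message_id, source_url):
        """Called for each pointer arriving in the flood feed."""
        pointers[message_id] = source_url

    def read_article(message_id):
        """Fetch on demand, then keep a local copy for later readers."""
        if message_id not in spool:
            with urllib.request.urlopen(pointers[message_id]) as resp:
                spool[message_id] = resp.read()
        return spool[message_id]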
>I'm a bit behind on reading the NANOG list, so excuse the late reply.
>
>If we can really build such a beast, it would be extremely cool. The
>method of choice for publishing free information on the Net these days is
>the WWW. But it doesn't work very well, since there is no direct
>relationship between a URL and the published text or file. So people will
>use a "far away" URL because they don't know the same file can be found
>much closer, and URLs tend to break after a while.
>
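One way to give a name a direct relationship to the content, sketched
below with a hypothetical URL scheme: name the object by a hash of its
bytes, so a copy fetched from any nearby server can be verified against
the name itself.

    # Content-addressed naming: the name commits to the bytes, not to
    # the server that happens to hold them. The scheme is hypothetical.
    import hashlib

    def content_name(data: bytes) -> str:
        return "hash://sha256/" + hashlib.sha256(data).hexdigest()

    def verify(data: bytes, name: str) -> bool:
        # A retrieved copy is genuine iff its hash matches the name.
        return content_name(data) == name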
>I've thought about this for quite a while, and even written down a good
>deal of my thoughts. If you're interested:
>
>http://www.muada.com/projects/usenet.txt
>
>Iljitsch van Beijnum