[45488] in North American Network Operators' Group
Re: Reducing Usenet Bandwidth
daemon@ATHENA.MIT.EDU (Simon Lyall)
Sat Feb 2 20:00:11 2002
Date: Sun, 3 Feb 2002 13:59:32 +1300 (NZDT)
From: Simon Lyall <simon.lyall@ihug.co.nz>
To: <nanog@merit.edu>
In-Reply-To: <20020203002857.GB7487@wonderland.linux.it>
Message-ID: <Pine.LNX.4.30.0202031344290.26969-100000@boggle.ihug.co.nz>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Errors-To: owner-nanog-outgoing@merit.edu
On Sun, 3 Feb 2002, Marco d'Itri wrote:
> The major drawback of that protocol is that it limits articles size to
> 64KB, so it does not reduce binary traffic, which is the largest part
> of a newsfeed.
In which case you just modify the protocol (or roll your own) to have
articles spread across multiple packets. It's not that hard; our guys
did this, and I expect it's been done (at least) half a dozen times by
other people.
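The fragmentation being described could look something like the sketch
below: split an article into numbered chunks that each fit under the
64 KB payload limit, and reassemble on the far side. All names and the
header layout here are hypothetical, not taken from any actual
multicast news protocol.

```python
# Hypothetical sketch: fragment a Usenet article into chunks that fit a
# 64 KB packet limit, then reassemble them (fragments may arrive in any
# order over multicast).

MAX_PAYLOAD = 64 * 1024 - 256  # leave room for a small fragment header

def fragment(message_id: str, article: bytes):
    """Yield (message_id, index, total, chunk) tuples for one article."""
    chunks = [article[i:i + MAX_PAYLOAD]
              for i in range(0, len(article), MAX_PAYLOAD)] or [b""]
    total = len(chunks)
    for index, chunk in enumerate(chunks):
        yield (message_id, index, total, chunk)

def reassemble(fragments):
    """Rebuild the article once every fragment has arrived."""
    fragments = sorted(fragments, key=lambda f: f[1])  # order by index
    _, _, total, _ = fragments[0]
    assert len(fragments) == total, "missing fragments"
    return b"".join(chunk for _, _, _, chunk in fragments)
```

A real implementation would also need retransmission requests for lost
fragments, which is where most of the actual work is.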
Anyway, the multicast thing doesn't really solve the problem: you still
have to transport 30 Mb/s (plus) from outside your network to inside it,
put it onto a (relatively expensive) server and give it to the customers.
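To put that 30 Mb/s figure in perspective, a quick back-of-the-envelope
calculation (my arithmetic, not from the original post) of what a
sustained feed at that rate means per day:

```python
# What a sustained 30 Mb/s newsfeed adds up to over a day.
feed_mbps = 30                                 # megabits per second
bytes_per_day = feed_mbps * 1e6 / 8 * 86_400   # bits -> bytes -> per day
print(f"{bytes_per_day / 1e9:.0f} GB/day")     # prints "324 GB/day"
```

That daily volume is what drives both the transit bill and the disk
spend on the (relatively expensive) server mentioned above.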
The economics of the whole exercise are very interesting once you get past
discussion groups (worth doing for anybody) and picture groups (worth
doing for all but the smallest ISP). Committing to a good supply for
multi-part binary groups should probably involve sitting down with one
of the company accountants (if you only have one accountant then your
company is probably too small).
--
Simon Lyall. | Newsmaster | Work: simon.lyall@ihug.co.nz
Senior Network/System Admin | Postmaster | Home: simon@darkmere.gen.nz
ihug, Auckland, NZ | Asst Doorman | Web: http://www.darkmere.gen.nz