Reducing Usenet Bandwidth

Paul Vixie vixie at as.vix.com
Sun Feb 3 17:23:38 UTC 2002


steve at opaltelecom.co.uk ("Stephen J. Wilcox") writes:

>  As we all know, Usenet traffic is always increasing; a large number of
> people take full feeds, which on my servers is about 35 Mb/s of continuous
> bandwidth in/out. That produces about 300 GB per day, of which only a small
> fraction ever gets downloaded.
> 
> The question is, and apologies if I am behind the times, I'm not an expert
> on news... how is it possible to reduce the bandwidth occupied by news:

Pull it, rather than pushing it.  nntpcache is a localized example of how
to only transfer the groups and articles that somebody on your end of a
link actually wants to read.  A more systemic example ought to be developed
whereby every group has a well-mirrored home and an nntpcache hierarchy
similar to what Squid proposed for web data, and every news reader pulls
only what it needs.  Posting an article should mean getting it into the
well-mirrored home of that group.  Removing spam should mean deleting
articles from the well-mirrored home of that group.
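
To illustrate the reader-side half of that, here is a minimal sketch of the
pull model using Python's standard-library nntplib.  The server name and
group below are placeholders; in the architecture described above, the
server a reader talks to would be a nearby nntpcache-style cache rather
than a full-feed box.

    import nntplib

    # Connect to the nearest cache/mirror instead of taking a full feed.
    # "news.example.net" is a placeholder hostname.
    server = nntplib.NNTP("news.example.net")

    # Selecting a group transfers only group metadata, not articles.
    resp, count, first, last, name = server.group("comp.protocols.dns.bind")

    # Pull overview lines for just the most recent articles; nothing else
    # crosses the link.  (Requires the server to support OVER/XOVER.)
    resp, overviews = server.over((max(first, last - 9), last))
    for number, fields in overviews:
        print(number, fields.get("subject", ""))

    # Fetch the body of a single article only when the user opens it.
    resp, info = server.article(str(last))
    print(b"\n".join(info.lines).decode("utf-8", errors="replace"))

    server.quit()

The point of the sketch is what it does not do: no batch of every article in
every group ever moves; bytes cross the link only on demand, and a cache
hierarchy above the reader would absorb repeat requests.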

Pushing netnews, with or without multicast, with or without binaries, is
just unthinkable at today's volumes, but we do it anyway.  The effects of
increased volume have decreased the utilization of netnews as a medium
amongst my various friends.  Pushing netnews after another three or four
doublings is so far beyond the sane/insane boundary that I just know it
won't happen, Moore or not.  It's well and truly past time to pull it
rather than push it.
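
For scale, a quick back-of-envelope, assuming the 35 Mb/s figure quoted
above (everything else here is plain arithmetic):

    # A 35 Mb/s sustained full feed today, and what another three or four
    # doublings of volume would mean per feed link.
    feed_mbps = 35
    bytes_per_day = feed_mbps * 1_000_000 / 8 * 86_400
    print(f"today: {bytes_per_day / 1e9:.0f} GB/day")  # ~378 GB/day

    for doublings in (3, 4):
        rate = feed_mbps * 2 ** doublings
        volume_tb = bytes_per_day * 2 ** doublings / 1e12
        print(f"after {doublings} doublings: {rate} Mb/s, {volume_tb:.1f} TB/day")

That is roughly 280-560 Mb/s of continuous traffic, or 3-6 TB per day, per
full-feed link, the overwhelming majority of it never read by anyone.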


