free collaborative tools for low BW and lossy connections

Joe Greco jgreco at ns.sol.net
Sun Mar 29 20:46:37 UTC 2020


On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:
> Joe Greco wrote on 29/03/2020 15:56:
> >On Sun, Mar 29, 2020 at 03:01:04PM +0100, Nick Hilliard wrote:
> >>because it uses flooding and can't guarantee reliable message
> >>distribution, particularly at higher traffic levels.
> >
> >That's so hideously wrong.  It's like claiming web forums don't
> >work because IP packet delivery isn't reliable.
> 
> Really, it's nothing like that.

Sure it is.  At a certain point you can make a web forum stop working
with a DDoS.  You can't guarantee reliable interaction with a web site
if that happens.

> >Usenet message delivery at higher levels works just fine, except that
> >on the public backbone, it is generally implemented as "best effort"
> >rather than a concerted effort to deliver reliably.
> 
> If you can explain the bit of the protocol that guarantees that all 
> nodes have received all postings, then let's discuss it.

There isn't, just like there isn't a bit of the protocol that guarantees
that an IP packet is received by its intended recipient.  No magic.

It's perfectly possible to make sure that you are not backlogging to a
peer and to contact them to remediate if there is a problem.  When done 
at scale, this does actually work.  And unlike IP packet delivery, news
will happily backlog and recover from a server being down or whatever.
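
Something like this (a rough, untested sketch; the queue path and
threshold are hypothetical, adjust for whatever your server uses for
per-peer outbound queues) is about all the monitoring really takes:

#!/usr/bin/env python3
# Rough sketch: warn when the outbound backlog to any peer grows past a
# threshold.  The queue layout is hypothetical; Diablo, INN, etc. each
# keep per-peer outgoing queues you can measure the same way.
import os
import sys

QUEUE_DIR = "/news/spool/outgoing"      # hypothetical per-peer queue dirs
MAX_BACKLOG = 512 * 1024 * 1024         # warn above ~512 MB queued per peer

def backlog_bytes(peer_dir):
    """Total bytes currently queued for one peer."""
    total = 0
    for root, _dirs, files in os.walk(peer_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def main():
    status = 0
    for peer in sorted(os.listdir(QUEUE_DIR)):
        size = backlog_bytes(os.path.join(QUEUE_DIR, peer))
        if size > MAX_BACKLOG:
            print("WARNING: backlog to %s is %d MB" % (peer, size >> 20))
            status = 1
    return status

if __name__ == "__main__":
    sys.exit(main())

Hang that off cron and page somebody when a peer starts falling behind;
that's the "concerted effort to deliver reliably" part.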

> >The concept of flooding isn't problematic by itself.
> 
> Flood often works fine until you attempt to scale it.  Then it breaks, 
> just like Björn admitted. Flooding is inherently problematic at scale.

For... what, exactly?  General Usenet?  Perhaps, but mainly because you
do not have a mutual agreement on traffic levels and a bunch of other
factors.  Flooding works just fine within private hierarchies.  Since I
thought this was a discussion of "free collaborative tools" rather than
"random newbie trying to masochistically keep up with a full backbone
Usenet feed", it definitely should work fine for a private hierarchy and
collaborative use.
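
To make that concrete: keeping a private hierarchy private is just a
matter of the group patterns on the outbound feeds.  Rough sketch of the
wildmat-style matching involved (the hierarchy name and patterns below
are made up):

# Rough sketch of wildmat-style feed patterns (as in RFC 3977 and the
# usual newsfeeds configs): patterns are checked in order, last match
# wins, and a leading '!' negates.  Hierarchy names here are made up.
from fnmatch import fnmatch

PEER_PATTERNS = ["corp.project.*", "!corp.project.secret.*"]

def group_wanted(group, patterns):
    wanted = False
    for pat in patterns:
        negate = pat.startswith("!")
        if fnmatch(group, pat.lstrip("!")):
            wanted = not negate
    return wanted

def feed_wants(newsgroups_header, patterns):
    """Offer the article if any of its crossposted groups is wanted."""
    return any(group_wanted(g.strip(), patterns)
               for g in newsgroups_header.split(","))

if __name__ == "__main__":
    print(feed_wants("corp.project.dev", PEER_PATTERNS))        # True
    print(feed_wants("alt.binaries.example", PEER_PATTERNS))    # False
    print(feed_wants("corp.project.secret.hr", PEER_PATTERNS))  # False

The inverse patterns on your public peers keep the local traffic from
ever leaving the building.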

> > If you wanted to
> >implement a collaborative system, you could easily run a private
> >hierarchy and run a separate feed for it, which you could then monitor
> >for backlogs or issues.  You do not need to dump your local traffic on
> >the public Usenet.  This can happily coexist alongside public traffic
> >on your server.  It is easy to make it 100% reliable if that is a goal.
> 
> For sure, you can operate mostly reliable self-contained systems with 
> limited distribution.  We're all in agreement about this.

Okay, good. 

> >>The fact that it ended up having to implement TAKETHIS is only one
> >>indication of what a truly awful protocol it is.
> >
> >No, the fact that it ended up having to implement TAKETHIS is a nod to
> >the problem of RTT.
> 
> TAKETHIS was necessary to keep things running because of the dual 
> problem of RTT and lack of pipelining.  Taken together, these two 
> problems made it impossible to optimise incoming feeds, because of ... 
> well, flooding, which meant that even if you attempted an IHAVE, by the 
> time you delivered the article, some other feed might already have 
> delivered it.  TAKETHIS managed to sweep these problems under the 
> carpet, but it's a horrible, awful protocol hack.

It's basically cheap pipelining.  If you want to call pipelining in
general a horrible, awful protocol hack, then that's probably got
some validity.
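
For anyone who wasn't around for it: with IHAVE you burn a full round
trip per article just to learn whether the peer even wants it, while
TAKETHIS (the streaming extension that ended up in RFC 4644) pushes the
article immediately and reads the accept/reject codes back whenever they
arrive.  A rough sketch of the two wire patterns (message-IDs and
articles made up, dot-stuffing omitted):

# Rough sketch contrasting the two offer styles.  Not a real client;
# it only builds the byte streams so the RTT difference is visible.

def ihave_offer(msgid, article):
    """Classic IHAVE: offer, wait for 335/435, then maybe send the body,
    then wait again for 235/437.  Two round trips per article."""
    return [
        b"IHAVE " + msgid + b"\r\n",    # ...block here for "335 Send it"
        article + b"\r\n.\r\n",         # ...block again for "235 Accepted"
    ]

def takethis_stream(articles):
    """Streaming TAKETHIS: push every article back to back and read the
    239/439 responses asynchronously, so path RTT barely matters."""
    out = b""
    for msgid, article in articles.items():
        out += b"TAKETHIS " + msgid + b"\r\n" + article + b"\r\n.\r\n"
    return out

if __name__ == "__main__":
    feed = {
        b"<example-1@news.example>": b"From: a\r\n\r\nbody one",
        b"<example-2@news.example>": b"From: b\r\n\r\nbody two",
    }
    print(takethis_stream(feed).decode())

Same job as generic pipelining, just bolted on after the fact, which is
why it feels like a hack.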

> >It did and has.  The large scale binaries sites are still doing a
> >great job of propagating binaries with very close to 100% reliability.
> 
> which is mostly because there are so few large binary sites these days, 
> i.e. limited distribution model.

No, there are so few large binary sites these days because of consolidation
and buyouts.

> >I was there.
> 
> So was I, and probably so were lots of other people on nanog-l.  We all 
> played our part trying to keep the thing hanging together.

I'd say most of the folks here were out of this fifteen to twenty years
ago, well before the explosion of binaries in the early 2000s.

> >I'm the maintainer of Diablo.  It's fair to say I had a
> >large influence on this issue as it was Diablo's distributed backend
> >capability that really instigated retention competition, and a number
> >of optimizations that I made helped make it practical.
> 
> Diablo was great - I used it for years after INN-related head-bleeding. 
> Afterwards, Typhoon improved things even more.
> 
> >The problem for smaller sites is simply the immense traffic volume.
> >If you want to carry binaries, you need double digits Gbps.  If you
> >filter them out, the load is actually quite trivial.
> 
> Right, so you've put your finger on the other major problem relating to 
> flooding which isn't the distribution synchronisation / optimisation 
> problem: all sites get all posts for all groups which they're configured 
> for.  This is a profound waste of resources + it doesn't scale in any 
> meaningful way.

So if you don't like that everyone gets everything they are configured to
get, you are suggesting that they... what, exactly?  Shouldn't get everything
they want?

None of this changes the fact that it's a robust, mature protocol that was
originally designed for handling non-binaries and is actually pretty good in
that role.  Having the content delivered to each site means there is no
dependence on long-distance interactive IP connections, and each
participating site can keep the content for however long it deems useful.
Usenet keeps hummin' along under conditions that would break more modern
things like web forums.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


