Can P2P applications learn to play fair on networks?

Rich Groves rich at richgroves.com
Mon Oct 22 20:05:37 UTC 2007


I'm a bit late to this conversation but I wanted to throw out a few bits of 
info not covered.

A company called Oversi makes a very interesting solution for caching 
BitTorrent and some Kad-based overlay networks, all done through 
strategically placed taps and prefetching. This way you can "cache out" at 
whatever rate you want and mark traffic however you wish. It moves a 
statistically significant amount of traffic off of the upstream and onto a 
gigabit-Ethernet (or similar) attached cache server, solving a large part 
of the HFC problem. I am a fan of this method because it does not require a 
large footprint of inline devices, but rather a smaller footprint of 
statistics-gathering sniffers and caches distributed in places that make 
sense.
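
Roughly, the tap side of that design looks like this. A minimal sketch, 
assuming scapy on a box fed by the tap/SPAN; detection here is just 
spotting the BitTorrent handshake, and the popularity threshold is made up:

    # Watch a tap/SPAN feed for BitTorrent handshakes and collect
    # info_hashes that show up often enough to be worth prefetching.
    from collections import Counter
    from scapy.all import TCP, Raw, sniff

    HANDSHAKE = b"\x13BitTorrent protocol"   # 1-byte length + protocol string
    seen = Counter()

    def spot_handshake(pkt):
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            data = pkt[Raw].load
            # handshake layout: 20 bytes header, 8 reserved, 20-byte info_hash
            if data.startswith(HANDSHAKE) and len(data) >= 48:
                info_hash = data[28:48]
                seen[info_hash] += 1
                if seen[info_hash] == 10:    # arbitrary popularity threshold
                    print("prefetch candidate:", info_hash.hex())

    sniff(iface="eth1", filter="tcp", prn=spot_handshake, store=False)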

Also, the people at BitTorrent Inc. have a cache discovery protocol so that 
their clients can find cache servers holding the hashes they want.
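
I don't have the wire details of that discovery protocol, so purely as an 
illustration of the idea (the hostname, URL, and response format below are 
invented), the client-side lookup amounts to something like:

    # Hypothetical: ask a well-known discovery host whether a nearby
    # cache holds a given torrent's info_hash.
    import urllib.request

    def find_cache(info_hash_hex):
        url = ("http://cache-discovery.example.net/lookup?hash="
               + info_hash_hex)   # made-up host and path
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read().decode().strip()
        return body or None      # e.g. "cache7.isp.example.net:6881"

    print(find_cache("00" * 20))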

I am told these methods are in fact covered by the DMCA, but remember, I am 
no lawyer.


Feel free to reply directly if you want contacts.


Rich


--------------------------------------------------
From: "Sean Donelan" <sean at donelan.com>
Sent: Sunday, October 21, 2007 12:24 AM
To: <nanog at merit.edu>
Subject: Can P2P applications learn to play fair on networks?

>
> Much of the same content is available through NNTP, HTTP and P2P. The 
> content part gets a lot of attention and outrage, but network engineers 
> seem to be responding to something else.
>
> If its not the content, why are network engineers at many university 
> networks, enterprise networks, public networks concerned about the impact 
> particular P2P protocols have on network operations?  If it was just a
> single network, maybe they are evil.  But when many different networks
> all start responding, then maybe something else is the problem.
>
> The traditional assumption is that all end hosts and applications 
> cooperate and fairly share network resources.  NNTP is usually considered 
> a very well-behaved network protocol: big bandwidth, but sharing network 
> resources.  HTTP is a little less well-behaved, but still roughly seems to 
> share network resources equally with other users.  P2P applications seem
> to be extremely disruptive to other users of shared networks, and cause
> problems for other "polite" network applications.
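
A back-of-the-envelope illustration of why that happens: TCP converges 
toward an equal share per flow, not per user, so a client that opens many 
parallel connections takes a proportionally larger slice of a bottleneck. 
All numbers below are made up:

    link_mbps = 100
    web_users = 9             # one HTTP flow each
    p2p_flows = 40            # one P2P user running 40 connections
    total_flows = web_users + p2p_flows

    per_flow = link_mbps / total_flows
    print(f"each web user : {per_flow:.1f} Mb/s")              # ~2.0 Mb/s
    print(f"the P2P user  : {per_flow * p2p_flows:.1f} Mb/s")  # ~81.6 Mb/s
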
>
> While it may seem trivial from an academic perspective to do some things,
> for network engineers the tools are much more limited.
>
> User/programmer/etc. education doesn't seem to work well. Unless the 
> network enforces a behavior, the rules are often ignored. End users 
> generally can't change how their applications work today even if they 
> wanted to.
>
> Putting something in-line across a national/international backbone is 
> extremely difficult.  Besides, network engineers don't like additional
> in-line devices, no matter how much the salespeople claim they're fail-safe.
>
> Sampling is easier than monitoring a full network feed.  Using NetFlow 
> sampling or even SPAN-port sampling is good enough to detect major 
> issues.  For the same reason, asymmetric sampling is easier than requiring 
> symmetric (or synchronized) sampling.  But it also means there will be
> a limit on the information available to make good and bad decisions.
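
For scale, 1-in-N packet sampling just gets multiplied back up by N, which 
is plenty to spot a heavy hitter and too coarse for much else. Made-up 
numbers:

    sampling_rate = 1000            # router exports 1 of every 1000 packets
    sampled_bytes = 1_450_000       # bytes seen from one host in 5 minutes

    estimated_bytes = sampled_bytes * sampling_rate
    estimated_mbps = estimated_bytes * 8 / 300 / 1e6
    print(f"estimated ~{estimated_mbps:.0f} Mb/s from that host")   # ~39
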
>
> Out-of-band detection limits what controls network engineers can implement 
> on the traffic. USENET has a long history of generating third-party cancel 
> messages. IPS systems and even "passive" taps have long used third-party
> packets to respond to traffic. DNS servers have been used to redirect 
> subscribers to walled gardens. If applications responded to ICMP Source 
> Quench or other administrative network messages, that might be better; but 
> they don't.
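
The third-party-packet trick those IPS boxes and passive taps use is simple 
enough to sketch. Assuming scapy, and with addresses, ports, and the 
sequence number as placeholders you would lift from the observed flow:

    # Forge a TCP RST toward the client, spoofed from the server's address,
    # so the observed connection tears down.
    from scapy.all import IP, TCP, send

    def reset_flow(client, cport, server, sport, seq):
        rst = (IP(src=server, dst=client)
               / TCP(sport=sport, dport=cport, flags="R", seq=seq))
        send(rst, verbose=False)

    reset_flow("198.51.100.10", 51123, "203.0.113.5", 6881, seq=123456789)
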
>
> 


