BitTorrent swarms have a deadly bite on broadband nets

Frank Bulk frnkblk at iname.com
Mon Oct 22 22:02:11 UTC 2007


I wonder how quickly applications and network gear would implement QoS
support if the major ISPs offered their subscribers two queues: a
default queue that handled regular internet traffic but squashed P2P,
and a separate queue that, for an extra $5/month, let P2P flow
uninhibited. ISPs could then provision that second queue with cheaper
bandwidth.
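
In rough sketch form (the rates and tier names below are made-up
illustrations, not anything an ISP actually offers), the policy would
look something like this:

# Toy model of the two-queue offer; all numbers are invented.
DEFAULT_P2P_CAP_KBPS = 128        # P2P squashed on the default tier
PAID_TIER = "uninhibited"         # the extra-$5/month queue

def allowed_rate_kbps(flow_is_p2p, tier, line_rate_kbps=6000):
    """Return the rate a flow may use under this toy policy."""
    if flow_is_p2p and tier != PAID_TIER:
        return DEFAULT_P2P_CAP_KBPS   # squash P2P by default
    return line_rate_kbps             # everything else at line rate

print(allowed_rate_kbps(True, "default"))       # 128
print(allowed_rate_kbps(True, "uninhibited"))   # 6000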

But perhaps at the end of the day Andrew O. is right, and it's best to
have a single queue and throw more bandwidth at the problem.

Frank

-----Original Message-----
From: owner-nanog at merit.edu [mailto:owner-nanog at merit.edu] On Behalf Of Joel
Jaeggli
Sent: Sunday, October 21, 2007 9:31 PM
To: Steven M. Bellovin
Cc: Sean Donelan; nanog at merit.edu
Subject: Re: BitTorrent swarms have a deadly bite on broadband nets


Steven M. Bellovin wrote:

> This result is unsurprising and not controversial.  TCP achieves
> fairness *among flows* because virtually all clients back off in
> response to packet drops.  BitTorrent, though, uses many flows per
> request; furthermore, since its flows are much longer-lived than web
> or email flows, the latter never achieve their full speed even on a
> per-flow basis, given TCP's slow start.  The result is fair sharing
> among BitTorrent flows, which yields fairness among BitTorrent
> *users* only if they all use the same number of flows per request
> and have an even distribution of content being uploaded.
>
> It's always good to measure, but the result here is quite intuitive.
> It also supports the notion that some form of traffic engineering is
> necessary.  The particular point at issue in the current Comcast
> situation is not that they do traffic engineering but how they do it.
>
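
To put rough numbers on Steve's per-flow point (the flow counts here
are purely illustrative):

# Per-flow fairness means a user's share scales with flow count.
web_user_flows = 2     # a browser with a couple of connections
bt_user_flows = 40     # a BitTorrent client talking to many peers
total = web_user_flows + bt_user_flows

print(f"web user's share: {web_user_flows / total:.1%}")  # ~4.8%
print(f"BT user's share:  {bt_user_flows / total:.1%}")   # ~95.2%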

Dare I say it, it might be somewhat informative to engage in a
priority queueing exercise like the Internet2 scavenger service.

Into one priority queue goes all the normal traffic, which is allowed
to use up to 100% of link capacity. Into the other queue goes the
traffic you'd like to deliver at lower priority; given an
oversubscribed shared resource on the edge, that queue is capped at
some percentage of link capacity, beyond which performance begins to
noticeably suffer. When the link is under-utilized, low-priority
traffic can use a significant chunk of it; when high-priority traffic
is present, it crowds out the low-priority stuff before the link
saturates. Obviously, if high-priority traffic alone fills the link,
you have a provisioning issue.
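
In toy form (capacity units are arbitrary; this is a sketch of the
idea, not a real scheduler):

LINK_CAPACITY = 100  # arbitrary units per scheduling interval

def schedule(normal_demand, scavenger_demand):
    """One interval of strict-priority scheduling (toy model)."""
    normal_sent = min(normal_demand, LINK_CAPACITY)  # up to 100%
    leftover = LINK_CAPACITY - normal_sent
    scavenger_sent = min(scavenger_demand, leftover) # what's left
    return normal_sent, scavenger_sent

print(schedule(10, 90))  # idle link: (10, 90), scavenger thrives
print(schedule(95, 90))  # busy link: (95, 5), scavenger crowded out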

I2 characterized this as "worst effort" service. Apps and users could
probably be convinced to set DSCP bits themselves in exchange for
better performance for interactive apps and control traffic versus
worst-effort bulk data transfer.
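
For instance, a client could mark its own bulk transfers from
userland; a minimal Python sketch (the peer address and port are
placeholders, and CS1 is the conventional scavenger/lower-effort
code point):

import socket

DSCP_CS1 = 8   # scavenger class; DSCP sits in the top 6 TOS bits
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_CS1 << 2)
sock.connect(("peer.example.net", 6881))   # placeholder peer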

Obviously there's room for a discussion of net neutrality in here
someplace. However, the closer to the CMTS you do this, the more
likely it is to apply some locally relevant model of fairness.

>               --Steve Bellovin, http://www.cs.columbia.edu/~smb
>




