ISPs slowing P2P traffic...

Joe St Sauver joe at oregon.uoregon.edu
Wed Jan 9 21:00:41 UTC 2008


Deepak mentioned:

#However, my question is simply.. for ISPs promising broadband service. 
#Isn't it simpler to just announce a bandwidth quota/cap that your "good" 
#users won't hit and your bad ones will? 

Quotas may not always control the behavior of concern. 

As a hypothetical example, assume customers get 10 gigabytes worth of
traffic per month. That traffic could be more-or-less uniformly 
distributed across all thirty days, but it is more likely that there
will be some heavy usage days and light usage days, and some busy times
and some slow times. Shaping or rate limiting traffic will shave the 
peak load during high demand days (which is almost always the real issue),
while quota-based systems typically will not. 
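
To make the distinction concrete, here's a toy Python sketch (class
names and parameters are purely illustrative): a quota counter happily
admits arbitrarily bursty traffic until the monthly cap runs out, while
a token bucket caps the instantaneous rate no matter how little total
volume has been used.

    import time

    class MonthlyQuota:
        """Caps total volume; does nothing about WHEN traffic is sent."""
        def __init__(self, limit_bytes):
            self.limit = limit_bytes
            self.used = 0
        def allow(self, nbytes):
            if self.used + nbytes > self.limit:
                return False        # quota exhausted
            self.used += nbytes
            return True

    class TokenBucket:
        """Shaves peaks: enforces a sustained rate plus a bounded burst."""
        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.burst = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()
        def allow(self, nbytes):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens < nbytes:
                return False        # over the peak rate: drop or delay
            self.tokens -= nbytes
            return True

The two mechanisms constrain entirely different things, which is why a
quota alone won't shave a peak-hour spike.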

Quota systems can also lead to weird usage artifacts. For example,
assume that users can track how much of their quota they've used --
as you get to the end of each period, people may be faced with 
"use it or lose it" situations, leading to end-of-period spikes in
usage. 

Quotas (at least in higher education contexts) can also lead to 
things like account sharing ("Hey, I'm out of 'credits' for this
month -- you never use yours, so can I login using your account?"
"Sure..." -- even if acceptable use policies prohibit that sort of 
thing).

And then what do you do with users who reach their quota? Slow them
down? Charge them more? Turn them off? All of those options are
possible, but each comes with what can be its own hellish pain. 

And finally, manipulating all types of traffic indiscriminately could
also be bad if customers have a third-party VoIP service running, and
you block/throttle/otherwise mess with voice traffic that needs to be
left untouched when they need to make a 911 call or whatever.

#Operationally, why not just lash a few additional 10GE cross-connects 
#and let these *paying customers* communicate as they will?

I think the bottleneck is usually closer to the edge...

Part of the issue is that consumer connections are often priced on 
the assumption of a relatively light usage model, and an assumption
that much of that traffic may be amenable to "tricks" (such as 
passive caching, or content served from local Akamai stacks, etc.
-- although this is certainly less of an issue than it once was). 

Replace that model with one where consumers actually USE the entire 
connection they've purchased, rather than just some small statistically 
multiplexed fraction thereof, and make all traffic encrypted/opaque 
(and thus unavailable for potential "optimized delivery") and the 
default pricing model can break. 
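
Some back-of-the-envelope arithmetic (Python, with every number
invented purely for illustration, not anyone's actual provisioning
figures) shows how quickly the statistical multiplexing assumption
breaks:

    subscribers      = 500
    access_rate_mbps = 10        # the rate each customer "buys"
    uplink_mbps      = 1000      # shared upstream capacity

    oversub = subscribers * access_rate_mbps / uplink_mbps
    print(f"oversubscription ratio: {oversub:.0f}:1")        # 5:1

    # The model holds only while average utilization stays low:
    for util in (0.02, 0.10, 0.50):
        demand = subscribers * access_rate_mbps * util
        verdict = "fits" if demand <= uplink_mbps else "exceeds uplink"
        print(f"at {util:.0%} average use: {demand:.0f} Mbps ({verdict})")

At 2% average use everything fits comfortably; at 50%, the same
customer base needs two and a half times the provisioned uplink.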

You then have a choice to make:

-- cover those increased costs (all associated with a relatively small 
   number of users living in the tail of the consumption distribution) 
   by increasing the price of the service for everyone (hard in a highly 
   competitive market), or 

-- deal with just that comparative handful of users who don't fit the 
   presumptive model (shape their traffic, encourage them to buy from 
   your competitor, decline to renew their contract, whatever). 

The latter is probably easier than the former. 

#I don't see how Operators could possibly debug connection/throughput 
#problems when increasingly draconian methods are used to manage traffic 
#flows with seemingly random behaviors. This seems a lot like the 
#evil-transparent caching we were concerned about years ago.

Middleboxes can indeed make things a mess, but at least in some 
environments (e.g., higher ed residential networks), they've become 
pretty routine. Network transparency should be the goal, but 
operational transparency (e.g., telling people what you're doing to
their traffic) may be an acceptable alternative in some circumstances.

#What can be done operationally?

Tiered service is probably the cleanest option: cheap "normal" service
with shaping and other middlebox gunk for price sensitive populations 
with modest needs, and premium clear pipe service where the price
reflects the assumption that 100% of the capacity provisioned will be
used. Sort of like what many folks already do by offering "residential"
and "commercial" grade service options, I guess...

#For legitimate applications:
#
#Encouraging "encryption" of more protocols is an interesting way to 
#discourage this kind of shaping.

Except encryption isn't enough. Even if I can't see the contents of 
packets, I can still do traffic analysis on the ASNs or FQDNs or IPs 
involved, the rate and number of packets transferred, the number of 
concurrent sessions open to a given address of interest, etc. 
Encrypted P2P traffic over port 443 doesn't look the same as encrypted 
normal web traffic. :-)
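
To illustrate, a toy Python heuristic (thresholds pulled out of thin
air) that needs no payload visibility at all:

    def looks_like_p2p(flow):
        """flow: per-host metadata an operator can observe --
           'peers'      distinct remote IPs contacted
           'up_bytes'   bytes sent, 'down_bytes' bytes received
           'duration_s' how long the sessions stay up"""
        upload_heavy = flow["up_bytes"] > 0.5 * flow["down_bytes"]
        many_peers   = flow["peers"] > 50
        long_lived   = flow["duration_s"] > 600
        # Browsing: few peers, download-dominated, short-lived.
        # Bulk P2P: many peers, symmetric/upload-heavy, long-lived.
        return many_peers and upload_heavy and long_lived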

Unless you have an encrypted pipe that's *always* up and always *full*
(padding lulls in real traffic with random filler sonnets or whatever), 
and that connection is only exchanging traffic with one and only one 
remote destination, traffic analysis will almost always yield 
interesting insights, even if the body of the traffic is inaccessible. 
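
For completeness, here's a sketch of that countermeasure (frame size,
tick interval, and the send() transport callback are all assumptions,
not a real protocol):

    import os, queue, time

    FRAME = 1400     # constant frame size, bytes
    TICK  = 0.01     # constant send interval, seconds

    def constant_rate_sender(send, outq):
        # Emit one fixed-size frame per tick, padding with random
        # filler when idle, so the link looks identical whether it's
        # carrying real traffic or nothing at all.
        while True:
            try:
                payload = outq.get_nowait()
            except queue.Empty:
                payload = b""
            # A real implementation would fragment oversized payloads
            # rather than truncate them; this is just a sketch.
            payload = payload[:FRAME]
            send(payload + os.urandom(FRAME - len(payload)))
            time.sleep(TICK)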

#Using IPv6 based IPs instead of ports would also help by obfuscating 
#protocol and behavior. Even IP rotation through /64s (cough 1 IP per 
#half-connection anyone).

Some traffic also stands out simply because only "interesting people" 
exhibit the behavior in question. :-) That could be port hopping, or 
nailing up a constantly full encrypted connection that only talks 
to one other host. :-)

#My caffeine hasn't hit, so I can't think of anything else. Is this 
#something the market will address by itself?

I think so. At some point there's sufficient capacity everywhere, edge
and core, that (a) there's no pressing operational need to shape 
traffic, and (b) the shaping devices available for the high capacity
circuits are prohibitively expensive. That's part of the discussion
I offered in "Capacity Planning and System and Network Security," a
talk I did for the April '07 Internet2 Member Meeting, see 
http://www.uoregon.edu/~joe/i2-cap-plan/internet2-capacity-planning.ppt
(or .pdf) at slides 44-45 or so. 

I'd also note that if end user hosts are comparatively clean and under 
control, traffic from a few "outlier" users is a lot easier to absorb than 
if you're infested with zombied boxes. In some cases, those bumps in the 
wire may not be targeting P2P traffic, but rather artifacts associated 
with botted hosts which are running excessively hot. 

Regards,

Joe St Sauver (joe at oregon.uoregon.edu)

Disclaimer: all opinions strictly my own. 


