[Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

michael.dillon at bt.com michael.dillon at bt.com
Tue Apr 22 05:55:58 CDT 2008


 
> Time to push multicast as transport for bittorrent? 

Bittorrent clients already do a form of multicast at the application
layer; they just do it in a crude way that does not match network
topology as well as it could. Moving to IP multicast raises a whole
host of technical issues, such as the lack of multicast peering, and
solving those issues requires ISP cooperation, i.e. support for global
multicast.

But there is another way: software developers could build a modified
client that depends on a topology guru for information about the
network topology. This topology guru would be a piece of software run
by an ISP, which communicates with the topology gurus in neighbouring
ASes. These gurus learn the topology using some kind of protocol, much
like a routing protocol. They also carry local intelligence configured
by the ISP, such as allowed traffic rates over certain paths at certain
times of day, and they share all of that information in order to
optimize the overall downloading of all files to all clients that share
the same guru. Some ISPs have local DSL architectures in which it makes
better sense to download a file from a remote location than from the
guy next door. In that case, an ISP could configure a guru to prefer
circuits into its data centre, then operate clients in the data centre
that effectively cache files. But the caching part is optional.
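To make the idea concrete, here is a rough Python sketch of how a guru
might rank candidate peers for a client. The cost labels, numbers, and
function names are all invented for illustration; a real guru would
derive its cost map from the routing-like protocol described above plus
the ISP's configured policy.

```python
# Hypothetical sketch of a topology guru ranking candidate peers.
# The labels and costs below are illustrative assumptions, not any
# deployed protocol.

def rank_peers(candidates, cost_map, default_cost=100):
    """Return candidate peer IPs ordered by ISP-configured path cost.

    candidates is a list of (peer_ip, prefix_label) pairs, where the
    label names the network segment the peer sits on. cost_map maps
    each label to a relative cost; lower is preferred."""
    return [ip for ip, label in
            sorted(candidates, key=lambda c: cost_map.get(c[1], default_cost))]

# ISP policy matching the DSL example: prefer the data centre circuit
# over a remote AS, and both over the guy next door on local DSL.
cost_map = {"datacentre": 5, "peer-as": 20, "local-dsl": 50}
candidates = [("10.0.0.7", "local-dsl"),
              ("192.0.2.9", "datacentre"),
              ("198.51.100.3", "peer-as")]
print(rank_peers(candidates, cost_map))  # data-centre peer ranks first
```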

Then a bittorrent client doesn't have to guess how to get files
quickly; it just follows the guru's instructions. Part of this would
involve cooperating with all other clients attached to the same guru,
so that no client downloads distant blocks of data that have already
been downloaded by another local client. This is the part that really
starts to look like IP multicast, except that it doesn't rely on all
clients functioning in real time. It also looks like NNTP news servers,
except that the caching is all done on the clients; the gurus never
cache or download files.
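The "no duplicate distant downloads" coordination could be as simple as
the guru handing each missing block to exactly one local client. This
is a toy sketch under that assumption (round-robin assignment; the
clients would then trade blocks among themselves over cheap local
paths):

```python
def assign_blocks(missing_blocks, clients):
    """Assign each block that must be fetched from a distant peer to
    exactly one local client, round-robin, so no block crosses the
    expensive path twice. Purely illustrative of the coordination
    idea, not a real tracker or guru protocol."""
    assignments = {c: [] for c in clients}
    for i, block in enumerate(sorted(missing_blocks)):
        assignments[clients[i % len(clients)]].append(block)
    return assignments
```

With two local clients and blocks {1, 2, 3, 4}, each client fetches two
blocks remotely and gets the other two from its neighbour.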

For this to work, you need to start by getting several ISPs to buy in,
help with the design work, and then deploy the gurus. Once this proves
itself in terms of managing how and *WHEN* bandwidth is used, it should
catch on quite quickly with ISPs. Note that a key part of this
architecture is that it allows the ISP to open up the throttle on
downloads during off-peak hours, so that most end users get a
predictable service with all downloads completed overnight.
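The off-peak throttle is just time-of-day policy in the guru's local
configuration. A minimal sketch, assuming an invented schedule format
of (start_hour, end_hour, rate) entries that may wrap past midnight:

```python
def allowed_rate_mbps(hour, schedule):
    """Look up the ISP-configured download cap for the given hour
    (0-23). schedule is a list of (start_hour, end_hour, rate_mbps)
    entries; an entry with start > end wraps across midnight.
    The format is an assumption made up for this example."""
    for start, end, rate in schedule:
        in_window = (start <= hour < end) if start < end \
            else (hour >= start or hour < end)
        if in_window:
            return rate
    return 0  # no matching window: stay closed

# Illustrative policy: open the throttle overnight, cap it at peak.
schedule = [(23, 7, 100), (7, 23, 10)]
```

A client asking the guru at 02:00 would be told it may run at 100 Mb/s;
at noon it would be held to 10 Mb/s, which is what lets the ISP promise
"your queue finishes overnight" without peak-hour congestion.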

--Michael Dillon



