[Nanog] Lies, Damned Lies, and Statistics [Was: Re: ATT VP: Internet to hit capacity by 2010]

Petri Helenius petri at helenius.fi
Tue Apr 22 12:12:30 UTC 2008


michael.dillon at bt.com wrote:
> But there is another way. That is for software developers to build a
> modified client that depends on a topology guru for information on the
> network topology. This topology guru would be some software that is run
>   
While the current BitTorrent implementation is suboptimal for large 
swarms (where the number of adjacent peers is significantly smaller than 
the total number of participants), I fail to see the mathematics by which 
topology information would bring superior results compared to the usual 
greedy algorithm, where data is requested from the peers from which it is 
already flowing at the best rates. If local peers with sufficient 
upstream bandwidth exist, the majority of the data blocks are already 
retrieved from them.
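To make the greedy idea concrete, here is a rough Python sketch (the 
names and numbers are mine, not any real client's choking logic): keep 
whichever peers are currently delivering at the best measured rates, and 
locality falls out of the measurements on its own.

    from dataclasses import dataclass

    @dataclass
    class Peer:
        addr: str
        rate_bps: float  # measured download rate from this peer

    def pick_peers(peers: list[Peer], slots: int) -> list[Peer]:
        # Greedy selection: keep the `slots` peers with the highest
        # observed rates. A nearby peer with ample upstream wins these
        # slots naturally, with no explicit topology information.
        return sorted(peers, key=lambda p: p.rate_bps, reverse=True)[:slots]

    swarm = [
        Peer("peer-on-same-isp", 4_000_000.0),  # local, fast
        Peer("distant-peer-1", 400_000.0),
        Peer("distant-peer-2", 250_000.0),
    ]
    for p in pick_peers(swarm, slots=2):
        print(p.addr, p.rate_bps)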

In many locales, ISPs tend to limit the available upstream bandwidth on 
their consumer connections, which usually causes more distant bits to be 
delivered instead.

I think the most important metric to study is the number of times the 
same piece of data is transmitted in a defined time period, and then to 
figure out how to optimize for that. For a new episode of BSG, there are 
a few hundred thousand copies in the first hour and a million or so in 
the first few days. With the headers and overhead, we might already be 
hitting a petabyte per episode. RSS feeds seem to shorten the 
distribution ramp-up from release.
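As a back-of-envelope check on that figure (the per-copy size and 
overhead are my own assumptions, for illustration only):

    copies = 1_300_000                 # a million-plus copies in a few days
    episode_bytes = 700 * 1024**2      # assume ~700 MB per copy
    overhead = 1.05                    # assume ~5% for headers, retransmits

    total_bytes = copies * episode_bytes * overhead
    print(f"{total_bytes / 1024**5:.2f} PiB")   # prints roughly 0.89 PiB

So a million-odd copies of a ~700 MB file already lands within shouting 
distance of a petabyte.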

The p2p world needs more high-upstream "proxies" to make it more 
effective. I think locality with current torrent implementations would 
emerge automatically. However, there are quite a few parties who are 
happy to make it as bad as they can :-)

Is there a problem that needs solving which the Akamais of the world 
have not already solved?

Pete