Network end users to pull down 2 gigabytes a day, continuously?

Patrick W. Gilmore patrick at ianai.net
Sun Jan 7 15:54:03 UTC 2007


On Jan 7, 2007, at 3:17 PM, Brandon Butterworth wrote:

>> The real problem with P2P networks is that they don't
>> generally make download decisions based on network
>> architecture.
>
> Indeed, that's what I said. Until then, ISPs can only fix it with P2P-
> aware caches. If the protocols did it, they wouldn't need the caches,
> though P2P efficiency might go down.
>
> It'll be interesting to see how Akamai & co. counter this trend. At the
> moment they can say it's better to use a local Akamai cluster than have
> P2P taking content from anywhere on the planet. Once it's mostly local
> traffic, it's pretty much equivalent to Akamai. It's still moving
> routing/TE up the stack, though, so it will affect the ISPs' network ops.
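
To make the peer-selection point above concrete, here is a minimal sketch
of what locality-aware peer ranking could look like, assuming each
candidate peer has already been tagged with its origin ASN (say, from an
out-of-band IP-to-ASN lookup). The names and structure are hypothetical,
not any real client's API:

  # Hypothetical sketch of locality-aware peer selection for a P2P client.
  # Assumes each candidate peer is tagged with its origin ASN, e.g. via an
  # out-of-band IP-to-ASN lookup; no real client's API is implied.

  def rank_peers(peers, local_asn):
      """Order peers so same-AS (on-net) peers are tried first."""
      # Stable sort: key 0 sorts before 1, so on-net peers come first
      # while each group keeps its original order.
      return sorted(peers, key=lambda p: 0 if p["asn"] == local_asn else 1)

  if __name__ == "__main__":
      peers = [
          {"ip": "198.51.100.7", "asn": 64500},
          {"ip": "203.0.113.9",  "asn": 64496},  # same AS as the client
          {"ip": "192.0.2.33",   "asn": 64501},
      ]
      for p in rank_peers(peers, local_asn=64496):
          print(p["ip"], p["asn"])

A real client would have to trade this preference off against peer
capacity and piece availability, which is the efficiency hit Brandon
mentions.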

ISPs don't pay Akamai, content owners do.

Content owners are usually not concerned with the same things an
ISP's network ops are.  (I'm not saying that's a good thing, I'm
just saying that's reality.  Life might be much better all around if
the two groups interacted more.  Although one could say that Akamai
fills that gap as well. :)

Anyway, a content provider is going to do what's best for their
content, not what's best for the ISP.  It's a difficult argument to
make to a content provider that they should put their content on
millions of end-user HDs, depending on grandma to provide good
quality streaming to Joe Smith down the street.  At least in my
experience.

-- 
TTFN,
patrick



