Google wants to be your Internet

Stephen Sprunk stephen at sprunk.org
Sun Jan 21 04:32:28 UTC 2007


Thus spake "Adrian Chadd" <adrian at creative.net.au>
> On Sun, Jan 21, 2007, Charlie Allom wrote:
>> > This is a pure example of a problem from the operational front which
>> > can be floated to research and the industry, with smarter solutions
>> > than port blocking and QoS.
>>
>> This is what I am interested/scared by.
>
> It's not that hard a problem to get on top of. Caching, unfortunately,
> continues to be viewed as anathema by ISP network operators in the
> US. Strangely enough, the caching technologies aren't a problem with
> the content -delivery- people.

US ISPs get paid on bits sent, so they're going to be _against_ caching 
because caching reduces revenue.  Content providers, OTOH, pay the ISPs 
for bits sent, so they're going to be _for_ caching because it increases 
profits.  The resulting stalemate isn't hard to predict.

> I've had a few ISPs out here in Australia indicate interest in a cache
> that could do the normal stuff (http, rtsp, wma) and some of the p2p
> stuff (bittorrent especially) with a smattering of 
> QoS/shaping/control -
> but not cost upwards of USD$100,000 a box. Lots of interest, no
> commitment.

Basically, they're looking for a box that delivers what P2P networks 
inherently do by default.  If the rate-limiting is sane, then only a 
copy (or two) will need to come in over the slow overseas pipes, and 
it'll be replicated and reassembled locally over fast pipes.  What, 
exactly, is a middlebox supposed to add to this picture?
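To make that concrete, here is a rough sketch (Python, with made-up 
prefixes, not any real client's code) of the local-peer preference a 
client or tracker can already apply on its own:

import ipaddress

# Prefixes assumed to be "ours"; purely illustrative values.
LOCAL_PREFIXES = [ipaddress.ip_network(p)
                  for p in ("203.0.113.0/24", "198.51.100.0/24")]

def is_local(peer_ip):
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in net for net in LOCAL_PREFIXES)

def pick_peers(candidates, want=30):
    # Put peers inside our own prefixes first; only fall back to
    # remote (overseas) peers when local ones can't fill the slots.
    ranked = sorted(candidates, key=lambda ip: 0 if is_local(ip) else 1)
    return ranked[:want]

print(pick_peers(["192.0.2.7", "203.0.113.42", "198.51.100.9"]))

The same sorting can just as easily happen on the tracker side; either 
way, once a handful of local peers hold the pieces, the overseas link 
only ever sees a copy or two.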

> It doesn't help (at least in Australia) where the wholesale model of
> ADSL isn't content-replication-friendly: we have to buy ATM or
> ethernet pipes to upstreams and then receive each session via L2TP.
> Fine from an aggregation point of view, but missing the true usefulness
> of content replication and caching - right at the point where your
> customers connect in.

So what you have is a Layer 8 problem: the wholesale model doesn't let 
the logical topology match the physical topology.  No magical box is 
going to save you from hairpinning traffic between a thousand different 
L2TP pipes. 
The best you can hope for is that the rate limits for those L2TP pipes 
will be orders of magnitude larger than the rate limit for them to talk 
upstream -- and you don't need any new tools to do that, just 
intelligent use of what you already have.
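For illustration only (Python, hypothetical numbers; in practice you'd 
use the policing/shaping knobs already in the BRAS or aggregation 
router), the whole trick is two rate limits a couple of orders of 
magnitude apart:

import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # refill rate, bytes/sec
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, size_bytes):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False

# Hypothetical figures: 1 Gbit/s between local L2TP sessions,
# 10 Mbit/s for anything that has to go upstream.
local_bucket    = TokenBucket(1_000_000_000, 1_500_000)
upstream_bucket = TokenBucket(10_000_000, 150_000)

def admit(packet_len, src_is_local, dst_is_local):
    # Traffic that stays between our own subscribers gets the big
    # bucket; everything else is held to the upstream rate.
    bucket = local_bucket if (src_is_local and dst_is_local) else upstream_bucket
    return bucket.allow(packet_len)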

> (Disclaimer: I'm one of the Squid developers. I'm getting an 
> increasing
> amount of interest from CDN/content origination players but none from
> ISPs. I'd love to know why ISPs don't view caching as a viable option
> in today's world and what we could do to make it easier for y'all.)

As someone who voluntarily used a proxy and gave up, and who has worked 
in an IT dept that did the same thing, I find it pretty easy to 
explain: there are too many sites that aren't cache-friendly.  It's 
easy for content folks to put up their own caches (or Akamaize) because 
they can design their sites to account for it, but an ISP runs too much 
risk of breaking users' experiences when it applies caching 
indiscriminately to the entire Web.  Non-idempotent GET requests were 
the single biggest breakage I ran into, and the proliferation of 
dynamically-generated "Web 2.0" pages (or faulty Expires values) is the 
biggest factor that wastes bandwidth by preventing caching.
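For the curious, the conservative decision a shared cache has to make 
per response looks roughly like this (Python sketch with simplified 
header logic, not Squid's actual rules).  Note that a non-idempotent 
GET is exactly the case a cache can't detect from headers, which is why 
it bites so hard:

from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

def cacheable(method, url, headers):
    # headers: dict of response header name -> value
    if method != "GET":
        return False                  # never cache POST/PUT/etc.
    cc = headers.get("Cache-Control", "").lower()
    if "private" in cc or "no-store" in cc or "no-cache" in cc:
        return False
    names = {k.lower() for k in headers}
    if "set-cookie" in names:
        return False                  # probably per-user content
    if "?" in url and "max-age" not in cc and "expires" not in names:
        return False                  # dynamic URL, no freshness info
    expires = headers.get("Expires")
    if expires:
        try:
            if parsedate_to_datetime(expires) <= datetime.now(timezone.utc):
                return False          # already stale (or a bogus date)
        except (TypeError, ValueError):
            return False              # malformed Expires; play it safe
    return True

Every branch that returns False is bandwidth the cache can't save, and 
the one case it can't even test for (a GET with side effects) is the 
one that breaks users.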

S

Stephen Sprunk         "God does not play dice."  --Albert Einstein
CCIE #3723         "God is an inveterate gambler, and He throws the
K5SSS        dice at every possible opportunity." --Stephen Hawking 



