Google wants to be your Internet

Mark Smith nanog at
Sun Jan 21 02:14:22 UTC 2007

On Sun, 21 Jan 2007 08:33:26 +0800
Adrian Chadd <adrian at> wrote:

> On Sun, Jan 21, 2007, Charlie Allom wrote:
> > > This is a pure example of a problem from the operational front which can
> > > be floated to research and the industry, with smarter solutions than port
> > > blocking and QoS.
> > 
> > This is what I am interested/scared by.
> It's not that hard a problem to get on top of. Caching, unfortunately,
> continues to be viewed as anathema by ISP network operators in the US.
> Strangely enough, the caching technologies aren't a problem for the
> content -delivery- people.
> I've had a few ISPs out here in Australia indicate interest in a cache that
> could do the normal stuff (http, rtsp, wma) and some of the p2p stuff (bittorrent
> especially) with a smattering of QoS/shaping/control - but not cost upwards of
> USD$100,000 a box. Lots of interest, no commitment.
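
(For what it's worth, the HTTP caching and shaping parts of that wish
list are already just configuration in a stock Squid. A rough sketch,
with invented addresses and rates:

  http_port 3128
  cache_mem 256 MB
  cache_dir ufs /var/spool/squid 10240 16 256

  acl localnet src 203.0.113.0/24        # invented customer range
  http_access allow localnet
  http_access deny all

  # class-2 delay pool: unlimited aggregate, ~64 KB/s per customer IP
  delay_pools 1
  delay_class 1 2
  delay_parameters 1 -1/-1 64000/64000
  delay_access 1 allow localnet
  delay_access 1 deny all

The rtsp/wma/bittorrent pieces are of course the parts a stock Squid
doesn't cover.)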

I think it is probably because building caching infrastructure that
is high performance and highly available enough to make a difference
is either non-trivial or non-cheap. If it comes down to introducing
something new (new software / hardware, new concepts, new
complexity, new support skills, another thing that can break, etc.)
versus just growing something you already have, already manage, and
have had since day one as an ISP - additional routers and/or higher
capacity links - then growing the network wins when the dollar
amount is the same, because it is simpler and easier.
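
A back-of-envelope comparison shows the shape of that decision. All
of the numbers below are invented; the point is only that the cache's
savings have to beat link growth by enough to pay for the added
complexity:

  # All figures hypothetical: transit $/Mbit/s/month, Mbit/s of
  # demand growth, achievable cache hit ratio, and the amortised
  # monthly cost of the cache plus its care and feeding.
  transit_price = 100.0
  demand_growth = 1000
  hit_ratio = 0.35
  cache_monthly_cost = 15000.0

  grow_links = demand_growth * transit_price
  add_cache = (demand_growth * (1 - hit_ratio) * transit_price
               + cache_monthly_cost)

  print("grow links: $%.0f/month" % grow_links)   # $100000/month
  print("add cache:  $%.0f/month" % add_cache)    # $80000/month

Even where the cache comes out ahead on raw dollars, as it does with
these made-up figures, the margin still has to cover the new moving
parts before it wins the argument.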

> It doesn't help (at least in Australia) where the wholesale model of ADSL isn't
> content-replication-friendly: we have to buy ATM or ethernet pipes to upstreams
> and then receive each session via L2TP. Fine from an aggregation point of view,
> but missing the true usefulness of content replication and caching - right at
> the point where your customers connect in.

I think that if even "pure" networking people (i.e. those who just
focus on shifting IP packets around) accept that situation, while
also believing in keeping traffic local, the reason it persists is
probably economic rather than technical. Inter-ISP peering at the
exchange (C.O.) would be the ideal; however, it seems that there
isn't enough inter-customer (per-ISP or between-ISP) bandwidth
consumption at each exchange to justify the additional financial and
complexity costs of doing it.
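
As a rough rule of thumb (the terms are mine, not measured figures),
peering at an exchange only pays for itself when

  traffic_kept_local * upstream_cost_saved
      > port + colo + ops cost of one more peering point

and the amount of traffic customers on the same exchange send each
other rarely pushes the left-hand side that high.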

Inter-customer traffic forwarding usually happens at the next level
up in the hierarchy - the regional / city level - which at this time
is probably the most economic place to do it.

> (Disclaimer: I'm one of the Squid developers. I'm getting an increasing amount
> of interest from CDN/content origination players but none from ISPs. I'd love
> to know why ISPs don't view caching as a viable option in today's world and
> what we could do to make it easier for y'all.)

Maybe that really means your customers (i.e. the people who benefit
most from your software) are now the content distributors, not the
ISPs. While the distinction might seem somewhat minor, I think ISPs
generally tend to have more of a "where is this traffic probably
wanting to go, and how do we build infrastructure to get it there"
viewpoint, and less of a "what is this traffic" one. In other words,
ISPs tend to be more focused on trying to optimise for all types of
traffic rather than for one or a select few particular types,
because what the customer does with the bandwidth they purchase is
up to the customer themselves. If you spend time optimising for one
type of traffic you're either neglecting or negatively impacting
another type. Spending time on general optimisations that benefit
all types of traffic is usually the better way to spend it. I think
one of the reasons for ISP interest in the "p2p problem" could be
that it is reducing the normal benefit-to-cost ratio of general
traffic optimisation. Restoring that regular benefit-to-cost ratio
is probably the fundamental goal of solving the "p2p problem".

My suggestion to you as a Squid developer would be to focus on
caching or, more generally, localising of P2P traffic. It doesn't
seem that the P2P application developers are doing it, maybe because
they don't care, as it doesn't directly impact them, or maybe
because they don't know how to. If Squid could provide a
traffic-localising solution that appears to an ISP as just another
traffic sink or source (e.g. a server), rather than something that
requires enabling knobs on the network infrastructure for special
handling, or special traffic engineering for it to work, I think
you'd get quite a bit of interest.
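
To make that concrete, the heart of such a box could be as simple as
re-ordering the peer lists a BitTorrent tracker hands back so that
on-net peers get tried first. A toy sketch in Python (the prefixes
are invented, and a real implementation would sit in the tracker
response path):

  import ipaddress

  # Invented prefixes standing in for the ISP's own address space.
  LOCAL_PREFIXES = [ipaddress.ip_network(p)
                    for p in ("203.0.113.0/24", "198.51.100.0/24")]

  def is_local(peer_ip):
      addr = ipaddress.ip_address(peer_ip)
      return any(addr in net for net in LOCAL_PREFIXES)

  def localise(peer_ips):
      # Stable sort: on-net peers first, original order kept
      # within each group.
      return sorted(peer_ips, key=lambda ip: not is_local(ip))

  print(localise(["192.0.2.10", "203.0.113.5", "198.51.100.77"]))
  # -> ['203.0.113.5', '198.51.100.77', '192.0.2.10']

To the network, the box doing this is just another host - no special
forwarding, no knobs.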

Just my 2c.



        "Sheep are slow and tasty, and therefore must remain constantly
                                   - Bruce Schneier, "Beyond Fear"
