[Nanog] ATT VP: Internet to hit capacity by 2010

michael.dillon at bt.com michael.dillon at bt.com
Wed Apr 23 09:39:33 UTC 2008


> > If the content senders do not want this dipping and levelling 
> > off, then they will have to foot the bill for the network capacity.
> 
> That's kind of the funniest thing I've seen today, it sounds 
> so much like an Ed Whitacre.  

> Then Ed learns that 
> the people he'd like to charge for the privilege of using 
> "his" pipes are already paying for pipes.

If they really were paying for pipes, there would be no issue.
The reason there is an issue is that network operators have
been assuming that consumers, and content senders, would not use
100% of the access link capacity through the ISP's core network.
When you assume any kind of overbooking, you are taking the
risk that you have underpriced the service. The ideas people are
talking about, relating to pumping lots of video to every end user,
are fundamentally at odds with this overbooking model. The risk
level has changed from one in 10,000 to one in ten or one in five.
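
Here is a rough sketch of why that shift breaks the overbooking
arithmetic. The access speed, contention ratio and video bitrate below
are invented round numbers for illustration only, not figures from any
real operator.

    # Illustrative arithmetic only: the link sizes, contention ratio and
    # take-up figures below are made-up numbers.

    ACCESS_MBPS = 8.0      # sold access link speed per subscriber
    SUBSCRIBERS = 10_000
    CONTENTION = 50        # classic overbooking: 50 subscribers share 1 unit
    VIDEO_MBPS = 4.0       # sustained rate of one video stream

    # Core/backhaul capacity actually provisioned under overbooking.
    provisioned = SUBSCRIBERS * ACCESS_MBPS / CONTENTION

    def demand(video_takeup: float) -> float:
        """Aggregate demand if `video_takeup` of subscribers stream video
        continuously while the rest stay bursty."""
        streaming = SUBSCRIBERS * video_takeup * VIDEO_MBPS
        bursty = SUBSCRIBERS * (1 - video_takeup) * ACCESS_MBPS / CONTENTION
        return streaming + bursty

    for takeup in (0.0001, 0.1, 0.2):   # 1 in 10,000 vs 1 in 10 vs 1 in 5
        d = demand(takeup)
        print(f"take-up 1 in {round(1/takeup):>6}: demand {d:8.0f} Mbps "
              f"vs provisioned {provisioned:.0f} Mbps ({d/provisioned:.1f}x)")

With 1 in 10,000 the core barely notices; at 1 in 10 or 1 in 5 the same
network is several times over its provisioned capacity, which is the
underpricing risk.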

> > But today, content production is cheap, and competition has driven
> > the cost of content down to zero.
> 
> Right, that's a "problem" I'm seeing too.

Unfortunately, the content owners still think that content is 
king and that they are sitting on a gold mine. They fail to see
that they are only raking in revenues because they spend an awful
lot of money on marketing their content. And the market is now
so diverse (YouTube, indie bands, immigrant communities) that
nobody can get anywhere close to 100% share. The long tail seems
to be getting a bigger share of the overall market.

> Host the video on your TiVo, or your PC, and take advantage 
> of your existing bandwidth.  (There are obvious non- 
> self-hosted models already available, I'm not focusing on 
> them, but they would work too)

Not a bad idea, as long as the asymmetry of ADSL does not leave the
upstream side too small (some rough numbers on that below). But
this all goes away if we really do get the kind of distributed 
data centers that I envision, where most business premises convert
their machine rooms into generic compute/storage arrays.
I should point out that the enterprise world is moving this way,
not just Google/Amazon/Yahoo. For instance, many companies are moving
applications onto virtual machines that are hosted on relatively
generic compute arrays, with storage all in SANs. VMware has a big
chunk of this market, but Xen-based solutions, with their ability to
migrate running virtual machines, are also in use. And since a lot
of enterprise software is built with Java, clustering software like
Terracotta makes it possible to build a compute array with several
JVMs per core and scale applications with a lot less fuss than
traditional cluster operating systems.
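
To put a rough number on the ADSL asymmetry point above, here is a
back-of-the-envelope sketch of serving one video file from home. The
link speeds and file size are invented for illustration, not
measurements of any real ADSL product.

    # Back-of-the-envelope only: speeds and file size are invented.

    FILE_GB = 1.5          # roughly one hour of reasonable-quality video
    DOWN_MBPS = 8.0        # assumed advertised ADSL downstream
    UP_MBPS = 0.448        # assumed ADSL upstream, the asymmetric side

    def transfer_hours(size_gb: float, rate_mbps: float) -> float:
        """Hours to move size_gb over a link of rate_mbps, ignoring overhead."""
        bits = size_gb * 8e9
        return bits / (rate_mbps * 1e6) / 3600

    print(f"download to the viewer:         {transfer_hours(FILE_GB, DOWN_MBPS):.1f} h")
    print(f"self-hosted upload, one viewer: {transfer_hours(FILE_GB, UP_MBPS):.1f} h")

Roughly half an hour down but most of a working day up, per viewer,
which is why the upstream side decides whether self-hosting is viable.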

Since most ISPs are now owned by telcos and since most telcos have 
lots of strategically located buildings with empty space caused by
physical shrinkage of switching equipment, you would think that 
everybody on this list would be thinking about how to integrate all
these data center pods into their networks.

> So what I'm thinking of is a device that is doing the 
> equivalent of being a "personal video assistant" on the 
> Internet.  And I believe it is coming.  Something that's 
> capable of searching out and speculatively downloading the 
> things it thinks you might be interested in.  Not some 
> techie's cobbled together PC with BitTorrent and HDMI 
> outputs. 

Speculative downloading is the key here, and I believe that
cobbled together boxes will end up doing the same thing.
However, this means that any given content file will be
going to a much larger number of endpoints, which is something
that P2P handles quite well. P2P software is a form of multicast
as is a CDN (Content Delivery Network) like Akamai. Just because
IP multicast is built into the routers does not make it the
best way to multicast content. Given that widespread IP multicast
will *NOT* happen without ISP investment, and that it potentially
impacts every router in the network, I think it is at a disadvantage
compared with P2P or with systems that rely on a few strategically
placed middleboxes, such as caching proxies.
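
To see why a cache or a P2P swarm behaves like multicast from the point
of view of the expensive link, here is a trivial sketch. The viewer
count, file size and hit ratios are invented, and the ideal case where
each copy crosses the core exactly once is never fully reached in
practice.

    # Idealized comparison of core/origin traffic for one popular file.
    # All numbers are invented for illustration.

    VIEWERS = 5_000
    FILE_GB = 1.5

    def core_traffic_gb(viewers: int, file_gb: float, hit_ratio: float) -> float:
        """GB pulled across the core when `hit_ratio` of requests are served
        from a nearby cache or peer instead of from the origin."""
        return viewers * file_gb * (1 - hit_ratio)

    for hit in (0.0, 0.9, 0.99):   # plain unicast, decent cache, good P2P swarm
        print(f"local hit ratio {hit:4.0%}: "
              f"{core_traffic_gb(VIEWERS, FILE_GB, hit):8,.0f} GB across the core")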

> The hardware specifics of this is getting a bit off-topic, at 
> least for this list.  Do we agree that there's a potential 
> model in the future where video may be speculatively fetched 
> off the Internet and then stored for possible viewing, and if 
> so, can we refocus a bit on that?

I can only see this speculative fetching working if it is properly
implemented to minimize its impact on the network. Millions of unicast
streams or FTP downloads in one big exaflood will kill speculative
fetching. If the content senders create an exaflood, then the audience
will not get the kind of experience that they expect, and will go
elsewhere.
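
One way to read "properly implemented" is a prefetcher that only pulls
speculative content off-peak and at a capped rate. The sketch below is
hypothetical: the off-peak window, the rate cap and the caller-supplied
fetch_chunk function are my own inventions for illustration.

    # Hypothetical polite prefetcher: downloads only in an assumed off-peak
    # window and caps its own rate, so speculation stays off the evening peak.

    import time
    from datetime import datetime
    from typing import Callable, List

    OFF_PEAK_HOURS = range(1, 6)   # 01:00-05:59 local time (assumed window)
    RATE_CAP_MBPS = 2.0            # assumed self-imposed ceiling
    CHUNK_BYTES = 1_000_000

    def in_off_peak() -> bool:
        return datetime.now().hour in OFF_PEAK_HOURS

    def prefetch(queue: List[str], fetch_chunk: Callable[[str, int], bool]) -> None:
        """Drain a queue of candidate URLs one rate-limited chunk at a time,
        stopping as soon as the off-peak window closes. fetch_chunk(url, n)
        transfers up to n bytes and returns True when that URL is complete."""
        min_seconds_per_chunk = (CHUNK_BYTES * 8) / (RATE_CAP_MBPS * 1e6)
        while queue and in_off_peak():
            started = time.monotonic()
            if fetch_chunk(queue[0], CHUNK_BYTES):
                queue.pop(0)
            # sleep off any remaining time so we never exceed the rate cap
            elapsed = time.monotonic() - started
            time.sleep(max(0.0, min_seconds_per_chunk - elapsed))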

We had this experience recently in the UK when they opened a new
terminal at Heathrow airport and British Airways moved operations to
T5 overnight. The exaflood of luggage was too much for the system, and
it has taken weeks to get to a level of service that people still
consider "bad service" but bearable. They had so much misplaced
luggage that they sent many truckloads of it to Italy to be sorted and
returned to the owners. One of my colleagues claims that the only
reason the terminal is now half-way functional is that many travellers
are afraid to take any luggage at all except for carry-on. So far two
executives of the airline have been sacked and the government is being
lobbied to break the airport operator monopoly so that at least one of
London's two major airports is run by a different company.

The point is that only the most stupid braindead content provider
executive would unleash something like that upon their company by
creating an exaflood. Personally I think the optimal solution is a
form of P2P that is based on published standards, with open source
implementations, and relies on a topology guru inside each ISP's
network to inject traffic policy information into the system.
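
For concreteness, here is a sketch of what that topology guru could
look like from a P2P client's point of view: the ISP publishes a
preference score per prefix and the client ranks candidate peers by
it. The prefixes, scores and peer addresses are hypothetical (the
prefixes are RFC 5737 documentation ranges), and real policy injection
would need a proper protocol rather than a hard-coded table.

    # Hypothetical sketch: the ISP's "topology guru" publishes a preference
    # score per source prefix (higher = cheaper for the ISP to carry) and the
    # P2P client sorts candidate peers by it before opening connections.

    import ipaddress

    POLICY = {
        ipaddress.ip_network("192.0.2.0/24"):    100,  # "on-net" (documentation prefix)
        ipaddress.ip_network("198.51.100.0/24"):  50,  # "domestic peering" (documentation prefix)
    }
    DEFAULT_SCORE = 10                                 # everything else: "transit"

    def score(peer_ip: str) -> int:
        addr = ipaddress.ip_address(peer_ip)
        for prefix, value in POLICY.items():   # prefixes do not overlap here
            if addr in prefix:
                return value
        return DEFAULT_SCORE

    def rank_peers(candidates):
        """Prefer peers the ISP says are cheap to reach."""
        return sorted(candidates, key=score, reverse=True)

    print(rank_peers(["203.0.113.7", "192.0.2.42", "198.51.100.9"]))
    # -> ['192.0.2.42', '198.51.100.9', '203.0.113.7']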

--Michael Dillon



