The scale of streaming video on the Internet.

Christopher Morrow morrowc.lists at gmail.com
Fri Dec 3 16:08:21 UTC 2010


On Fri, Dec 3, 2010 at 10:47 AM, William Herrin <bill at herrin.us> wrote:

> If the instant problem is that the character of eyeball-level Internet
> service has shifted to include a major component of data which is more
> or less broadcast in nature (some with time shifting, some without),
> there's a purely technical approach that can resolve it: deeply
> deployed content caches.

<snip>
The above is essentially what Akamai (and likely other CDN products)
built/build... From what I understand (purely from the threads here),
Akamai lost out on the Netflix traffic sale to L3's CDN, and Comcast
(for this example) lost the localized in-network caching when that
happened.

Maybe L3 will choose to deploy some of their caches into Comcast (or
other like-minded networks) to make this all work out
better/faster/stronger for the whole set of participants?

> But there's a third mechanism worth considering as well: the caching proxy.

I think that's essentially what Akamai/LLNW are (not quite squid -
Patrick will get all uppity about me calling the Akamai boxes 'souped-up
squid proxies' :) - but it's a simple model to keep in mind; see the toy
sketch below).

Apparently Google-Global-Cache is somewhat like this as well, no?
<http://www.afnog.org/afnog2008/.../Google-AFNOG-presentation-public.pdf>

Admittedly these are 'owner-specific' solutions, but they do what you
propose at the cost of a few gig links (or 10G links, depending on the
deployment) in the provider's network - all "local" and "cheap"
interfaces, not long-haul, and close to the consumer of the data.
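
(To make the 'souped-up squid' mental model concrete, here is a toy
sketch - purely illustrative, every name in it is invented, and the
real boxes obviously do far more than this:)

# Toy edge cache: serve popular objects from local storage, fetch from
# the origin on a miss, evict least-recently-used objects when full.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity_bytes, fetch_from_origin):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = OrderedDict()            # url -> content, in LRU order
        self.fetch_from_origin = fetch_from_origin

    def get(self, url):
        if url in self.store:                 # hit: stays on the cheap,
            self.store.move_to_end(url)       # local interfaces
            return self.store[url]
        content = self.fetch_from_origin(url)     # miss: long-haul/transit fetch
        self._insert(url, content)
        return content

    def _insert(self, url, content):
        while self.store and self.used + len(content) > self.capacity:
            _, evicted = self.store.popitem(last=False)    # drop least-recent
            self.used -= len(evicted)
        self.store[url] = content
        self.used += len(content)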

> Perhaps the eyeball networks should build, standardize and deploy a
> content caching system so that the popular Netflix streams (and the
> live broadcast streams) can usually get their traffic from a local
> source. Deploy a cache to the neighborhood box and a bigger one to the
> local backend. Then organize your peering so that it's _less
> convenient_ to request large bandwidths than to write your software so
> it employs the content caches.

This brings with it an unstated complication: the content owner (Netflix
in this example) now depends upon some 'service' in the network
(Comcast in this example) being up/operational/provisioned properly
in order to deliver service to the end user (the Comcast customer in
this example), even though Netflix and Comcast may have no actual
relationship.

Expand this to PornTube/JustinTV/etc. or something similar: how do
these content owners assure (and measure, and route around, in the case
of deviation from acceptable numbers) that the SLA their customers
expect is being respected by the intermediate network(s)?

How does this play out if Comcast (in this example) ends up being just
a transit network for another downstream ISP?

The owner-specific solutions today probably include some form of SLA
measurement/monitoring and problem avoidance - I believe Akamai's does,
at least. That sort of thing would have to be open/available in the
'content-owner-neutral' solutions as well.
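
(What I have in mind is roughly this sort of logic on the content
owner's side - probe the in-network cache and steer clients back to
direct delivery when the numbers deviate from the SLA. The thresholds
and URLs below are invented; a real Akamai-style system is obviously
far more involved:)

# Toy route-around: measure success rate and latency against an
# (invented) SLA target, fall back to direct delivery if it's missed.
import time
import urllib.request

SLA_MIN_SUCCESS = 0.99        # made-up SLA numbers
SLA_MAX_LATENCY_S = 0.200

def probe(url, attempts=20):
    ok, latencies = 0, []
    for _ in range(attempts):
        start = time.monotonic()
        try:
            urllib.request.urlopen(url, timeout=2).read(1024)
            ok += 1
            latencies.append(time.monotonic() - start)
        except OSError:
            pass                              # count as a failed probe
    success = ok / attempts
    latency = sum(latencies) / len(latencies) if latencies else float("inf")
    return success, latency

def choose_delivery(cache_probe_url):
    success, latency = probe(cache_probe_url)
    if success >= SLA_MIN_SUCCESS and latency <= SLA_MAX_LATENCY_S:
        return "in-network cache"
    return "direct to customer"               # route around the degraded cache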

Oh, and how do you deconflict situations where two content owners are
using the 'service' in Comcast, but one is "abusing" the service?
Should the content owners expect an 'equal share', or how does that
work? Resources on the cache system are obviously at a premium; if
Netflix overruns its share (due to its customers demanding a wider
spread of higher-resource content - say HD 1080p streams with a 'less
optimal' codec in use...), how does JustinTV deal with this? Do they
then shift their streams to more direct-to-customer delivery, not via
the cache system? That (potentially) increases their transit costs and
the costs on Comcast at the peering locations.
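
(The crude version of what I'm picturing for the 'share' question: a
weighted split of the cache's capacity per content owner, with anything
over quota pushed back out to transit. Numbers and names are invented
purely for illustration:)

# Toy capacity split between content owners sharing one cache node.

def fair_shares(total_bytes, weights):
    """weights: {owner: relative weight} -> {owner: byte quota}"""
    total_weight = sum(weights.values())
    return {owner: total_bytes * w // total_weight
            for owner, w in weights.items()}

def admit(owner, object_bytes, usage, quotas):
    """Cache the object only if the owner stays under its quota."""
    if usage.get(owner, 0) + object_bytes > quotas[owner]:
        return False          # over its share: deliver via transit instead
    usage[owner] = usage.get(owner, 0) + object_bytes
    return True

# e.g. split 10 TB of cache 2:1 between two owners
quotas = fair_shares(10 * 2**40, {"netflix": 2, "justintv": 1})
usage = {}
admit("netflix", 4 * 2**30, usage, quotas)    # a 4 GB title fits

How the quotas (and the over-quota behaviour) get negotiated between
parties with no direct relationship is exactly the open question above.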

-Chris



