The scale of streaming video on the Internet.

William Herrin bill at herrin.us
Fri Dec 3 15:47:44 UTC 2010


On Thu, Dec 2, 2010 at 3:28 PM, Owen DeLong <owen at delong.com> wrote:
> On Dec 2, 2010, at 12:21 PM, Leo Bicknell wrote:
>> Sunday Night Football at the top last week, with 7.1% of US homes
>> watching.  That's over 23 times as many folks watching as the 0.3% in
>> our previous math!  Ok, 23 times 150Gbps.
>>
>> 3.45Tb/s.
>>
>> Yowzer.  That's a lot of data.  345 10GE ports for a SINGLE TV show.
>
> You are assuming the absence of any of the following optimizations:
>
> 1.      Multicast
> 2.      Overlay networks using P2P services (get parts of your stream
>        from some of your neighbors).

Leo and Owen:

Thank you for reminding us to look at the other side of the problem.
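
And for what it's worth, Leo's arithmetic holds up. A quick sketch,
taking his 150 Gbps figure for the 0.3% audience and 10GE ports as
given:

    # Back-of-envelope check of Leo's numbers.
    baseline_share = 0.3     # percent of US homes in the earlier math
    peak_share = 7.1         # percent watching Sunday Night Football
    baseline_gbps = 150.0    # aggregate bandwidth for the 0.3% audience

    multiplier = int(peak_share / baseline_share)   # 23, as in Leo's math
    aggregate_gbps = multiplier * baseline_gbps     # 23 * 150 = 3450
    ports_10ge = aggregate_gbps / 10                # 345 ports

    print(multiplier, aggregate_gbps / 1000.0, ports_10ge)
    # 23  3.45 (Tb/s)  345.0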

If the instant problem is that the character of eyeball-level Internet
service has shifted to include a major component of data which is more
or less broadcast in nature (some with time shifting, some without),
then there's a purely technical approach that can resolve it: deeply
deployed content caches.

Multicasting presents some difficult issues even with live broadcasts,
and it doesn't work at all for time-shifted delivery (someone else
starts watching the same movie 5 minutes later). As for P2P...
seriously? I know a couple of companies have tinkered with the idea,
but even if you could get good algorithms for identifying the least
consumptive source, it still seems like granting random strangers the
use of your computer as a condition of service would get real old real
fast.

But there's a third mechanism worth considering as well: the caching proxy.

Perhaps the eyeball networks should build, standardize and deploy a
content caching system so that the popular Netflix streams (and the
live broadcast streams) can usually get their traffic from a local
source. Deploy a cache to the neighborhood box and a bigger one to the
local backend. Then organize your peering so that it's _less
convenient_ for a content provider to request a big pipe than to write
its software to use the content caches.
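
A minimal sketch of what the lookup path might look like, in Python,
with made-up names and no standard behind it:

    # Hypothetical two-tier lookup: neighborhood cache first, then the
    # regional backend, and only then the origin across the peering edge.
    class Cache:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent
            self.store = {}

        def fetch(self, key, origin_fetch):
            if key in self.store:
                return self.store[key]       # served locally
            if self.parent is not None:
                data = self.parent.fetch(key, origin_fetch)
            else:
                data = origin_fetch(key)     # last resort: leave the network
            self.store[key] = data           # populate on the way back down
            return data

    backend = Cache("regional-backend")
    neighborhood = Cache("neighborhood-box", parent=backend)
    chunk = neighborhood.fetch("title-123/chunk-0007",
                               lambda k: b"...bytes from the origin...")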

Maybe even make that a type of open peering: we'll give all comers any
size port they want, but address-constrained so that it can only talk
to our content caches.
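
The address constraint itself is the easy part; a sketch of the filter
logic, using a documentation prefix to stand in for the real cache
addresses (the real thing would be an ACL at the peering edge):

    # "Talk only to our content caches": permit traffic to the cache
    # prefix, drop everything else. 192.0.2.0/24 is illustrative only.
    import ipaddress

    CACHE_PREFIX = ipaddress.ip_network("192.0.2.0/24")

    def permitted(dst_ip):
        return ipaddress.ip_address(dst_ip) in CACHE_PREFIX

    print(permitted("192.0.2.17"))    # True: one of our content caches
    print(permitted("198.51.100.5"))  # False: everything else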

Technology like the web proxy has some obvious deficiencies.
Implemented transparently, it reduces the reliability of your web
access. Implemented by configuration, finding the best proxy is a
hassle. Either way, no real thought has been put into how to detect
that a proxy is misbehaving and bypass it in a timely manner. It just
isn't as resilient as a bare Internet connection to the remote server.

But with a content cache designed to implement a near-real-time
caching protocol from the ground up, these are all solvable problems.
Use anycast to find the nearest cache and unicast to talk to it. Use
UDP to communicate, and escalate lost, delayed or corrupted packets to
a higher-level cache or even the remote server. Trade auth and
decryption keys with the remote server before fetching from the local
cache. And so on.
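
A rough sketch of the escalation idea, with made-up addresses (the
192.0.2.x and 203.0.113.x blocks are documentation prefixes) and a
trivial wire format:

    # Try the nearest cache, escalate on loss or delay. Addresses, port
    # and message format are all illustrative.
    import socket

    TIERS = [("192.0.2.10", 4321),    # anycast neighborhood cache
             ("192.0.2.20", 4321),    # regional backend cache
             ("203.0.113.5", 4321)]   # remote origin server

    def fetch_chunk(key, timeout=0.25):
        for addr in TIERS:                      # escalate up the hierarchy
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.settimeout(timeout)
            try:
                s.sendto(key.encode(), addr)
                data, _ = s.recvfrom(65535)
                return data                     # first responsive tier wins
            except socket.timeout:
                continue                        # lost or late: next tier up
            finally:
                s.close()
        raise IOError("no cache or origin answered for " + key)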


So: build a content caching system that gives you a multiplier effect,
reducing bandwidth aggregates to a reasonable level. Then organize
your peering process so that, whenever technically possible, it's
always more convenient to use your caching system than to request a
bigger pipe.
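
To put a number on that multiplier: assume, say, a 95% hit rate at the
local caches (my guess, not a measurement) and Leo's single-show peak
collapses to something an ordinary peering fabric can carry:

    # Illustrative only: the hit rate is assumed, not measured.
    aggregate_tbps = 3.45     # Leo's single-show peak
    hit_rate = 0.95           # assumed fraction served from local caches

    upstream_gbps = aggregate_tbps * (1 - hit_rate) * 1000
    print("%.1f Gb/s crosses the peering edge" % upstream_gbps)
    # -> 172.5 Gb/s instead of 3450 Gb/s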

You'll still have to eventually address the fairness issues associated
with Network Neutrality. But having provided a reasonable technical
solution, you can do it without the bugaboo of network video breathing
down your neck. And, oh by the way, you can deny your competitors
Netflix's business, since they'll no longer need quite such huge
bandwidth after employing your technology.

Here's hoping nobody offers me a refund on my two cents...

Regards,
Bill Herrin


-- 
William D. Herrin ................ herrin at dirtside.com  bill at herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004



