Network end users to pull down 2 gigabytes a day, continuously?

Andrew Odlyzko odlyzko at dtc.umn.edu
Sat Jan 6 15:09:19 UTC 2007


A remark and a question:

1.  2 GB/day per user would indeed require tossing everyone's CURRENT
baseline network usage metrics out the window, IF IT WERE TO BE ACHIEVED
INSTANTANEOUSLY.  The key question is, how quickly and widely will this 
application spread?  

Back in 1997, when I first started collecting Internet usage statistics,
there were concerns that pre-fetching applications like WebWhacker (anyone
remember that?) would lead to a collapse of networks and business plans.
With flat rate dial access, staying connected for 24 hours per day would
have (i) exhausted the modem pools, which were built on a 5-10 oversubscription
ratio, and (ii) broken the aggregation and backbone networks, generating
about 240 MB/day of traffic per subscriber (on a 19.2 Kbps modem, about
standard then).  But the average user was online just 1 hour per day, and
download traffic was about 2 Kbps during that hour, leading to about 1 MB/day
of traffic, and the world did not come to a halt.  (And yes, I am suppressing
some details, such as ISPs' TOSs forbidding applications like WebWhacker, and
technical measures to keep them limited.)
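
(A quick back-of-the-envelope check of those figures, sketched in Python; it
assumes decimal megabytes and ignores protocol overhead, so the results are
only order-of-magnitude:)

    # Sanity-check the dial-up figures above (decimal MB, no protocol overhead).
    SECONDS_PER_DAY = 24 * 3600

    modem_bps = 19_200                                      # 19.2 Kbps modem
    always_on_mb = modem_bps / 8 * SECONDS_PER_DAY / 1e6
    print(f"24x7 at line rate: ~{always_on_mb:.0f} MB/day")  # ~207 MB/day, same order as the ~240 MB/day above

    actual_bps = 2_000                                       # ~2 Kbps average during the online hour
    typical_mb = actual_bps / 8 * 3600 / 1e6
    print(f"One hour at ~2 Kbps: ~{typical_mb:.1f} MB/day")  # ~0.9 MB/day, i.e. about 1 MB/day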

Today, download rates per broadband subscriber range (among the few industrialized
countries for which I have data or at least decent estimates) from about 60 MB/day in
Australia to 1 GB/day in Hong Kong.  So 2 GB/day is not that far out of range for
Hong Kong (or South Korea) even today.  And in a few years (which is what you
always have to allow for, even Napster and Skype did not take over the world
in the proverbial "Internet time" of 8 months or less), other places might
catch up.

2.  The question I don't understand is, why stream?  These days, when a
terabyte disk for consumer PCs is about to be introduced, why bother with
streaming?  It is so much simpler to download (at faster than real-time rates,
if possible) and play it back.

Andrew






  > On Sat, 6 Jan 2007, Marshall Eubanks wrote:

  Note that 220 MB per hour (ugly units) is 489 Kbps, slightly less
  than our current usage.
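
  (Checking that conversion, as a quick Python sketch; it assumes decimal
  megabytes, which appears to be the convention used here:)

      # 220 MB viewed per hour, expressed as an average bitrate
      mb_per_hour = 220
      bits = mb_per_hour * 1e6 * 8        # decimal MB -> bits
      kbps = bits / 3600 / 1000
      print(f"~{kbps:.0f} Kbps")          # ~489 Kbps, matching the figure above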

  > The more popular the content is, the more sources it can be pulled from
  > and the less redundant data we send, and that number can be as low as
  > 220MB per hour viewed. (Actually, I find this a tough thing to explain
  > to people in general; it's really counterintuitive to see that more
  > peers == less bandwidth - I'm still searching for a useful user-facing
  > metaphor, anyone got any ideas?).

  Why not just say, the more peers, the more efficient it becomes as it
  approaches the bandwidth floor set by the chosen streaming rate?

  Regards
  Marshall

  On Jan 6, 2007, at 9:07 AM, Colm MacCarthaigh wrote:

  >
  > On Sat, Jan 06, 2007 at 03:18:03AM -0500, Robert Boyle wrote:
  >> At 01:52 AM 1/6/2007, Thomas Leavitt <thomas at thomasleavitt.org>
  >> wrote:
  >>> If this application takes off, I have to presume that everyone's
  >>> baseline network usage metrics can be tossed out the window...
  >
  > That's a strong possibility :-)
  >
  > I'm currently the network person for The Venice Project, and busy
  > building out our network, but also involved in the design and planning
  > work and a bunch of other things.
  >
  > I'll try and answer any questions I can. I may be a little restricted in
  > revealing details of forthcoming developments and so on, so please
  > forgive me if there's later something I can't answer, but for now I'll
  > try and answer any of the technicalities. Our philosophy is to be pretty
  > open about how we work and what we do.
  >
  > We're actually working on more general purpose explanations of all this,
  > which we'll be putting on-line soon. I'm not from our PR dept, or a
  > spokesperson, just a long-time NANOG reader and occasional poster
  > answering technical stuff here, so please don't just post the archive
  > link to digg/slashdot or whatever.
  >
  > The Venice Project will affect network operators and we're working on a
  > range of different things which may help out there.  We've designed our
  > traffic to be easily categorisable (I wish we could mark a DSCP, but the
  > levels of access needed on some platforms are just too restrictive) and
  > we know how the real internet works. Already we have aggregate per-AS
  > usage statistics, and have some primitive network proximity clustering.
  > AS-level clustering is planned.
  >
  > This will reduce transit costs, but there's not much we can do for other
  > infrastructural, L2 or last-mile costs. We're L3 and above only.
  > Additionally, we predict a healthy chunk of usage will go to our "Long
  > tail servers", which are explained a bit here;
  >
  > 	http://www.vipeers.com/vipeers/2007/01/venice_project_.html
  >
  > and in the next 6 months or so, we hope to turn up at IX's and arrange
  > private peerings to defray the transit cost of that traffic too.
  > Right now, our main transit provider is BT (AS5400) who are at some
  > well-known IX's.
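
  (To make the AS-level clustering idea concrete, here is a purely hypothetical
  peer-ranking sketch in Python. It is my own illustration, not anything The
  Venice Project has described: a client prefers peers inside its own AS, then
  peers in ASes it has settlement-free peering with, and only then falls back
  to paid transit. The ASNs and addresses are documentation values.)

      # Hypothetical illustration of AS-aware peer selection, not the actual
      # Venice Project algorithm.
      def rank_peers(peers, my_asn, peered_asns):
          def cost(peer):
              if peer["asn"] == my_asn:
                  return 0      # traffic stays on the access network
              if peer["asn"] in peered_asns:
                  return 1      # settlement-free peering, no transit bill
              return 2          # paid transit
          return sorted(peers, key=cost)

      peers = [{"ip": "198.51.100.7", "asn": 64500},
               {"ip": "203.0.113.9", "asn": 64496},
               {"ip": "192.0.2.44", "asn": 64511}]
      print(rank_peers(peers, my_asn=64496, peered_asns={64500}))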
  >
  >> Interesting. Why does it send so much data?
  >
  > It's full-screen TV-quality video :-) After adding all the overhead for
  > p2p protocol and stream resilience we still only use a maximum of 320MB
  > per viewing hour.
  >
  > The more popular the content is, the more sources it can be pulled from
  > and the less redundant data we send, and that number can be as low as
  > 220MB per hour viewed. (Actually, I find this a tough thing to explain
  > to people in general; it's really counterintuitive to see that more
  > peers == less bandwidth - I'm still searching for a useful user-facing
  > metaphor, anyone got any ideas?).
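
  (One toy way to picture the "more peers == less bandwidth" point, sketched
  in Python. This is an illustrative model built from the two figures quoted
  in this thread, not the actual protocol: every viewer has to receive the
  stream floor regardless, and what shrinks as more sources appear is the
  redundant data sent on top of that floor.)

      # Toy model only, not The Venice Project's protocol: per-viewer traffic
      # is a fixed stream floor plus redundancy amortised over the number of
      # distinct sources a piece can be fetched from.
      FLOOR_MB_PER_HOUR = 220    # best case quoted in the thread
      MAX_MB_PER_HOUR = 320      # worst case quoted in the thread

      def mb_per_viewing_hour(sources):
          redundancy = (MAX_MB_PER_HOUR - FLOOR_MB_PER_HOUR) / sources
          return FLOOR_MB_PER_HOUR + redundancy

      for n in (1, 2, 5, 20):
          print(n, "sources ->", round(mb_per_viewing_hour(n)), "MB/hour")
      # 1 -> 320, 2 -> 270, 5 -> 240, 20 -> 225: approaching the 220 MB floor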
  >
  > To put that in context; a 45 minute episode grabbed from a file-sharing
  > network will generally eat 350MB on-disk, obviously slightly more is
  > used after you account for even the 2% TCP/IP overhead and p2p protocol
  > headers. And it will usually take longer than 45 minutes to get there.
  >
  > Compressed digital television works out at between 900MB and 3GB an hour
  > viewed (raw is in the tens of gigabytes). DVD is of the same order.
  > YouTube works out at about 80MB to 230MB per-hour, for a mini-screen
  > (though I'm open to correction on that, I've just multiplied the
  > bitrates out).
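
  (For comparison, the per-hour figures above expressed as average bitrates,
  via a small Python sketch assuming decimal megabytes:)

      # Convert MB-per-viewing-hour figures from this thread into average Mbps.
      figures = {
          "Venice Project (floor)": 220,
          "Venice Project (max)": 320,
          "Digital TV (low)": 900,
          "Digital TV (high)": 3000,
          "YouTube (low)": 80,
          "YouTube (high)": 230,
      }
      for name, mb_per_hour in figures.items():
          mbps = mb_per_hour * 8 / 3600    # decimal MB -> megabits per second
          print(f"{name}: ~{mbps:.2f} Mbps")
      # Venice ~0.49-0.71, digital TV ~2.0-6.7, YouTube ~0.18-0.51 Mbps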
  >
  >> Is it a peer to peer type of system where it redistributes a portion
  >> of the stream as you are viewing it to other users?
  >
  > Yes, though not necessarily as you are viewing it. A proportion of what
  > you have viewed previously is cached and can be made available to other
  > peers.
  >
  > --
  > Colm MacCárthaigh                        Public Key: colm+pgp at stdlib.net



