Network end users to pull down 2 gigabytes a day, continuously?
frnkblk at iname.com
Sun Jan 7 02:46:41 UTC 2007
What does the Venice project see in terms of the number of upstreams
required to feed one view, and how much does the size of upstream pipe
affect this all? Do you see trends where 10 upstreams can feed one view if
they are at 100 kbps each, as opposed to 5 upstreams at 200 kbps each, or is
there no tight relation? Supposedly FTTH-rich countries contribute much more
to P2P networks because they have a symmetrical connection and are more
attractive to the P2P clients.
And how much does being in the same AS help compared to being geographically
or hop-wise apart?
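The arithmetic behind that question can be sketched quickly. This is a minimal back-of-envelope model, not anything from the Venice Project: it assumes only aggregate upload matters, ignoring per-peer protocol overhead and churn, and the bitrates are illustrative.

```python
import math

# Hedged sketch: minimum upstream peers needed to feed one view, assuming
# the stream is satisfied whenever the peers' combined upload covers the
# stream bitrate. All figures below are assumptions for illustration.

def upstreams_needed(stream_kbps: float, per_peer_kbps: float) -> int:
    """Smallest peer count whose combined upload rate covers the stream."""
    return math.ceil(stream_kbps / per_peer_kbps)

# Under this naive model, 10 peers at 100 kbps and 5 peers at 200 kbps
# both supply the same 1 Mbps aggregate:
assert upstreams_needed(1000, 100) == 10
assert upstreams_needed(1000, 200) == 5
```

In practice the relation is unlikely to be this tight, since fewer, faster peers mean fewer connections to maintain but also less resilience when one of them leaves.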
From: owner-nanog at merit.edu [mailto:owner-nanog at merit.edu] On Behalf Of Colm
Sent: Saturday, January 06, 2007 8:08 AM
To: Robert Boyle
Cc: Thomas Leavitt; nanog at merit.edu
Subject: Re: Network end users to pull down 2 gigabytes a day, continuously?
On Sat, Jan 06, 2007 at 03:18:03AM -0500, Robert Boyle wrote:
> At 01:52 AM 1/6/2007, Thomas Leavitt <thomas at thomasleavitt.org> wrote:
> >If this application takes off, I have to presume that everyone's
> >baseline network usage metrics can be tossed out the window...
That's a strong possibility :-)
I'm currently the network person for The Venice Project, and busy
building out our network, but also involved in the design and planning
work and a bunch of other things.
I'll try and answer any questions I can. I may be a little restricted in
revealing details of forthcoming developments and so on, so please
forgive me if there's something I can't answer later, but for now I'll
try and answer any of the technicalities. Our philosophy is to be pretty
open about how we work and what we do.
We're actually working on more general purpose explanations of all this,
which we'll be putting on-line soon. I'm not from our PR dept, or a
spokesperson, just a long-time NANOG reader and occasional poster
answering technical stuff here, so please don't just post the archive
link to digg/slashdot or whatever.
The Venice Project will affect network operators and we're working on a
range of different things which may help out there. We've designed our
traffic to be easily categorisable (I wish we could mark a DSCP, but the
levels of access needed on some platforms are just too restrictive) and
we know how the real internet works. Already we have aggregate per-AS
usage statistics, and have some primitive network proximity clustering.
AS-level clustering is planned.
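One way to picture AS-level clustering is as a preference order over candidate peers. The sketch below is purely illustrative and not the Venice Project's actual implementation: the peer records, the ASN lookup, and the use of RTT as a tiebreaker are all assumptions.

```python
# Hypothetical peer-selection sketch: rank candidate peers so that peers
# in the viewer's own AS are tried first, breaking ties by measured RTT.
# Sorting on a (bool, number) tuple puts False (same AS) before True.

def rank_peers(peers, my_asn):
    """Order candidates: same-AS first, then lowest round-trip time."""
    return sorted(peers, key=lambda p: (p["asn"] != my_asn, p["rtt_ms"]))

# Example candidates (documentation/example addresses, made-up ASNs):
candidates = [
    {"addr": "198.51.100.7", "asn": 5400, "rtt_ms": 40},
    {"addr": "203.0.113.9", "asn": 64512, "rtt_ms": 12},
    {"addr": "192.0.2.3", "asn": 5400, "rtt_ms": 25},
]
ranked = rank_peers(candidates, my_asn=5400)
# Same-AS peers come first even though the foreign-AS peer has lower RTT.
```

Pulling traffic from same-AS peers is what keeps it off transit links, which is the cost-reduction mechanism described above.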
This will reduce transit costs, but there's not much we can do for other
infrastructural, L2 or last-mile costs. We're L3 and above only.
Additionally, we predict a healthy chunk of usage will go to our "Long
tail servers", which are explained a bit here;
and in the next 6 months or so, we hope to turn up at IX's and arrange
private peerings to defray the transit cost of that traffic too.
Right now, our main transit provider is BT (AS5400) who are at some
> Interesting. Why does it send so much data?
It's full-screen TV-quality video :-) After adding all the overhead for
p2p protocol and stream resilience we still only use a maximum of 320MB
per viewing hour.
The more popular the content is, the more sources it can be pulled from
and the less redundant data we send, and that number can be as low as
220MB per hour viewed. (Actually, I find this a tough thing to explain
to people in general; it's really counterintuitive to see that more
peers == less bandwidth - I'm still searching for a useful user-facing
metaphor, anyone got any ideas?).
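Converting those per-hour figures into average line rates is simple arithmetic (megabytes to megabits, divided by seconds in an hour); the figures are the 320MB and 220MB per viewing hour quoted above.

```python
# Back-of-envelope: average bitrate implied by a megabytes-per-viewing-hour
# figure. 1 MB = 8 Mbit; one hour = 3600 seconds.

def mb_per_hour_to_mbps(mb_per_hour: float) -> float:
    """Average megabits per second for a given MB-per-hour figure."""
    return mb_per_hour * 8 / 3600

worst = mb_per_hour_to_mbps(320)  # unpopular content, more redundant data
best = mb_per_hour_to_mbps(220)   # popular content, many peers
# Roughly 0.71 Mbps worst case vs 0.49 Mbps best case, averaged over a
# viewing hour -- comfortably within a typical DSL downstream.
```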
To put that in context; a 45 minute episode grabbed from a file-sharing
network will generally eat 350MB on-disk, obviously slightly more is
used after you account for even the 2% TCP/IP overhead and p2p protocol
headers. And it will usually take longer than 45 minutes to get there.
Compressed digital television works out at between 900MB and 3GB an hour
viewed (raw is in the tens of gigabytes). DVD is of the same order.
YouTube works out at about 80MB to 230MB per-hour, for a mini-screen
(though I'm open to correction on that, I've just multiplied the
> Is it a peer to peer type of system where it redistributes a portion
> of the stream as you are viewing it to other users?
Yes, though not necessarily as you are viewing it. A proportion of what
you have viewed previously is cached and can be made available to other
users.
Colm MacCárthaigh Public Key: colm+pgp at stdlib.net