Network end users to pull down 2 gigabytes a day, continuously?

Colm MacCarthaigh colm at stdlib.net
Sun Jan 7 13:50:08 UTC 2007


On Sat, Jan 06, 2007 at 08:46:41PM -0600, Frank Bulk wrote:
> What does the Venice project see in terms of the number of upstreams
> required to feed one view, 

At least 3, but more can participate to improve resilience against
partial stream loss.
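To make the resilience point concrete, here is a minimal sketch (illustrative only, not the Venice Project's actual protocol) of why splitting a stream across three or more upstreams limits the damage from losing any one of them: with packets assigned round-robin across N peers, a failed peer costs roughly 1/N of the stream until a replacement is negotiated. The peer names and assignment scheme are assumptions for illustration.

```python
# Hypothetical sketch: packets dealt round-robin across N upstream
# peers, so losing one peer loses only ~1/N of packets in flight.
def assign_substreams(packet_indices, peers):
    """Map each packet index to a peer, round-robin."""
    return {i: peers[i % len(peers)] for i in packet_indices}

peers = ["peerA", "peerB", "peerC"]
assignment = assign_substreams(range(9), peers)

# If peerB drops out, only a third of the packets are affected:
lost = [i for i, p in assignment.items() if p == "peerB"]
print(lost)  # -> [1, 4, 7]
```

With more peers participating, the same logic shrinks each peer's share further, which is why extra upstreams improve resilience against partial stream loss.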

> and how much does the size of upstream pipe affect this all?  

If the application doesn't have enough upstream bandwidth to send a
proportion of the stream, then it won't send one. Right now, even if
there were infinite upstream bandwidth, there are hard-coded limits;
we've been changing these slightly as we put the application through
more and more QA. I think right now it's still limited to at most
~220 Kbit/sec, which is what I see on our test cluster, but I'll get
back to you if I'm wrong.

Detecting upstream capacity with UDP streams is always a bit hard. We
don't want to flood the link, and we have other control traffic
(renegotiating peers, grabbing checksums, and so on) which needs to keep
working so that video keeps playing smoothly for the user, which is
what matters more.
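The probing behaviour described above can be sketched as a gentle additive-increase, multiplicative-decrease loop under a hard-coded cap. This is a toy model under my own assumptions (the step sizes, the backoff factor, and the function names are all invented), not the actual client logic; only the ~220 Kbit/sec ceiling comes from the figures above.

```python
# Illustrative sketch (not the real client): ramp the send rate
# additively, back off multiplicatively when loss is seen, and never
# exceed a hard cap, leaving headroom for control traffic.
HARD_CAP_KBIT = 220  # the ~220 Kbit/sec limit mentioned above

def next_rate(rate_kbit, loss_seen, step=10, backoff=0.5):
    if loss_seen:
        return max(rate_kbit * backoff, step)   # halve on loss
    return min(rate_kbit + step, HARD_CAP_KBIT)  # probe upward gently

rate = 50.0
for loss in [False, False, True, False]:
    rate = next_rate(rate, loss)
print(rate)  # -> 45.0 (ramped to 70, halved to 35, then +10)
```

The point of the cap is exactly the one in the text: never let capacity probing crowd out the control traffic that keeps playback smooth.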

If the application is left running long enough with good upstream
bandwidth, it may elect to become a supernode, but that involves
control traffic only, not streaming. Our supernodes are not relays;
they act as coordinators of peers. I don't have hard data yet on how
much bandwidth these use, because it depends on how often people
change channels, fast-forward and that kind of thing, but our own
supernodes, which presently manage the entire network, use about
300 Kbit/sec.

But once again, the realities of the internet mean that in order to
ensure a good user experience, we need to engineer against the lowest
common denominator, not the highest. So if the supernode bandwidth
creeps up, we may have to look at increasing the proportion of
supernodes in the network to bring it back down again, so that
packet-loss from supernodes doesn't become an operational problem.
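The trade-off in the last paragraph is simple arithmetic: if roughly the same total coordination load is spread across more supernodes, each one carries less. The numbers below are illustrative assumptions (the total and peer count are invented), not measurements; only the shape of the calculation reflects the reasoning above.

```python
# Back-of-envelope sketch: raising the supernode fraction lowers
# per-supernode bandwidth. All figures are assumed, not measured.
def per_supernode_kbit(total_coord_kbit, n_peers, supernode_fraction):
    n_super = max(1, int(n_peers * supernode_fraction))
    return total_coord_kbit / n_super

total = 3000.0  # assumed total coordination traffic, kbit/s
print(per_supernode_kbit(total, 1000, 0.01))  # 10 supernodes -> 300.0 each
print(per_supernode_kbit(total, 1000, 0.02))  # 20 supernodes -> 150.0 each
```

Doubling the supernode proportion halves the per-supernode load, which is the lever for keeping supernode packet-loss out of the operational picture.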

> Do you see trends where 10 upstreams can feed one view if
> they are at 100 kbps each as opposed to 5 upstreams at 200 kbps each, or is
> there no tight relation?

We do that now, though our numbers are lower :-)

> Supposedly FTTH-rich countries contribute much more
> to P2P networks because they have a symmetrical connection and are more
> attractive to the P2P clients.  
> 
> And how much does being in the same AS help compare to being geographically
> or hopwise apart?

That we don't yet know for sure. I've been reading a lot of research on
it and doing some experimentation, and there is a high degree of
correlation between intra-AS routing and lower latency and greater
capacity. Certainly a better correlation than geographic proximity.

Using AS proximity is definitely a help for resilience, though:
same-AS and adjacent-AS sources are more likely to remain reachable in
the event of transit problems, general BGP flaps and so on.
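A peer-selection policy built on that observation might rank candidates by AS distance: same AS first, then adjacent ASes, then everyone else. The sketch below is hypothetical (the ASNs are documentation-range examples and the data is hard-coded); a real client would need a prefix-to-AS mapping to classify peers at all.

```python
# Hypothetical peer ranking: prefer same-AS sources, then adjacent-AS,
# then the rest, per the latency/resilience reasoning above.
def rank_peers(peers, local_asn, adjacent_asns):
    def tier(peer):
        if peer["asn"] == local_asn:
            return 0          # same AS: best
        if peer["asn"] in adjacent_asns:
            return 1          # adjacent AS: next best
        return 2              # everything else
    return sorted(peers, key=tier)  # stable sort preserves order in ties

peers = [
    {"name": "far",      "asn": 64500},
    {"name": "adjacent", "asn": 64497},
    {"name": "local",    "asn": 64496},
]
ranked = rank_peers(peers, local_asn=64496, adjacent_asns={64497})
print([p["name"] for p in ranked])  # -> ['local', 'adjacent', 'far']
```

Ranking rather than filtering keeps far-away peers available as fallbacks, which matters when the nearby pool is small.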

-- 
Colm MacCárthaigh                        Public Key: colm+pgp at stdlib.net



More information about the NANOG mailing list