Overall Netflix bandwidth usage numbers on a network?

Andrew Mulholland andy-nanog at bash.sh
Sat Dec 3 00:56:34 UTC 2011

Surely this is what NetFlow is for.

No need to reinvent the wheel.
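The accounting side of a collector is small enough to sketch. Below is an illustrative Python fragment (not a production collector: no sampling correction, no sequence-gap handling, no v9/IPFIX templates) that parses NetFlow v5 export datagrams and sums dOctets per source address:

```python
import struct
import socket
from collections import defaultdict

# NetFlow v5 fixed layouts: 24-byte header, 48-byte flow records.
V5_HEADER = struct.Struct("!HHIIIIBBH")
V5_RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")

def parse_v5(datagram, octets_by_src):
    """Accumulate per-source byte counts from one NetFlow v5 datagram."""
    version, count, *_ = V5_HEADER.unpack_from(datagram, 0)
    if version != 5:
        raise ValueError("not a NetFlow v5 datagram")
    offset = V5_HEADER.size
    for _ in range(count):
        fields = V5_RECORD.unpack_from(datagram, offset)
        srcaddr = socket.inet_ntoa(struct.pack("!I", fields[0]))
        d_octets = fields[6]  # dOctets: bytes in this flow
        octets_by_src[srcaddr] += d_octets
        offset += V5_RECORD.size
    return octets_by_src
```

Dividing the octet totals for the sources you care about by the all-sources total over the same window gives the percentage figure asked for below.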


On Sat, Dec 3, 2011 at 12:47 AM, Jonathan Towne <jtowne at slic.com> wrote:

> Been lurking for a while and posed a question to a few folks without much
> response, figured someone here might've done something like this already.
> So, before I go about building wheels that already exist:
> I'm interested in doing a bit of a passive survey of bandwidth usage on
> my network (smallish ISP, a few thousand DSL/FTTx customers) to understand
> the percentage of average/overall traffic generated by Netflix streaming.
> What I have available is a few gigabit transport switches providing me with
> mirror ports, a Juniper MX series router running 10.4 code, plenty of BSD
> machines and libpcap-fu.
> What I'm looking for is either a timed-average or moment's-glance number
> of the traffic.  For instance, on an interface moving 150mbit/sec total,
> 50mbit/sec of it is attributed to Netflix right now.  I'm pretty handy with
> RRDtool, so that isn't out of the question, either.
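(Whichever counter you end up reading, via SNMP interface octets or a firewall counter, the arithmetic behind that figure is just two samples and a division. A minimal sketch, with function names invented here:)

```python
def rate_bps(prev_octets, curr_octets, interval_s):
    """Two cumulative octet-counter samples -> bits per second.
    (Counter wrap between samples is not handled here.)"""
    return (curr_octets - prev_octets) * 8 / interval_s

def share_pct(part_bps, total_bps):
    """Percentage of total traffic attributed to one class."""
    return 100.0 * part_bps / total_bps if total_bps else 0.0
```

For the example given, share_pct(50e6, 150e6) is about 33.3; both the rate and the share feed straight into an RRD.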
> I've really only spent dinnertime considering this, but have come up with
> two potential approaches so far, and haven't actively investigated either
> of them:
> * firewall terms and counters on the MX router + snmp
> * writing a quick libpcap application to filter and count in a completely
>  out-of-band way on one of my monitoring hosts
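(The counting core of the second approach, classify each packet's source against a prefix list and keep two byte counters, is small enough to sketch. The capture half, libpcap or the BSD bpf device, is omitted here, and the prefixes below are placeholders:)

```python
import ipaddress

# Placeholder prefixes -- substitute the ranges you actually identify.
NETFLIX_PREFIXES = [ipaddress.ip_network(p) for p in
                    ("198.51.100.0/24", "203.0.113.0/24")]

class TrafficCounter:
    """Out-of-band byte accounting: total vs. matched traffic."""

    def __init__(self, prefixes):
        self.prefixes = prefixes
        self.total_bytes = 0
        self.matched_bytes = 0

    def account(self, src_ip, length):
        """Feed one packet's source address and wire length."""
        self.total_bytes += length
        addr = ipaddress.ip_address(src_ip)
        if any(addr in p for p in self.prefixes):
            self.matched_bytes += length

    def share(self):
        """Fraction of observed bytes attributed to the prefix list."""
        return self.matched_bytes / self.total_bytes if self.total_bytes else 0.0
```

A linear scan over a short prefix list is fine at these rates; a radix tree only matters once the list grows large.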
> Some challenges I can see:
> * Nailing down the streaming source for Netflix, that is, IP ranges etc.
> * Making assumptions about CDN source IPs that could be used for something
>  else, and further, should I care?
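(On nailing down the ranges: Netflix originates routes from AS2906, so one starting point is to dump the prefixes that ASN announces, e.g. from route-views or `whois -h whois.radb.net -- '-i origin AS2906'`, and keep them as a collapsed set, refreshed periodically since CDN ranges change. Streams served from shared third-party CDNs will still fall outside that list, which is exactly the "should I care" judgment call above. A sketch with placeholder prefixes:)

```python
import ipaddress

def build_prefix_set(prefixes):
    """Collapse overlapping/adjacent prefixes into a minimal set."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    return list(ipaddress.collapse_addresses(nets))

# Placeholder data standing in for an AS2906 prefix dump.
raw = ["198.51.100.0/25", "198.51.100.128/25", "203.0.113.0/24"]
```

The collapsed set is also what you would paste into an MX firewall-filter prefix list for the counter-based approach.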
> Happy to hear thoughts about this, helpful or not!  I know Netflix
> themselves have probably done plenty of studies like this, but pretty
> likely not limited to my customer base.  Not aiming for anything creepy
> or crazy, just some vague understanding of what's going on, and the
> ability to do some trending for future planning.
> -- Jonathan Towne

More information about the NANOG mailing list