Proving Gig Speed

Jon Meek meekjt at
Mon Jul 16 20:34:17 UTC 2018

On Mon, Jul 16, 2018 at 2:00 PM Chris Gross <CGross at> wrote:

> I'm curious what people here have found as a good standard for providing
> solid speedtest results to customers. All our techs have Dell laptops of
> various models, but we always hit 100% CPU when doing an Ookla speedtest for
> a server we have on site. So then if you have a customer paying for 600M or
> 1000M symmetric, they get mad and demand you prove it's full speed. At that
> point we have to roll out different people with JDSU's to test and prove
> it's functional, where an Ookla result would substitute fine if we didn't
> have such crummy laptops. Even though, from what I can see in some Google
> results, we exceed the standards several providers call for.
> Most of these complaints come from the typical "power" internet user of
> course that never actually uses more than 50M sustained paying for a
> residential connection, so running a circuit test on each turn up is
> uncalled for.
> Anyone have any suggestions of the requirements (CPU/RAM/etc) for a laptop
> that can actually do symmetric gig, a rugged small inexpensive device we
> can roll with instead to prove, or any other weird solution involving
> ritual sacrifice that isn't too offensive to the eyes?

My practice is to use iperf with packet capture on both sides. The packet
capture can then be analyzed for accurate throughput at one-second or finer
resolution, retransmit rates, etc. This was implemented in a corporate network in
several ways including dedicated servers (that also did other monitoring),
and bootable CDs or USB sticks that a user in a small office could run on a
standard desktop. Many interesting issues were discovered with this
technique, and a fair number of perceived issues were debunked.

Here is a wrapper to run iperf + tcpdump on each side of a connection (it
could use some automation):
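(The linked script is not reproduced here, so the following is only a minimal
sketch of such a wrapper, not the original: it starts tcpdump in the
background, runs an iperf3 client, then stops the capture. It assumes iperf3
and tcpdump are on the PATH and run with sufficient privileges; the server
address, interface name, port, and file name are all placeholders.)

```python
import subprocess

def build_commands(server, seconds=30, port=5201, pcap="capture.pcap", iface="eth0"):
    """Build the tcpdump and iperf3 command lines for one side of the test.
    Server, interface, port, and file name are placeholder values."""
    # -s 96 truncates each packet to its headers, keeping the capture small
    capture = ["tcpdump", "-i", iface, "-s", "96", "-w", pcap,
               "port", str(port)]
    test = ["iperf3", "-c", server, "-t", str(seconds), "-p", str(port)]
    return capture, test

def run_test(server, seconds=30, **kw):
    """Start the capture, run the iperf3 test, then stop the capture."""
    capture, test = build_commands(server, seconds, **kw)
    cap = subprocess.Popen(capture)           # packet capture in background
    try:
        subprocess.run(test, check=True)      # throughput test
    finally:
        cap.terminate()                       # stop tcpdump
        cap.wait()
```

The same wrapper would be run on the server side with tcpdump plus
`iperf3 -s`, so that both ends have a capture to compare.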

I originally did the analysis in Perl, but that can be fairly slow when
processing 30 seconds of packets on a saturated GigE link. If anyone is
interested there is now a C++ version along with analysis code in R at:

That version currently has only one-second resolution. I have an R interface
to libpcap files that could be used for analysis at any time resolution:
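(As an illustration of the analysis side, not the linked R code: per-interval
throughput can be computed directly from a capture file by bucketing the
recorded packet lengths by timestamp. The sketch below parses classic
microsecond-timestamp libpcap files with only the Python standard library;
the function name and interval parameter are placeholders.)

```python
import struct
from collections import defaultdict

def pcap_throughput(path, interval=1.0):
    """Return {bucket_start_seconds: bits_per_second} computed from the
    on-the-wire packet lengths recorded in a classic libpcap file."""
    buckets = defaultdict(int)
    with open(path, "rb") as f:
        hdr = f.read(24)                       # pcap global header
        magic, = struct.unpack("<I", hdr[:4])
        if magic == 0xA1B2C3D4:                # native byte order
            fmt = "<IIII"
        elif magic == 0xD4C3B2A1:              # swapped byte order
            fmt = ">IIII"
        else:
            raise ValueError("not a classic pcap file")
        while True:
            rec = f.read(16)                   # per-packet record header
            if len(rec) < 16:
                break
            ts_sec, ts_usec, incl_len, orig_len = struct.unpack(fmt, rec)
            t = ts_sec + ts_usec / 1e6
            buckets[int(t // interval) * interval] += orig_len
            f.seek(incl_len, 1)                # skip the captured bytes
    return {b: n * 8 / interval for b, n in sorted(buckets.items())}
```

Shrinking `interval` gives sub-second resolution from the same capture, which
is what makes the pcap-based approach more informative than the single
average number a speedtest reports.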

I have a plan to implement the complete test environment in a Docker
container at some point. I also have a collection of small, mostly
low-cost computers that I plan to benchmark for network throughput and
data-analysis time. Some of the tiny computers can saturate a GigE link but
are very slow at processing the data.


More information about the NANOG mailing list