Testing 1gbps bandwidth

Fred Baker (fred) fred at cisco.com
Tue Aug 14 20:22:23 UTC 2012

On Aug 14, 2012, at 4:40 AM, <valdis.kletnieks at vt.edu> wrote:

> On Tue, 14 Aug 2012 15:32:47 +0400, Luqman Kondeth said:
>> Is anyone aware of any public iperf servers in the Middle East or close
>> by (Europe), or anywhere that can do UDP? I have a 1 Gbps Internet link
>> which I've been asked to show has 1 Gbps download speeds.
> First thing that comes to mind is remembering the difference between
> end-to-end throughput and the throughput across one link in the chain.
> If you really need to validate the one link, you probably need to get some
> system to inject packets at the other end of the link.

You might take a look at http://www.ameinfo.com/broadband_speed_checker/. I can't say I know anything about them beyond what Google says they say about themselves, but they claim to be able to test such things.

Let me put hands and feet on what Valdis points out. With a gigabit interface, you can carry about 83,333 1500-byte packets per second. If you're trying to download a file from, say, an Akamai server, TCP will allow you to move one window per round trip. Without window scaling (i.e., with a window of at most 65,535 bytes), you can achieve 1 Gbps only if your round trip time is in the neighborhood of half a millisecond. Outside of a data center, such an RTT is Really Unusual. The obvious alternative is to use window scaling: if your RTT is 20 ms, you need to scale the window up by at least 40 times, which is to say a shift of 6 bits for a multiplier of 64. Even with that, TCP's normal way of operating (slow start) will prevent it from using the entire gigabit until quite a way into the session. You'll need a Really Long File.
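That arithmetic can be sketched in a few lines of Python. The function names and figures below are illustrative, not from any real tool; they just redo the packet-rate, window-limit, and bandwidth-delay-product math from the paragraph above:

```python
# Illustrative sketch of the arithmetic above: packet rate, window-limited
# throughput, bandwidth-delay product, and the window scale shift needed
# to fill a 1 Gbps link at a given RTT.

MAX_UNSCALED_WINDOW = 65_535  # bytes: largest TCP window without scaling

def packets_per_second(link_bps=1_000_000_000, packet_bytes=1500):
    """How many 1500-byte packets a link carries per second (headers ignored)."""
    return link_bps / (packet_bytes * 8)

def window_limited_throughput_bps(window_bytes, rtt_seconds):
    """TCP moves at most one window per round trip."""
    return window_bytes * 8 / rtt_seconds

def required_window_bytes(link_bps, rtt_seconds):
    """Bandwidth-delay product: the window needed to keep the pipe full."""
    return link_bps * rtt_seconds / 8

def required_scale_shift(window_bytes):
    """Smallest window-scale shift so the scaled window covers the BDP."""
    shift = 0
    while MAX_UNSCALED_WINDOW << shift < window_bytes:
        shift += 1
    return shift

print(round(packets_per_second()))                    # ~83,333 packets/s
print(window_limited_throughput_bps(65_535, 0.0005))  # ~1 Gbps at 0.5 ms RTT
bdp = required_window_bytes(1_000_000_000, 0.020)     # 2,500,000 bytes at 20 ms
print(bdp, required_scale_shift(bdp))                 # shift of 6 -> multiplier 64
```

A 2.5 MB window is roughly 40 of the unscaled 65,535-byte windows, and the next power-of-two shift that covers it is 6 bits, matching the figures above.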

The reason you get such an interface, I would imagine, is that you have a large number of users behind that interface and/or you are routinely moving a large amount of data. You can make it easier for yourself if you get a large number of your users to each download something really large all at the same time, and measure the performance at the interface.

Or, and this is a lot easier but involves math, you can turn on Wireshark/NetFlow/tcpdump/something that will record actual throughput, and download a file of your choosing. Later, offline, you can determine that you moved some number of bytes within some unit of time and that the ratio is 1 Gbps, even though you only ran the test for 20 ms or whatever.
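The offline math is just bytes observed on the wire divided by the measurement interval. A minimal sketch, with hypothetical byte and time figures standing in for whatever the capture tool actually recorded:

```python
# Achieved rate over a measured interval: bytes moved, times 8, over seconds.
# The 2,500,000 bytes / 20 ms figures are hypothetical examples.

def throughput_bps(bytes_moved, seconds):
    """Achieved rate in bits per second over the capture interval."""
    return bytes_moved * 8 / seconds

rate = throughput_bps(2_500_000, 0.020)
print(f"{rate / 1e9:.2f} Gbps")  # 1.00 Gbps
```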

Even those have caveats; upstream, you're sharing a link within your ISP with someone else. It's entirely possible that while your link will happily carry 1 Gbps, at the instant you test, the upstream link gets hit with some heavy load and AT THAT INSTANT has only 750 Mbps available for you, making your link look like it only supports 750 Mbps. That would be possible in any of the tests I just mentioned.

What Valdis is suggesting is to have someone at your ISP literally connect to their router and send you traffic at 1 Gbps or faster for a period of time, while you record that with Wireshark/NetFlow/etc. You can then do the math and record the result.
