Best utilizing fat long pipes and large file transfer
deepak at ai.net
Fri Jun 13 17:37:39 UTC 2008
Robert Boyle wrote:
> At 12:01 PM 6/13/2008, Kevin Oberman wrote:
>> Clearly you have failed to try very hard or to check into what others
>> have done. We routinely move data at MUCH higher rates over TCP at
>> latencies over 50 ms. one way (>100 ms. RTT). We find it fairly easy to
>> move data at over 4 Gbps continuously.
> That's impressive.
>> If you can't fill a GE to 80% (800 Mbps) at 30 ms, you really are not
>> trying very hard. Note: I am talking about a single TCP stream running
>> for over 5 minutes at a time on tuned systems. Tuning for most modern
>> network stacks is pretty trivial. Some older stacks (e.g. FreeBSD V6)
>> are hopeless. I can't speak to how Windows does as I make no use of it
>> for high-speed bulk transfers.
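[The window size needed to sustain rates like those follows from the bandwidth-delay product: a sender can have at most one window of unacknowledged data in flight per RTT. A quick sketch of the arithmetic — function name is illustrative, not from any particular tool:

```python
def bdp_bytes(bits_per_sec, rtt_sec):
    """Bandwidth-delay product: the TCP window (in bytes) required to
    keep a path of the given bandwidth and RTT continuously full."""
    return bits_per_sec / 8 * rtt_sec

# GigE at 30 ms RTT needs roughly 3.75 MB of window:
print(bdp_bytes(1e9, 0.030))   # 3750000.0 bytes
# 4 Gbps at 100 ms RTT needs roughly 50 MB:
print(bdp_bytes(4e9, 0.100))   # 50000000.0 bytes
```

Default socket buffers on most stacks are far below these numbers, which is why tuning matters at all. -ed.]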
> Let me refine my post then...
> In our experience, you can't get to line speed with over 20-30ms of
> latency using TCP on _Windows_ regardless of how much you tweak it. >99%
> of the servers in our facilities are Windows based. I should have been
> more specific.
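[One plausible explanation for the Windows ceiling — assuming a fixed, unscaled receive window on the order of 64 KB, as on older Windows stacks without window scaling enabled: a single stream's throughput is bounded by window/RTT regardless of link speed. A hedged sketch (function name is illustrative):

```python
def max_throughput_bps(window_bytes, rtt_sec):
    """Upper bound on a single TCP stream's throughput when the
    window is fixed: one window of data per round trip."""
    return window_bytes * 8 / rtt_sec

# A 64 KB window at 30 ms RTT caps out around 17.5 Mbps,
# nowhere near line rate on GigE:
print(max_throughput_bps(65535, 0.030) / 1e6)   # ~17.5 Mbps
```

-ed.]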
I'll stipulate that I haven't looked too deeply into this problem for
Windows. But I can't imagine it would be too hard to put a firewall/proxy
(think SOCKS) on each side of the long path and have the FW/proxies
adjust the TCP settings between themselves (or use an always-on, tuned
tunnel). It requires reasonably little invasion or reconfiguration of
the Windows hosts and is probably the least disruptive workaround.
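[A minimal sketch of that relay idea, assuming Python and hypothetical names; the untuned hosts speak short-RTT TCP to a nearby proxy, which forwards over a WAN-facing connection with buffers sized to the path. A real deployment would add listening/connect logic, error handling, and OS-level buffer limits:

```python
import socket
import threading

WAN_BUF = 8 * 1024 * 1024  # hypothetical 8 MB, sized to the path's BDP

def tune(sock, bufsize=WAN_BUF):
    """Request large kernel send/receive buffers on the WAN-facing
    socket (the kernel may clamp to its configured maximums)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
    return sock

def pump(src, dst):
    """Copy bytes one way until EOF, then signal EOF downstream."""
    while True:
        data = src.recv(65536)
        if not data:
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def relay(client_sock, wan_sock):
    """Splice an untuned local connection onto a tuned WAN connection,
    pumping both directions until either side closes."""
    tune(wan_sock)
    t = threading.Thread(target=pump, args=(wan_sock, client_sock))
    t.start()
    pump(client_sock, wan_sock)
    t.join()
```

The Windows boxes then only ever see a LAN-sized RTT, and the well-tuned proxies carry the long-latency hop. -ed.]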