TCP congestion control and large router buffers

Carsten Bormann cabo at tzi.org
Thu Dec 23 18:00:52 UTC 2010


Some more historical pointers:

If you want to look at the early history of the latency discussion,
look at Stuart Cheshire's famous rant "It's the Latency, Stupid"
(http://rescomp.stanford.edu/~cheshire/rants/Latency.html).  Then look
at Matt Mathis's 1997 TCP equation (and the 1998 Padhye-Firoiu version
of that): the throughput is inversely proportional to the RTT and to
the square root of the packet loss rate -- so as the RTT starts
growing due to increasing buffers, the packet loss must grow to keep
equilibrium!
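
Roughly, the Mathis relation says rate ~= (MSS/RTT) * C/sqrt(p) with C
around 1.2; a quick sketch of what that means (the MSS, RTT and loss
numbers here are purely illustrative, not from any of the papers):

    import math

    def mathis_rate(mss_bytes, rtt_s, loss_rate, c=1.22):
        # Approximate steady-state TCP throughput in bytes/s:
        #   rate ~= (MSS / RTT) * C / sqrt(p)
        return (mss_bytes / rtt_s) * c / math.sqrt(loss_rate)

    # Same loss rate, but the RTT inflated 10x by a filling buffer:
    # the predicted rate drops 10x; to hold the rate constant instead,
    # the product RTT * sqrt(p) would have to stay constant.
    print(mathis_rate(1460, 0.05, 0.01))   # ~50 ms RTT
    print(mathis_rate(1460, 0.50, 0.01))   # ~500 ms RTT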

By the late 1990s we understood pretty well that you have to drop
packets in order to limit queueing.  E.g., RFC 3819 contains an
explicit warning against keeping packets for too long (section 13).
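
To put a number on "too long" (illustrative figures, not
measurements): a buffer that holds 256 kB in front of a 1 Mbit/s
uplink adds about two seconds of queueing delay once it fills:

    buffer_bytes = 256 * 1024            # assumed buffer size
    link_bps = 1_000_000                 # assumed 1 Mbit/s uplink
    print(buffer_bytes * 8 / link_bps)   # ~2.1 s of added delay when full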

But, as you note, on faster networks the bufferbloat effect can be
limited by intelligent window size management; the then-dominant
Windows XP was not intelligent, just limited in its widely used
default configuration.  So the first ones to fully see the effect
were the ones with many TCP connections, i.e. BitTorrent users.  The
modern window size "tuning" schemes in Windows 7 and Linux break a
lot of things -- you are just describing the tip of the iceberg here.
The IETF working group LEDBAT (motivated by the BitTorrent
observations) has been working on a scheme to run large transfers
without triggering humongous buffer growth.
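
For a rough sense of why a fixed window limits the damage while
auto-tuning (or many parallel connections) does not, a sketch with
made-up numbers (link rate, base RTT and window sizes are assumed):

    link_Bps = 1_000_000          # assumed 8 Mbit/s bottleneck, in bytes/s
    base_rtt_s = 0.05             # assumed 50 ms un-queued RTT
    bdp = link_Bps * base_rtt_s   # ~50 kB usefully "in flight"

    # fixed XP-style window vs. an auto-tuned window (assumed value)
    for window in (64 * 1024, 4 * 1024 * 1024):
        standing_queue = max(0, window - bdp)   # excess sits in the router buffer
        print(window, "->", standing_queue / link_Bps, "s of standing queue")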

Gruesse, Carsten




