"Does TCP Need an Overhaul?" (internetevolution, via slashdot)
Kevin Day
toasty at dragondata.com
Mon Apr 7 14:20:07 UTC 2008
On Apr 7, 2008, at 7:17 AM, Iljitsch van Beijnum wrote:
>
> On 5 Apr 2008, at 12:34, Kevin Day wrote:
>
>> As long as you didn't drop more packets than SACK could handle
>> (generally 2 packets in-flight), dropping packets is pretty
>> ineffective at causing TCP to slow down.
>
> It shouldn't be. TCP hovers around the maximum bandwidth that a path
> will allow (if the underlying buffers are large enough). It
> increases its congestion window in congestion avoidance until a
> packet is dropped; then the congestion window shrinks, but it also
> starts growing again.
>
> I'm sure this behavior isn't any different in the presence of SACK.
>
At least in FreeBSD, packet loss handled by SACK recovery changes the
congestion window behavior. During SACK recovery, the congestion
window is clamped so that no more than 2 additional segments are in
flight, but that clamp only lasts until the recovery completes, and
the window grows back quickly afterward. (That glosses over a lot of
details that probably only matter to people who already know them -
don't shoot me for it not being 100% accurate :) )
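
To make that concrete, here's a toy sketch in Python - made-up names,
nothing like the actual kernel code, just the shape of the behavior:

    # Toy sketch of the behavior described above -- NOT the real
    # FreeBSD kernel logic; names and structure are invented.
    MSS = 1460  # bytes per segment

    def cwnd_during_sack_recovery(bytes_in_flight):
        # While SACK recovery is in progress, the sender is limited to
        # roughly two segments beyond what it estimates is in flight,
        # so it keeps trickling new data instead of going idle.
        return bytes_in_flight + 2 * MSS

    def cwnd_after_recovery(ssthresh):
        # Once recovery completes, the window restarts from ssthresh
        # and grows again under normal congestion avoidance, so the
        # connection gets back up to speed quickly.
        return ssthresh
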
I don't believe Linux or Windows are quite that aggressive with SACK
recovery, but I'm less familiar with those stacks.
As a quick example, I ran a single HTTP TCP session from a server to
a client on two FreeBSD 7.0 boxes attached directly over GigE, with
New Reno, fast retransmit/recovery, and 256K window sizes, and an
intermediary router simulating packet loss:
SACK enabled,  0% packet loss:     780Mbps
SACK disabled, 0% packet loss:     780Mbps
SACK enabled,  0.005% packet loss: 734Mbps
SACK disabled, 0.005% packet loss: 144Mbps (19.6% the speed of having SACK enabled)
SACK enabled,  0.01% packet loss:  664Mbps
SACK disabled, 0.01% packet loss:  88Mbps (13.3%)
However, this falls apart pretty fast once the packet loss is high
enough that the connection rarely gets out of the recovery phase.
It's still much better than without SACK though:

SACK enabled,  0.1% packet loss: 48Mbps
SACK disabled, 0.1% packet loss: 36Mbps (75%)
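
Some back-of-the-envelope arithmetic (Python; 1460-byte segments
assumed) shows why the non-SACK numbers collapse: at these speeds even
0.01% loss means a drop roughly every 150-200 ms, and every drop that
fast retransmit can't repair costs a retransmission timeout typically
measured in hundreds of milliseconds, so a non-SACK sender can spend a
large fraction of its time stalled:

    # Back-of-the-envelope: average time between drops at a given
    # loss rate. Assumes 1460-byte segments; purely illustrative.
    MSS_BITS = 1460 * 8

    def seconds_between_losses(throughput_bps, loss_rate):
        segments_per_second = throughput_bps / MSS_BITS
        return 1.0 / (segments_per_second * loss_rate)

    # At 780 Mbps with 0.01% loss: a drop roughly every 0.15 seconds.
    print(seconds_between_losses(780e6, 0.0001))
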
> However, the caveat is that the congestion window never shrinks
> below two maximum segment sizes. If packet loss is such that you
> reach that size, then more packet loss will not slow down sessions.
> Note that for short RTTs you can still move a fair amount of data in
> this state, but any lost packet means a retransmission timeout,
> which stalls the session.
>
True, a longer RTT changes this effect. Here's the same test, but
instead of back-to-back GigE it runs over a real-world transatlantic
link:
SACK enabled,  0% packet loss:     2.22Mbps
SACK disabled, 0% packet loss:     2.23Mbps
SACK enabled,  0.005% packet loss: 2.03Mbps
SACK disabled, 0.005% packet loss: 1.95Mbps (96%)
SACK enabled,  0.01% packet loss:  2.01Mbps
SACK disabled, 0.01% packet loss:  1.94Mbps (96%)
SACK enabled,  0.1% packet loss:   1.93Mbps
SACK disabled, 0.1% packet loss:   0.85Mbps (44%)
(No, this wasn't a scientifically rigorous test, but it's the best I
can do on an early Monday morning.)
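
The usual rule of thumb here is the Mathis et al. approximation:
steady-state Reno throughput is roughly MSS / (RTT * sqrt(loss rate)).
A quick Python sketch - the 80 ms RTT is an assumed round number for
a transatlantic path, not something I measured:

    # Mathis et al. steady-state approximation for Reno-style TCP:
    #   throughput ~= MSS / (RTT * sqrt(loss_rate))
    # The 80 ms RTT below is an assumed value, not a measurement.
    from math import sqrt

    def mathis_bps(mss_bytes, rtt_seconds, loss_rate):
        return (mss_bytes * 8) / (rtt_seconds * sqrt(loss_rate))

    # 0.1% loss over an assumed 80 ms transatlantic RTT:
    print(mathis_bps(1460, 0.080, 0.001) / 1e6, "Mbps")  # ~4.6 Mbps

Since even that loss-limited ceiling sits above what this roughly
2.2 Mbps path delivers at 0% loss, mild loss hardly shows up in the
numbers, and it takes 0.1% before the non-SACK session really suffers.
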
>> You've also got fast retransmit, New Reno, BIC/CUBIC, as well as
>> host parameter caching to limit the effect of packet loss on
>> recovery time.
>
> The really interesting one is TCP Vegas, which doesn't need packet
> loss to slow down. But Vegas is a bit less aggressive than Reno
> (which is what's widely deployed) or New Reno (which is also
> deployed but not so widely). This is a disincentive for users to
> deploy it, but it would be good for service providers. Additional
> benefit is that you don't need to keep huge numbers of buffers in
> your routers and switches because Vegas flows tend to not overshoot
> the maximum available bandwidth of the path.
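
(For reference, the Vegas signal is delay-based rather than
loss-based. Here's a toy Python sketch using the alpha/beta segment
thresholds from the original Vegas paper - illustrative only, not how
any production stack writes it:)

    # Toy sketch of the Vegas signal; ALPHA/BETA are the per-RTT
    # segment thresholds from the original Vegas paper. Real
    # implementations differ in many details.
    ALPHA, BETA = 1, 3

    def vegas_adjust(cwnd, base_rtt, current_rtt):
        expected = cwnd / base_rtt    # rate if no queueing at all
        actual = cwnd / current_rtt   # rate actually being achieved
        queued = (expected - actual) * base_rtt  # est. segs in queues
        if queued < ALPHA:
            return cwnd + 1  # path looks underused: speed up
        if queued > BETA:
            return cwnd - 1  # queues building: back off before loss
        return cwnd
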
It would be very nice if more network-friendly protocols were in use,
but between "download optimizers" for Windows that crank the TCP
window sizes way up, the general move toward solving latency by
opening more sockets, and P2P doing whatever it can to evade ISP
detection, it's probably a bit late.
-- Kevin