Fast TCP?

Deepak Jain deepak at ai.net
Thu Jun 5 03:41:22 UTC 2003



> Glad this came up as I have been reading this paper -
>
> Does Figure 1 in
> http://netlab.caltech.edu/pub/papers/fast-030401.pdf
>
> seem reasonable? Will 100 RED TCP flows really only fill 90% of a 155
> Mbps pipe but 87% of a 2.4 Gbps connection and 75% of a 4.8 Gbps
> connection? This seems strangely non-linear to me.
>
> A more fundamental question is: is this really useful except in the
> case of very high bandwidth single flows (such as e-VLBI, particle
> physics, or uncompressed HDTV)? After all, isn't the current standard
> practice not to come close to fully utilizing backbone bandwidth?

I think the idea is (similar to the 1 Gb/s single-stream test a few months
ago) that the concerns of academics are not exactly in line with those of
network operators. The problem with a non-stabilized TCP Vegas on a very
fast pipe [with a small number of streams] is that as delays get large
(relative to the capacity of the connection), the window you have to grow
into to fully utilize the bandwidth becomes very large, practically
unreachable. With TCP Reno (which seems to be where they find the biggest
fault) a single packet drop causes far more severe problems. Since RED
causes packet drops, high-speed streams that get RED'd are in an immense
world of pain. Further, since a typical delayed-ACK timer is only on the
order of 100 ms, at these speeds that is a lot of data that either isn't
transmitted over the network or has to be retransmitted and resequenced.
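
To put rough numbers on why a single drop hurts so much, here is a
back-of-the-envelope sketch in Python. The link speed, RTT, and segment
size are illustrative values I picked, not figures from the paper; the
arithmetic is just standard Reno behavior (halve the window on loss, add
back about one segment per RTT):

# Cost of a single packet drop to one Reno flow on a fast, long path.
# Link speed, RTT, and MSS below are illustrative, not from the paper.

link_bps = 2.4e9   # 2.4 Gbps pipe
rtt_s    = 0.100   # 100 ms round-trip time
mss      = 1460    # bytes per segment

# Bandwidth-delay product: segments in flight needed to fill the pipe.
bdp_segments = link_bps * rtt_s / (8 * mss)

# After one drop, Reno halves its window, then adds back roughly one
# segment per RTT, so recovery takes about bdp_segments / 2 RTTs.
recovery_s = (bdp_segments / 2) * rtt_s

print(f"segments to fill the pipe: {bdp_segments:,.0f}")
print(f"time to recover from one drop: {recovery_s:,.0f} s "
      f"(~{recovery_s / 60:.0f} minutes)")

That works out to about 20,500 segments in flight and roughly 17 minutes
of below-capacity transmission for every drop, which is why RED'ing such
a stream is so painful.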

If you have many streams (where each one represents a small portion of
your network link, whether backbone or CPE), you can easily fill your
pipe; this is common experience. If you aren't using RED [or similar] to
manage congestion, you can get by with a smaller number of streams. When
you have a single stream (or a small number of streams), you need larger
windows, more tolerance for latency, and a greater willingness to buffer
data rather than drop it. I think this is all well understood at a
common-sense level.
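
The window arithmetic behind that is simple. A quick sketch (same
illustrative 2.4 Gbps / 100 ms path as above; the flow counts are just
examples):

# Per-flow window needed to keep a pipe full, versus how many flows
# share it. Parameters are illustrative examples.

def window_per_flow_bytes(link_bps, rtt_s, n_flows):
    """Bytes of window each of n_flows needs to keep the link full."""
    bdp_bytes = link_bps * rtt_s / 8   # bandwidth-delay product
    return bdp_bytes / n_flows

for n in (1, 10, 100):
    w = window_per_flow_bytes(2.4e9, 0.100, n)
    print(f"{n:4d} flows -> {w / 1e6:5.2f} MB window each")

One flow needs a 30 MB window; 100 flows need a modest 300 KB each,
which is why many streams fill a pipe so much more easily.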

I think the academics (the practice, not the people) are the ones who
will figure out some idealized set of variables for a slightly modified
version of the equation we all use for bits-in-flight calculations. I
think they mention in the paper that they will start by stabilizing TCP
Vegas for a high-latency, high-speed link. I could be wrong (about my
understanding or about what is considered common sense).
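
For anyone who hasn't looked at Vegas recently, the delay-based rule they
would be stabilizing is roughly the following. This is a simplified
sketch of textbook TCP Vegas congestion avoidance, not the paper's actual
algorithm, and alpha/beta are the commonly cited defaults:

# Simplified sketch of TCP Vegas congestion avoidance, run once per
# RTT. Not the paper's algorithm; alpha/beta are conventional values.

ALPHA, BETA = 2, 4   # target backlog, in segments queued in the path

def vegas_update(cwnd, base_rtt, current_rtt):
    """Return the new congestion window (in segments)."""
    expected = cwnd / base_rtt             # rate with an empty queue
    actual   = cwnd / current_rtt          # rate actually achieved
    diff = (expected - actual) * base_rtt  # estimated queued segments
    if diff < ALPHA:
        return cwnd + 1   # path underused: grow linearly
    if diff > BETA:
        return cwnd - 1   # queue building: back off linearly
    return cwnd           # in the target band: hold steady

# Example: 30-segment window, 100 ms base RTT, 110 ms measured RTT
print(vegas_update(30, 0.100, 0.110))   # -> 30 (holds steady)

The appeal for a fast, long pipe is that it reacts to queueing delay
before packets are dropped, instead of waiting for the loss that Reno
needs as its only signal.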

I am not sure why sending a single large/high-speed stream today
(>1 Gb/s) is such an improvement over sending multiple streams of today's
sizes, but I guess that is the difference between a get-it-done-right and
a get-it-done-now mentality.

Deepak Jain
AiNET




