No subject

Matt Mathis mathis at pele.psc.edu
Thu Oct 27 19:13:37 UTC 1994


Thanks for the notes!

My comments are belated responses to the participants, since I was unable to
be at the meeting.

>Throughput degraded when the TCP window size is greater than 13000 bytes.

We never use a maximum window size this small.  Our system default is 32k,
which is 1.5 times the actual pipe size for a T1-connected site at 120 ms.
This is near optimal for typical users on the West Coast, who are one or two
T1 hops away from the current NSFnet.  It is slightly too aggressive for
typical users on the East Coast, but it is an order of magnitude too small for
many of our users in Boston, San Francisco, Champaign-Urbana, etc.
Furthermore, I believe that a number of vendors are shipping workstations with
large default window sizes, including SGI IRIX 5.2 on all platforms and OSF/1
for the DEC Alpha.  A 13000-byte maximum window size is insufficient.

I would like to "second" Curtis' remarks about the impact of round trip delay
on traffic burstiness.  The essence of the problem is that TCP controls the
total amount of data out in the network, but has no control over the
distribution of that data within one round trip time.  Slow start and the
"turbulence" effects discussed in Lixia Zhang's paper on two-way traffic
(SIGCOMM '92) tend to maximize this burstiness.

I have recently become aware of a weaker criterion for success that should
also be considered.

If you imagine an infinite-bandwidth network with finite delay and loss rates,
TCP will run at some finite rate determined by the delay, MTU, and loss rate.

A quick back-of-the-envelope calculation (neglecting many possibly important
terms) yields:

BW = MTU/RTT * sqrt(1.5/Loss)

Or, for the BW to be congestion controlled,

Loss <  1.5 * (MTU/RTT/BW)**2   (please excuse the Fortran ;-)
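
For anyone who wants to plug their own numbers into the same back-of-the-envelope
arithmetic, here is a minimal Python sketch (the function names and the choice of
bits per second and seconds as units are mine, and it neglects the same terms as
the formula above):

    from math import sqrt

    def tcp_bw(mtu_bytes, rtt_s, loss):
        # Approximate achievable rate in bits/s:  BW = MTU/RTT * sqrt(1.5/Loss)
        return (mtu_bytes * 8 / rtt_s) * sqrt(1.5 / loss)

    def max_loss(mtu_bytes, rtt_s, bw_bps):
        # Largest loss rate at which bw_bps can still be congestion controlled:
        # Loss < 1.5 * (MTU/RTT/BW)**2
        return 1.5 * (mtu_bytes * 8 / (rtt_s * bw_bps)) ** 2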

So for Curtis to reach 40 Mb/s with a 4k MTU and 70 ms RTT, the TOTAL
END-TO-END loss must have been less than 0.02% of the packets.  Since each
packet would be about 1000 cells.....

To reach 10 Mb/s with a 1500-byte MTU, the same path needs to have better than
a 0.05% end-to-end loss rate.  PSC did a demo with LBL at NET'91, on a
two-month-old T3 NSFnet (actually running at half T3), where we achieved near
these rates (the RTT was only 20 ms, so the loss might have been as high as
0.12%).
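
As a sanity check on the two worked examples above, plugging them into the
sketch from earlier (taking "4k" to mean 4096 bytes is my assumption):

    print(max_loss(4096, 0.070, 40e6))   # ~2.1e-4, i.e. about 0.02% for the 40 Mb/s case
    print(max_loss(1500, 0.070, 10e6))   # ~4.4e-4, i.e. just under 0.05% for the 10 Mb/s case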

Practical experience suggests that this calculation is not pessimistic enough,
and that actual loss rates must be significantly better.  For one thing, it
assumes an absolutely state-of-the-art TCP (Tahoe is not good enough!);
otherwise performance drops by at least an order of magnitude.

--MM--




