why upload with adsl is faster than 100M ethernet ?

Iljitsch van Beijnum iljitsch at muada.com
Fri Oct 15 11:33:44 UTC 2004

On 15-okt-04, at 12:04, Joe Shen wrote:

> Your explanation of TCP behavior seems reasonable, but
> why does TCP over a fast access line see so much more
> packet loss than over a slow access line? Do Windows
> XP/Win2k set their initial send window according to
> access speed or path MTU?

I don't think there is much of a difference in the actual window size. 
But assuming 1500 byte packets, you can transmit a packet over 100 Mbps 
every 121 microseconds, while over 2 Mbps this is every 6 milliseconds. 
Now suppose that somewhere along the path there is a link that has some 
congestion. This means that most of the time, that link is busy so when 
a packet comes in, it must wait until it can be transmitted. This isn't 
much of an issue, unless so many packets come in that the buffers fill 
up. After that, any additional packets will be dropped (tail drop) 
until more packets have been transmitted and buffer space frees up 
again.
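A quick back-of-the-envelope check of those numbers (a Python sketch; 
note the 121 microsecond figure in the text includes Ethernet framing 
overhead on top of the 1500-byte payload, so the bare-payload result 
comes out slightly lower):

```python
def serialization_delay_us(packet_bytes, link_bps):
    """Time to put one packet on the wire, in microseconds."""
    return packet_bytes * 8 / link_bps * 1e6

print(serialization_delay_us(1500, 100e6))  # 120.0 us on 100 Mbps
print(serialization_delay_us(1500, 2e6))    # 6000.0 us = 6 ms on 2 Mbps
```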

In this situation, it's likely that the 100 Mbps link will deliver 
several packets while the buffer is full, so that several successive 
packets are dropped. But ADSL's 6 ms spacing leaves time for the 
buffer to drain between any two packets. Tail drops will still 
happen, but it's less likely that several successive packets 
are dropped. And as I wrote in my previous message, TCP will slow down 
slightly for each individual packet dropped, but it will slow down 
dramatically when three successive packets are dropped.
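To illustrate, here's a toy queue simulation (Python; the bottleneck 
drain rate of one packet per 500 microseconds and the 5-packet buffer 
are made-up numbers, not measurements). Packets arriving every 121 
microseconds overrun the buffer and get dropped in runs of several at 
a time, while packets arriving every 6 ms never fill it at all:

```python
def longest_drop_run(spacing_us, drain_us=500.0, buf_slots=5, n_packets=50):
    """Longest run of consecutive tail drops at a congested hop.

    The queue drains one packet every drain_us; an arrival that
    finds the buffer full is dropped (tail drop).
    """
    q = 0.0          # packets currently queued
    longest = run = 0
    for _ in range(n_packets):
        # buffer drains between arrivals
        q = max(0.0, q - spacing_us / drain_us)
        if q >= buf_slots:
            run += 1                 # tail drop
            longest = max(longest, run)
        else:
            q += 1                   # packet accepted
            run = 0
    return longest

print(longest_drop_run(121))    # fast sender: several back-to-back drops
print(longest_drop_run(6000))   # ADSL spacing: 0, queue drains in time
```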

If congestion is indeed the cause, it would help to increase the 
buffer space and turn on random early detection (RED, also called 
random early discard), so packet drops increase gradually as the 
buffers fill up and tail drops are avoided.
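RED's drop behavior can be sketched like this (Python; the thresholds 
and maximum probability below are illustrative defaults, not values 
from any particular router):

```python
def red_drop_probability(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Classic RED drop curve: no drops below min_th, probability
    ramps linearly up to max_p at max_th, everything dropped above."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

Because drops start early and hit random flows one packet at a time, 
each TCP connection backs off a little instead of many connections 
losing bursts of successive packets when the buffer finally overflows.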

However, the cause can also be rate limiting. Rate limiting is deadly 
for TCP performance so it shouldn't be used on TCP traffic.

More information about the NANOG mailing list