Question about propagation and queuing delays

Richard A Steenbergen ras at e-gerbil.net
Mon Aug 22 17:56:48 UTC 2005


On Mon, Aug 22, 2005 at 11:14:04AM -0400, David Hagel wrote:
> This is interesting. This may sound like a naive question. But if
> queuing delays are so insignificant in comparison to other fixed delay
> components then what does it say about the usefulness of all the
> extensive techniques for queue management and congestion control
> (including TCP congestion control, RED and so forth) in the context of
> today's backbone networks? Any thoughts? What do the people out there
> in the field observe? Are all the congestion control researchers out
> of touch with reality?

Queueing only matters if you are a) congested, or b) on a really slow 
circuit.

On a 33.6k modem, the delay to serialize a 1500 byte packet is something 
like 450ms. During the transmission the pipe is effectively locked, 
causing instantaneous congestion. You cannot transmit anything else 
until that block of data has finished, not even a small, quick packet 
like an interactive SSH keystroke. This makes interactive sessions (and 
chatty protocols) painfully slow.
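The arithmetic behind that figure can be sketched quickly. Raw bit math gives about 357ms; counting 10 bits per byte for async serial framing (start/stop bits — an assumption about where the ~450ms number comes from) gets you to about 446ms:

```python
# Serialization delay for a 1500-byte packet on a 33.6 kbit/s modem.
# Raw math assumes 8 bits on the wire per byte; the "framed" figure
# assumes 10 bits per byte (start/stop bits on an async serial link),
# which lands near the ~450ms quoted above.
PACKET_BYTES = 1500
LINK_BPS = 33_600

raw_ms = PACKET_BYTES * 8 / LINK_BPS * 1000
framed_ms = PACKET_BYTES * 10 / LINK_BPS * 1000
print(f"raw: {raw_ms:.0f} ms, with async framing: {framed_ms:.0f} ms")
# raw: 357 ms, with async framing: 446 ms
```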

During this time more packets are piling up in the queue, potentially 
including more large packets that will lock the pipe up even longer. 
Intelligent queueing can transmit the smaller, quicker packets already 
in the queue first, optimizing your interactive sessions in the face of 
high serialization and queueing delays.
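A toy sketch of the idea, dequeuing by packet size so interactive traffic jumps ahead of bulk transfers (real schedulers classify by protocol or DSCP marking rather than raw size; this is purely illustrative):

```python
import heapq

# Size-based priority queue on a slow link: the smallest queued packet
# is always transmitted next, so a 64-byte SSH keystroke doesn't wait
# behind 1500-byte bulk-transfer packets already in the queue.
queue = []
for size in (1500, 1500, 64, 1500, 64):   # bytes, in arrival order
    heapq.heappush(queue, size)

order = [heapq.heappop(queue) for _ in range(len(queue))]
print(order)  # small interactive packets drain first: [64, 64, 1500, 1500, 1500]
```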

This is still fairly noticeable on a T1, but much above that it becomes 
pretty insignificant. If you have a good eye and a good internal timer 
on your OS, you can spot the difference between FastE and GigE in your 
local network ping times (usually around 0.2 to 0.3ms). By the time you 
get to GigE and beyond, we're talking microseconds.
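Running the same serialization math across link speeds shows why, including the ~0.2ms FastE-vs-GigE ping delta (a ping crosses the wire twice, so the round trip pays the serialization difference twice):

```python
# Serialization delay for a 1500-byte packet at various link speeds,
# illustrating why it stops mattering as speeds grow.
def ser_ms(packet_bytes, link_bps):
    return packet_bytes * 8 / link_bps * 1e3

for name, bps in [("33.6k modem", 33_600), ("T1", 1_544_000),
                  ("FastE", 100_000_000), ("GigE", 1_000_000_000)]:
    print(f"{name:12s} {ser_ms(1500, bps):8.3f} ms")

# FastE vs GigE ping delta: two serializations per round trip.
delta = 2 * (ser_ms(1500, 100e6) - ser_ms(1500, 1e9))
print(f"ping delta ~{delta:.2f} ms")   # ~0.22 ms, matching the figure above
```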

Of course, the other reason for queue management technologies like RED 
is to provide better handling in the face of congestion. On a large 
circuit with many thousands of TCP flows crossing it, each acting 
independently, tail drop under congestion tends to synchronize the 
flows' congestion control attempts: they all detect congestion and back 
off at once, then all ramp back up and beat the hell out of the circuit 
simultaneously. Rinse, wash, repeat.
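The core of RED is a drop probability that ramps up with the average queue depth, so flows get their congestion signals spread out in time instead of all at once. A minimal sketch, with illustrative thresholds (real implementations use an EWMA of queue depth and tuned parameters):

```python
import random

# Toy RED (Random Early Detection) drop decision. Between a min and max
# queue-depth threshold, each arriving packet is dropped with a
# probability that grows linearly, desynchronizing TCP flows instead of
# tail-dropping whole bursts at once. Thresholds here are made up.
MIN_TH, MAX_TH, MAX_P = 20, 80, 0.1   # packets, packets, peak drop prob

def red_drop_probability(avg_queue_len):
    if avg_queue_len < MIN_TH:
        return 0.0                    # below min threshold: never drop
    if avg_queue_len >= MAX_TH:
        return 1.0                    # above max: behave like tail drop
    # linear ramp between the thresholds
    return (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH) * MAX_P

def red_accept(avg_queue_len):
    """True if the arriving packet is enqueued, False if dropped."""
    return random.random() >= red_drop_probability(avg_queue_len)

print(red_drop_probability(10), red_drop_probability(50),
      red_drop_probability(90))      # 0.0 0.05 1.0
```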

Obviously this is all bad, and a little application of technology can 
help you squeeze a bit more life out of a congested pipe, preventing 
the queue (and thus the latency) from shooting skyward or bouncing 
around all over the place as soon as the pipe starts to get congested. 
But it really has nothing to do with what you perceive as "latency" on 
a normal-state, modern, non-congested, long-distance Internet backbone 
network. Of course, your congested cable modem down the street, with 
30ms of jitter even during "normal" operation, doesn't fit the same 
model. :)

-- 
Richard A Steenbergen <ras at e-gerbil.net>       http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)


