Tail Drops and TCP Slow Start

Rodney Dunn rodunn at cisco.com
Mon Dec 10 19:48:15 UTC 2001



On Fri, Dec 07, 2001 at 11:12:39AM -0600, Murphy, Brennan wrote:
> 
> If I have a DS3 or OC3 handling mounds and mounds of FTP download traffic,
> what is the easiest way to detect if the bandwidth in use is falling into
> a classic Tail Drop pattern?  According to a Cisco book I am reading, the
> bandwidth utilization should graph in a "sawtooth" pattern of gradual
> increases in accordance with multiple machines gradually increasing
> via TCP slow start and then sharp drops. Will this only happen when
> the utilization approaches 100%. (maybe dumb question)

It could be either/or.  If the link is oversubscribed you may see what
you are describing via the 'bits/sec' counter in 'sh int'.  Turn the timers
down via 'load-interval' to get a more granular timeframe.  Keep in mind this
link could be at 50% utilization while the upstream link feeding it is running
at maximum capacity, so the 50% you see locally would show the same behavior.
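
For example (the interface name here is just a placeholder; use your DS3/OC3
interface), dropping the load interval to the 30-second minimum:

    interface Serial1/0
     load-interval 30

Then watch the '30 second input rate ... bits/sec' and output rate lines in
'sh int Serial1/0' while the transfers are running; a tail-drop sawtooth shows
up as the rate climbing toward line rate and then falling back sharply.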

> 
> Should I be able to do a show buffers and see misses or is there some
> better way to detect other than via graphing?  

'sh buffers' really isn't what you want to look at.  The 'bits/sec' counter
is more in line with the throughput on the interface; turn the load interval
down for better granularity.  If you are seeing buffer misses there are usually
other issues going on, like very bursty traffic or other resource contention.
Typically buffer misses show up more on LAN segments, and I don't usually
recommend changing the defaults because most of the time there is some other
underlying issue that buffer tuning is just hacking around.
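
For what it's worth, in the 'sh buff' output you pasted below the counters to
watch are the per-pool miss and failure lines, e.g. for the small buffers:

    2225528470 hits, 6 misses, 18 trims, 18 created
    0 failures (0 no memory)

Six misses against a couple billion hits is noise, so there is nothing there
worth tuning.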

> 
> Also, suppose in examining my ftp traffic patterns that I noticed that it
> spikes at 15 minutes after the top of the hour, consistently, etc.
> Could I create a timed access list to only kick in at that time?  
> Anyone have experience with WRED to handle ftp congestion?

It's more of a dynamic thing than that.  WRED will smooth out the curve for
you if the link you are working on is the source of the problem.  What were
you suggesting to do with the ACL anyway if it did kick in?

Say, for example, you see the rate on a DS3 vary from 30M to 45M in a sawtooth
manner.  After applying WRED the swing between the high and low points of the
peaks should be smaller, and monitoring the throughput on the interface should
show it staying consistently closer to line rate for that circuit.
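
As a rough sketch (the interface name and threshold values are placeholders,
and WRED support depends on platform/IOS), enabling it on the DS3 is just:

    interface Serial1/0
     random-detect

You can optionally tune the per-precedence thresholds, e.g.:

    interface Serial1/0
     random-detect precedence 0 20 40 10

where 20 and 40 are the min/max average queue thresholds in packets and 10 is
the mark probability denominator (1 in 10 packets dropped at the max
threshold).  The defaults are usually a reasonable starting point.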

> 
> I usually take these types of questions to Cisco but I thought I'd post
> it to this list to get any generic real world advice. 

This comes from lab testing and real world experience.

hth,
rodney


> sh buff
> Buffer elements:
>      499 in free list (500 max allowed)
>      5713661 hits, 0 misses, 0 created
> 
> Public buffer pools:
> Small buffers, 104 bytes (total 600, permanent 600):
>      580 in free list (20 min, 1250 max allowed)
>      2225528470 hits, 6 misses, 18 trims, 18 created
>      0 failures (0 no memory)
> Middle buffers, 600 bytes (total 450, permanent 450):
>      448 in free list (10 min, 1000 max allowed)
>      68259213 hits, 7 misses, 21 trims, 21 created
>      0 failures (0 no memory)
> Big buffers, 1524 bytes (total 450, permanent 450):
>      449 in free list (5 min, 1500 max allowed)
>      6807747 hits, 0 misses, 0 trims, 0 created
>      0 failures (0 no memory)
> VeryBig buffers, 4520 bytes (total 50, permanent 50):
>      50 in free list (0 min, 1500 max allowed)
>      46167681 hits, 0 misses, 0 trims, 0 created
>      0 failures (0 no memory)
> Large buffers, 5024 bytes (total 50, permanent 50):
>      50 in free list (0 min, 150 max allowed)
>      0 hits, 0 misses, 0 trims, 0 created
>      0 failures (0 no memory)
> Huge buffers, 18024 bytes (total 5, permanent 5):
>      5 in free list (0 min, 65 max allowed)
>      34 hits, 6 misses, 12 trims, 12 created
>      0 failures (0 no memory)
> 
> Interface buffer pools:
> IPC buffers, 4096 bytes (total 768, permanent 768):
>      768 in free list (256 min, 2560 max allowed)
>      769236774 hits, 0 fallbacks, 0 trims, 0 created
>      0 failures (0 no memory)
> 
> Header pools:


