Cisco DS3 performance specifications

David Sinn dsinn at
Tue May 8 19:36:24 UTC 2001

Any multiport card on the GSR can have a port-blocking problem if one
port is oversubscribed.  This is true regardless of the engine type or
the amount of memory on the card (though more memory gives you more
breathing room).

By default there are no queue limits for a given port on a multiport GSR
card.  So if you have a port that is in a consistent state of
oversubscription, with packets backing up in the local TX buffer, that
port can eat all of the memory and starve off the other ports.
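To make the starvation concrete, here is a toy model (an illustration,
not actual GSR internals) of a shared buffer pool with no per-port
limits: whichever port backs up first grabs all the free memory.

```python
# Toy shared TX buffer pool: packets are buffered first come, first
# served with no per-port cap, so a backed-up port exhausts the pool.

def fill_pool(pool_size, arrivals):
    """arrivals: ordered (port, packets) events while the links are
    blocked.  Returns packets actually buffered per port."""
    buffered, free = {}, pool_size
    for port, count in arrivals:
        take = min(count, free)          # no limit other than free memory
        buffered[port] = buffered.get(port, 0) + take
        free -= take
    return buffered

# Port 0 is chronically oversubscribed and backs up before ports 1-3:
print(fill_pool(512, [(0, 500), (1, 10), (2, 10), (3, 10)]))
# -> {0: 500, 1: 10, 2: 2, 3: 0}  ... port 3 is starved outright
```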

It does not matter which processor is doing the switching (and
regardless, the processor *for the most part* does not touch any of the
traffic in the outbound direction anyway).  One should be cognizant
that E0s have around 400 kpps of forwarding performance, E1 around
700 kpps, E2 up to 4 Mpps, and E4 up to 25 Mpps, which is shared among
the ports on the card, so plan accordingly for the traffic patterns you
are expecting to see.

The simplest fix for this is to set a hard TX queue limit for each
port.  That way any one port can only buffer x packets and cannot
starve off the other ports (picking x can be less than optimal, since
there are at least four pools a packet can sit in, and you don't get to
define a number for each, just a sum).  This also ensures you aren't
holding packets too terribly long (TCP should be doing its job
anyway...).
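The same toy pool model, now with the hard per-port cap described
above (the numbers are illustrative, not recommended values), shows the
hot port tail-dropping at its limit while the other ports stay serviced:

```python
# Toy shared TX buffer pool with a hard per-port queue limit x: the
# oversubscribed port tail-drops at x instead of eating the whole pool.

def fill_pool_capped(pool_size, arrivals, limit):
    """arrivals: ordered (port, packets) events while the links are
    blocked.  Each port may buffer at most `limit` packets."""
    buffered, free = {}, pool_size
    for port, count in arrivals:
        room = limit - buffered.get(port, 0)   # hard per-port cap
        take = min(count, room, free)
        buffered[port] = buffered.get(port, 0) + take
        free -= take
    return buffered

# With x = 128, the hot port drops its excess and ports 1-3 are fine:
print(fill_pool_capped(512, [(0, 500), (1, 10), (2, 10), (3, 10)],
                       limit=128))
# -> {0: 128, 1: 10, 2: 10, 3: 10}
```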

WRED is another option, and it is arguable whether it is any more
intelligent than a hard limit about WHICH packets get dropped.  WRED
requires more configuration, but it is a little more graceful since it
will start dropping sooner.  Hopefully this makes your traffic curve
closer to line rate than the saw-tooth you would expect to see with a
hard TX queue limit.
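The "starts dropping sooner" behavior comes from RED's ramp: drop
probability rises gradually between two thresholds on the average queue
depth instead of hitting one hard cliff.  A minimal sketch of the
textbook RED math (not Cisco's exact implementation, and with made-up
threshold values):

```python
# Textbook RED drop probability: zero below min_th, a linear ramp up to
# max_p between min_th and max_th, and drop-everything above max_th.

def wred_drop_prob(avg_q, min_th, max_th, max_p):
    """Probability of dropping an arriving packet, given the average
    (EWMA-smoothed) queue depth avg_q."""
    if avg_q < min_th:
        return 0.0                     # queue is shallow: no early drops
    if avg_q >= max_th:
        return 1.0                     # queue is full enough: drop all
    # Linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg_q - min_th) / (max_th - min_th)

for q in (10, 30, 50, 70):
    print(q, wred_drop_prob(q, min_th=20, max_th=60, max_p=0.1))
```

Because drops begin while the queue is still shallow, TCP senders back
off before the queue fills, which is why the throughput curve hugs line
rate rather than sawing between full and empty.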


-----Original Message-----
From: Neil J. McRae [mailto:neil at DOMINO.ORG] 
Sent: Tuesday, May 08, 2001 8:45 AM
To: Bill Thomas
Cc: nanog at
Subject: Re: Cisco DS3 performance specifications

The cards are engine 0 and there is no hardware switching - I've seen
issues on this card - one port has heavy load and the other ports
start dropping packets, WRED helped with this but not by much. I'd
be interested in your tests.


> To any and all in earshot,
> I am engaged in setting up a systems level test lab for the purposes
> of measuring, among other items, DS3 performance and interoperability
> between Cisco series and several new and emerging edge switch
> devices.
> In order to set a baseline for performance expectations, I am
> attempting to gather historical data on earlier testing.
> The nasty rumor I am hearing is that Cisco DS3 interfaces at best
> deliver 50-60% throughput.
> This is supposedly attributed to the way they apply their Traffic
> Shaping and Policing at the port level.
> I need to obtain documentation that either validates this, or
> presents correct information.
> Many of the edge devices involved in the testing support high levels
> of throughput even at the channelized subrate.  An increased need to
> buffer the data, to compensate for a potential mismatch in bandwidth
> capabilities, would just add to latency and sour the testing.
> Any and all information is valued and appreciated.
