Link capacity upgrade threshold

Richard A Steenbergen ras at
Sun Aug 30 13:46:53 CDT 2009

On Sun, Aug 30, 2009 at 01:03:35PM -0400, Patrick W. Gilmore wrote:
> >Also, a gig link on a Cisco will do approx 93-94% of imix of a gig  
> >in the values presented via SNMP (around 930-940 megabit/s as seen  
> >in "show int") before it's full, because of IFG, ethernet header  
> >overhead etc.
> I've heard this said many times.  I've also seen 'sho int' say  
> 950,000,000 bits/sec and not see packets get dropped.  I was under the  
> impression "show int" showed -every- byte leaving the interface.  I  
> could make an argument that IFG would not be included, but things like  
> ethernet headers better be.
> Does this change between IOS revisions, or hardware, or is it old  
> info, or ... what?

Actually, Cisco does count layer 2 header overhead in its SNMP and show
int results; it is Juniper who does not (on most platforms, at any rate),
due to their hardware architecture. I did some tests on this a while back
on j-nsp, and you'll see different results for different platforms, and
depending on whether you're looking at the tx or rx side. You'll also see
different results for VLAN overhead and the like, which can further
complicate things.
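To make the earlier 93-94% figure concrete: counters that include the L2 header and FCS still miss the 8-byte preamble/SFD and the 12-byte inter-frame gap, so a saturated gig link reads below 1000 Mbit/s. A rough sketch, assuming the classic "simple IMIX" weights (7:4:1 of 40/576/1500-byte IP packets) -- those weights are my assumption, not anything from the thread:

```python
# Sketch: why a "full" gig link reads ~94% in counters that include
# L2 headers but not preamble or inter-frame gap. The IMIX mix below
# (7:4:1 of 40/576/1500-byte IP packets) is an assumed example.

PREAMBLE_SFD = 8   # bytes on the wire before every frame
IFG = 12           # minimum inter-frame gap, in byte times
L2_OVERHEAD = 18   # 14-byte Ethernet header + 4-byte FCS
MIN_FRAME = 64     # small packets are padded up to this

def l2_frame(ip_bytes):
    """L2 frame size (what header-inclusive counters see) for an IP packet."""
    return max(ip_bytes + L2_OVERHEAD, MIN_FRAME)

def wire_bytes(ip_bytes):
    """Bytes actually consumed on the wire, including preamble and IFG."""
    return l2_frame(ip_bytes) + PREAMBLE_SFD + IFG

imix = [(40, 7), (576, 4), (1500, 1)]  # (IP packet size, weight)

counted = sum(l2_frame(size) * weight for size, weight in imix)
on_wire = sum(wire_bytes(size) * weight for size, weight in imix)

ratio = counted / on_wire
print(f"counter rate at line rate: {ratio * 1000:.0f} Mbit/s on a gig link")
```

With this mix the ratio comes out around 0.95, i.e. the link is full while the counters show roughly 940-950 Mbit/s; a Juniper-style counter that also omits the L2 header would read lower still.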

That said, "show int" is an epic disaster a significant percentage of the
time. I've seen more bugs and false readings on that thing than I can
possibly count, so you really shouldn't rely on it for rate readings. The
problem is extra special bad on SVIs, where you might see a reading that
is 20% high or low of reality at any given second, even on modern code.
I'm not aware of any major issues detecting drops, though, so you should
at least be able to detect them when they happen (which isn't always at
line rate). If you're on a 6500/7600 platform running anything SXF+, try
"show platform hardware capacity interface" to look for interfaces with
lots of drops globally.
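The usual alternative to trusting the "show int" load averages is to poll the 64-bit octet counters (ifHCOutOctets/ifHCInOctets) yourself and compute the rate from the delta. A minimal sketch -- the counter values here are made up for illustration, and a real poller would do an SNMP GET instead of hardcoding samples:

```python
# Rate from two polls of a 64-bit SNMP octet counter (e.g. ifHCOutOctets),
# rather than trusting the device's own load-average display. The two
# sample values below are hypothetical, standing in for real SNMP GETs.

COUNTER_MAX = 2 ** 64  # ifHCOutOctets is a 64-bit counter

def rate_bps(octets_t0, octets_t1, seconds):
    """Bits/sec from two counter samples, tolerating one wraparound."""
    delta = (octets_t1 - octets_t0) % COUNTER_MAX
    return delta * 8 / seconds

# Two hypothetical samples taken 30 seconds apart:
t0, t1 = 123_456_789_000, 123_460_164_000
print(f"{rate_bps(t0, t1, 30) / 1e6:.3f} Mbit/s")
```

Averaging over your own known interval also sidesteps the device's configured load-interval smoothing, which is part of why the on-box readings can disagree with reality second to second.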

Richard A Steenbergen <ras at>
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
