Interface error_rates question

John Kristoff jtk at
Fri Jan 4 01:32:36 UTC 2002

On Thu, Jan 03, 2002 at 08:55:13AM -0500, Holmes, Daniel wrote:
> Many tools currently calculate 1% and above (so if you have .9% it is displayed as 0%) but it is quite possible users may want to measure fractional percentages as well. 
> Does anyone have any opinions/preferences based on your current experience?

This is a good question.  First, however: what is the time frame you're
measuring?  If it is short, fractions of a percent may matter.  If it
is long, a burst of errors due to a fairly severe event may appear to
be spread out.
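A small sketch of that averaging effect, using invented numbers: the same one-minute burst looks dramatic at short intervals but nearly vanishes in a 30-minute average.

```python
# Hypothetical counters: 100k packets/minute for 30 minutes, with a
# single one-minute burst of 5,000 errored frames.
packets_per_min = 100_000
minutes = 30
errors = [0] * minutes
errors[10] = 5_000  # the burst minute

# Per-minute error rate: the burst minute shows 5%.
per_min = [e / packets_per_min * 100 for e in errors]
print(max(per_min))  # 5.0

# Whole-interval rate: the same burst averages to ~0.17%, which a
# tool that truncates below 1% would display as 0%.
overall = sum(errors) / (packets_per_min * minutes) * 100
print(round(overall, 2))  # 0.17
```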

> Does it matter the type of interface you are managing (Ethernet, serial, etc.)?

Absolutely.  It also depends on the media and the properties of the
link path.  I don't think there is a medium available that is
infinitely reliable, so you're bound to have at least a few bit errors
somewhere.

It also depends on the protocols used.  With SNA, for example, even
small amounts of errors would matter.  Or, to use a more modern
example, for applications using UDP with no form of upper-layer
recovery, even low error rates may significantly decrease overall
performance.
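To put a rough number on the UDP point (the loss rate and message size below are invented): if per-packet loss is independent, a message spanning n packets arrives intact with probability (1-p)^n, so even "fractional" loss rates compound quickly with no recovery layer.

```python
# Sketch: effect of a sub-1% loss rate on an application message
# that spans many packets and has no retransmission.
p = 0.005   # 0.5% per-packet loss -- displayed as "0%" by 1%-granular tools
n = 100     # packets per application message (invented figure)

# Probability the whole message arrives with no loss.
intact = (1 - p) ** n
print(round(intact, 2))  # 0.61 -- so ~39% of messages are damaged
```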

Probably more important than the instantaneous measure of errors would
be the rate of change over time.
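A minimal sketch of watching the change rather than the level, assuming a poller that reads a cumulative interface error counter (the readings below are made up; a real poller would fetch something like IF-MIB's ifInErrors via SNMP):

```python
# Successive readings of a cumulative error counter, one per poll.
samples = [10_000, 10_002, 10_005, 10_250, 10_800]

# Errors accrued in each polling interval.
deltas = [b - a for a, b in zip(samples, samples[1:])]
print(deltas)  # [2, 3, 245, 550]

# A growing delta (errors accelerating) is more alarming than any
# single absolute reading.
accelerating = all(b >= a for a, b in zip(deltas, deltas[1:]))
print(accelerating)  # True
```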

Modern, reliable links should generate relatively few errors, but
the rate won't be zero.  It probably wouldn't hurt to go to at least
two decimal places for modern links (my guess).  If nothing else, it
will help create a more accurate baseline of error rates.  If the
management apps don't do that, perhaps they should at least tell you
the total number of errors over the total number of bits, bytes, or
packets.
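The ratio idea above, sketched out: report errors over total frames to two decimal places instead of truncating everything below 1% to zero. The variable names echo IF-MIB counter names, but the values are invented.

```python
# Hypothetical counter snapshot for one interface.
if_in_errors = 412          # errored frames
if_in_ucast_pkts = 1_750_000  # good frames

# Error rate as a percentage of all frames seen, to two decimals.
rate_pct = if_in_errors / (if_in_ucast_pkts + if_in_errors) * 100
print(f"{rate_pct:.2f}%")  # 0.02% -- nonzero, unlike a 1%-granular display
```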


More information about the NANOG mailing list