No one behind the wheel at WorldCom

Phil Rosenthal pr at isprime.com
Tue Jul 16 06:10:05 UTC 2002


I've found that a regex longer than about 200 characters with the
format ^1_(2|3|4|5)$ (say, 20 different AS numbers in the parentheses)
can easily crash a BigIron running the latest code.

If you were to set up a filter that only accepted updates matching
^customer_(d1|d2|d3)$ (where d1, d2, d3 are the customer's downstream
ASes), it would choke on a fairly large peer...
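
For concreteness, a filter of that shape can be mocked up in a few
lines of Python (the AS numbers below are made up; this illustrates
only the pattern's shape and size, not Foundry's regex engine):

    import re

    # Hypothetical origin AS 1 followed by one of ~20 downstream ASNs.
    downstream_asns = [str(64512 + i) for i in range(20)]  # made-up ASNs
    pattern = r"^1_(%s)$" % "|".join(downstream_asns)
    print(len(pattern))  # grows linearly with the ASN count; ~125 here

    # AS paths are conventionally matched as "_"-separated strings.
    print(bool(re.match(pattern, "1_64517")))  # True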

Don't know how the other vendors handle it.

I reported this to Foundry a few weeks ago; no fix as of yet (and I
doubt there will be).

--Phil

-----Original Message-----
From: owner-nanog at merit.edu [mailto:owner-nanog at merit.edu] On Behalf Of
Pedro R Marques
Sent: Tuesday, July 16, 2002 2:44 AM
To: msa at samurai.sfo.dead-dog.com
Cc: nanog at merit.edu
Subject: Re: No one behind the wheel at WorldCom



msa at samurai.sfo.dead-dog.com (Majdi S. Abbas) writes:

 >	Actually, I think you'll find that bad data is only a small part
 > of the problem; even with good data, there isn't enough support from
 > various router vendors to make it worthwhile; it's effectively
 > impossible to prefix filter a large peer due to router software
 > restrictions.  We need support for very large (256k+ to be safe)
 > prefix filters, and the routing process performance to actually
 > handle a prefix list this large, and not just one list, but many.
 >
 > 	IRR support for automagically building these prefix lists would
 > be a real plus too.  Building and then pushing out filters on another
 > machine can be quite time consuming, especially for a large network.

 From the point of view of routing software, the major challenge in
handling a 256k prefix list is not actually applying it to the received
prefixes. The most popular BGP implementations all, to my knowledge,
use prefix filtering algorithms that are O(log2(N)), which probably
scales OK... and while it would not be very hard to make this an O(4)
algorithm (i.e., a small, fixed number of steps), that is probably not
the issue.
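
As a rough sketch of what an O(log2(N)) filter match looks like (not
any vendor's actual code; exact-match only, ignoring ge/le ranges):

    import bisect

    def key(prefix, plen):
        # Encode a dotted-quad prefix as a comparable (int, len) pair.
        a, b, c, d = (int(x) for x in prefix.split("."))
        return ((a << 24) | (b << 16) | (c << 8) | d, plen)

    # Hypothetical filter contents, sorted once at configuration time.
    entries = sorted(key(p, l) for p, l in
                     [("10.0.0.0", 8), ("192.0.2.0", 24),
                      ("198.51.100.0", 24)])

    def permitted(prefix, plen):
        # Binary search: O(log2(N)) per received prefix.
        i = bisect.bisect_left(entries, key(prefix, plen))
        return i < len(entries) and entries[i] == key(prefix, plen)

    print(permitted("192.0.2.0", 24))   # True
    print(permitted("192.0.2.0", 25))   # False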

Implementations always have to do an O(log2(N)) lookup on the routing
table for each received prefix anyway, and AFAIK that is not a
performance problem for anyone.

What all the implementations I'm familiar with do have a problem with
is actually accepting a configuration of 256k lines of text to use as a
filter. Configuration parsing is typically not designed for input on
that scale... it tends to work with the major vendors, albeit a bit
slowly.
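
To get a feel for the volume (Cisco-style prefix-list syntax used
purely for illustration; the list name and addresses are made up):

    # Generate a 256k-entry prefix list as configuration text.
    def quad(n):
        return "%d.%d.%d.%d" % ((n >> 24) & 255, (n >> 16) & 255,
                                (n >> 8) & 255, n & 255)

    lines = ["ip prefix-list PEER-IN seq %d permit %s/24"
             % (i + 1, quad((i + 1) << 8))
             for i in range(256 * 1024)]
    config = "\n".join(lines)
    print("%d lines, %.1f MB of text to tokenize line by line"
          % (len(lines), len(config) / 1e6))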

If the above disagrees with your experience please let me know.

Assuming that the bottleneck is in fact parsing the configuration, the
question is what to do about it...

I'm sure all vendors would be able, given enough incentive, to optimize
their text parsing code to do this faster... but that raises the
question: would you actually fix anything by doing that?

My inclination is to think that you would just move the bottleneck to
the backend systems managing the configuration of such lists, if it
isn't there already.

Of course, I'm completely ignorant of the backends most of you use to
manage your systems, and the above is just uneducated guessing; I would
appreciate further education.

I would be inclined to agree with your statement that the major blame
should lie with the "router vendors" if you see your router vendor as
someone who sells you the network elements plus the NMS to manage them.

But my guesstimate is that the focal point of our search for a culprit
should be the NMS, or the NMS -> router management mechanism. Ideally
the latter should be more computer-friendly than text parsing.
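
As one data point for that argument (a sketch, assuming "computer
friendly" means fixed-size binary records rather than CLI text): the
same 256k-entry list packs into about 1.3 MB that needs no parsing at
all:

    import struct

    # Pack each (address, mask-length) entry as 5 bytes: "!IB".
    entries = [((i + 1) << 8, 24) for i in range(256 * 1024)]
    blob = b"".join(struct.pack("!IB", addr, plen)
                    for addr, plen in entries)
    print(len(blob))  # 1,310,720 bytes vs. 10+ MB of config text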

Just an attempt to equally and democratically distribute blame around
:-)

regards,
   Pedro.
