largest OSPF core

Leo Bicknell bicknell at
Fri Sep 3 02:02:18 UTC 2010

In a message written on Thu, Sep 02, 2010 at 09:40:39PM -0400, Christian Martin wrote:
> The most interesting point to make, however, is how much legacy
> thinking in this area continues to be stranded in a rut that emerged
> 15 years ago.  It is  not uncommon to hear network folks cringe at
> the thought of an OSPF area exceeding 100 routers.  Really?  When
> simulations using testing tools show that properly tuned OSPF
> implementations (with ISPF, PRC, etc) comprised of 1000 routers
> can run full SPFs in 500 ms?

I do think a lot of the thinking is out of date.  I strongly agree
that all the references I know of about scaling are based on the
CPU and RAM limitations of devices made in the 1990's.  Heck, a
"branch" router today probably has more CPU than a backbone device
of that era.

The larger issue though is that as an industry we are imprecise.
If you talk about when a routing protocol /fails/, that is, when it
can't process the updates with the available CPU before the session
times out, you're probably talking about a network of 250,000 routers
on a low-end device.  Seriously, how large does a network need to
be to keep OSPF or ISIS CPU-busy for 10-20 seconds?  Huge!
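
To make that notion of "failure" concrete, here's a back-of-the-envelope
sketch.  Every constant in it is an illustrative assumption (per-node SPF
cost, linear scaling), not a measurement from any vendor:

```python
# Back-of-the-envelope: how big must a flat area get before a single
# full SPF outlasts the OSPF dead interval?  All constants here are
# illustrative assumptions, not vendor measurements.

SPF_US_PER_NODE = 150        # assumed per-node SPF cost on a low-end CPU
DEAD_INTERVAL_S = 40         # OSPF default dead interval

def spf_seconds(nodes: int, us_per_node: float = SPF_US_PER_NODE) -> float:
    """Crude linear model of full-SPF runtime for an area of `nodes` routers."""
    return nodes * us_per_node / 1_000_000

# Smallest (power-of-two) area where one full SPF exceeds the dead interval.
nodes = 1
while spf_seconds(nodes) < DEAD_INTERVAL_S:
    nodes *= 2
print(nodes)  # on these assumptions, over half a million routers
```

On those assumed numbers a 250,000-router area still finishes SPF inside
the dead interval, which is the point: "hard failure" is far away from any
real design.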

Rather, we have scaling rules based on vague, often unstated
assumptions.  One vendor publishes a white paper based on devices
running only the IGP and a stated convergence time of 500ms.  Another
will assume the IGP gets no more than 50% of the CPU and quote a
different convergence target entirely, so the numbers can't be compared.
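
The effect of the CPU-share assumption alone is easy to see with a toy
calculation (all numbers invented for illustration): identical hardware
can "pass" one vendor's guideline and "fail" the other's.

```python
# Same box, two test methodologies.  Numbers are invented for
# illustration, not taken from any vendor whitepaper.

SPF_MS_DEDICATED = 400          # assumed full-SPF time with 100% of the CPU

def convergence_ms(cpu_share: float, spf_ms: float = SPF_MS_DEDICATED) -> float:
    """SPF wall-clock time when the IGP only gets `cpu_share` of the CPU."""
    return spf_ms / cpu_share

vendor_a = convergence_ms(1.0)   # IGP-only box: 400 ms, "passes" a 500 ms target
vendor_b = convergence_ms(0.5)   # IGP capped at 50% CPU: 800 ms, "fails" it
print(vendor_a, vendor_b)
```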

Also, how many people have millisecond-converging IGPs, but routers
with old CPUs so BGP takes 3-5 MINUTES to converge?  Yes, for some
people a fast IGP is still worthwhile, if you have lots of internal
VoIP or other on-net traffic; but if 99% of your traffic is off-net
it really doesn't matter, you're waiting on BGP.
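
A quick traffic-weighted average makes the point (50 ms IGP and 4-minute
BGP are assumed figures in the spirit of the numbers above):

```python
# If 99% of traffic is off-net, the traffic-weighted restoration time
# is dominated by BGP no matter how fast the IGP is.  The 50 ms and
# 4-minute figures are illustrative assumptions.

igp_s, bgp_s = 0.05, 240.0      # IGP vs BGP convergence time, seconds
off_net = 0.99                  # fraction of traffic leaving the network

weighted = (1 - off_net) * igp_s + off_net * bgp_s
print(round(weighted, 1))  # ~237.6 s: effectively just the BGP number
```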

Lastly, the largest myth I see in IGP design is that you can't
redistribute connected or static routes into your IGP, that those
belong in BGP so the IGP only has to deal with loopbacks.  As far
as I understand, the computational complexity of OSPF and IS-IS
depends solely on the number of links running the protocol, so
having these routes in or out makes no difference in that sense.
It does increase the amount of data the IGPs have to flood, which
slows them a bit, but with modern link speeds and CPUs it's really
a non-issue.
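
That claim can be illustrated with a toy SPF.  In Dijkstra's algorithm
the dominant cost is relaxing edges of the router/link graph; prefixes
hung off a router (connected, static) are leaf data resolved after the
tree is built.  This is a sketch of the principle, not any real OSPF or
IS-IS implementation:

```python
import heapq
from collections import defaultdict

# Toy link-state SPF: Dijkstra over the router/link graph.
# Redistributed connected/static routes are modeled as leaf prefixes
# attached to a router; they are resolved AFTER SPF, so they never
# enter the graph search.  A sketch, not a real implementation.

def spf(links, root):
    """links: [(router_a, router_b, cost), ...] undirected.
    Returns (distance-from-root dict, number of edge relaxations)."""
    graph = defaultdict(list)
    for a, b, cost in links:
        graph[a].append((b, cost))
        graph[b].append((a, cost))
    dist = {root: 0}
    heap = [(0, root)]
    relaxations = 0
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph[node]:
            relaxations += 1             # the dominant, link-driven work
            if d + cost < dist.get(nbr, float("inf")):
                dist[nbr] = d + cost
                heapq.heappush(heap, (d + cost, nbr))
    return dist, relaxations

# 4-router square topology.
links = [("r1", "r2", 1), ("r2", "r3", 1), ("r3", "r4", 1), ("r4", "r1", 1)]
dist, work = spf(links, "r1")

# Hanging 10,000 redistributed prefixes off r3 adds flooding and memory
# load, but the SPF itself is untouched: each prefix just inherits
# dist["r3"] in a post-SPF lookup.
prefixes = {f"10.0.{i // 256}.{i % 256}/32": "r3" for i in range(10_000)}
routes = {p: dist[r] for p, r in prefixes.items()}
_, work_again = spf(links, "r1")
print(work == work_again)  # prefix count doesn't change SPF cost
```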

I'm not saying it's "smart" to redistribute connected and static
routes; it really does depend on your environment.  However, there
seem to be a lot of folks who automatically assume the network is
broken if it has such things in the IGP, and that's just silly.
Plenty of networks carry that data in the IGP and deliver excellent
routing performance.
Fortunately we've gotten to the point where 95% of networks don't
have to worry about these things; it works however you want to do
it.  However, for the 5% who do need to care, almost none of them
have engineers who actually understand the programming behind the
protocols.  How many network architects could write an OSPF
implementation, or explain their box's internal architecture?

       Leo Bicknell - bicknell at - CCIE 3440
        PGP keys at

More information about the NANOG mailing list