multi-homing fixes

Roeland Meyer rmeyer at mhsc.com
Fri Aug 24 06:50:38 UTC 2001


|> From: Adam Rothschild [mailto:asr at latency.net]
|> Sent: Thursday, August 23, 2001 10:36 PM
|> 
|> On Thu, Aug 23, 2001 at 03:23:24PM -0700, Roeland Meyer wrote:

|> > At $99US for 512MB of PC133 RAM (the point is, RAM is disgustingly
|> > cheap and getting cheaper), more RAM in the routers is a quick
|> > answer. Router clusters are another answer, and faster CPUs are yet
|> > another.
|> 
|> Throwing more RAM and CPU into our routers (assuming for a 
|> moment that
|> they're most certainly all Linux PC's running Zebra) is not the
|> solution you're looking for; the problem of RIB processing still
|> remains.
|> 
|> Getting a forwarding table requires extracting data from the RIB, and
|> this is the problem, because RIBs are very large and active, and are
|> being accessed by lots of reading and writing processes.  RIB
|> processing is substantial, and is only getting worse.

SMP systems and multi-ported RAM are a good-enough stop-gap. If I didn't
dislike non-deterministic systems, I might suggest Echelon technologies
(hardware-based neural nets).
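
The RIB-to-FIB extraction Adam describes can be sketched in a few lines. This
is a toy model, not Zebra's actual code: the path attributes, the tie-break
rules, and the data layout are illustrative, and real BGP best-path selection
has many more steps.

```python
# Toy RIB-to-FIB extraction. The RIB holds every path heard for a prefix;
# the FIB keeps only the next hop of the selected best path.

def best_path(paths):
    # Simplified subset of the BGP decision process: prefer the highest
    # local-pref, then the shortest AS path.
    return max(paths, key=lambda p: (p["local_pref"], -len(p["as_path"])))

def build_fib(rib):
    # One pass over the whole RIB -- this is the work that has to be
    # redone (or incrementally maintained) as the RIB churns.
    return {prefix: best_path(paths)["next_hop"]
            for prefix, paths in rib.items()}

rib = {
    "192.0.2.0/24": [
        {"next_hop": "10.0.0.1", "local_pref": 100, "as_path": [65001, 65002]},
        {"next_hop": "10.0.0.2", "local_pref": 100, "as_path": [65003]},
    ],
}
print(build_fib(rib))  # {'192.0.2.0/24': '10.0.0.2'}
```

The point of the sketch is that every update to any path for a prefix forces
the selection to be re-run while readers and writers contend for the same
structure, which is why RAM alone doesn't make the problem go away.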

|> > If the IETF is being at all effective, that should start now and
|> > finish sometime next year, so that we can start the 5-year
|> > technology roll-out cycle.
|> 
|> Roeland, The IETF is eagerly awaiting your solution.  Send code.  See
|> Tony Li's presentation at the Atlanta NANOG on why this solution of
|> jamming RAM and CPU into boxes is not a long term viable answer:
|> 
|>   <http://www.nanog.org/mtg-0102/witt.html>

I've read that and largely agree. The hardware approach was only meant to
buy time while the geniuses at the IETF find a better approach. What I
don't agree with, and am amazed to see, is the admission that they don't
know at what point the convergence problem becomes intractable, or even
whether it does. That sounds more like a fundamental lack of understanding
of the algorithm itself.

|> In short, state growth at each level must be constrained and must not
|> outstrip Moore's law, and to be viable in an economic sense, it must
|> lag behind Moore's law. 
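
That constraint is easy to quantify with a back-of-the-envelope calculation.
The doubling periods below are assumptions for illustration (Moore's law taken
as capacity doubling every 18 months, routing state hypothetically doubling
every 12 months), not measured rates:

```python
# If routing state doubles faster than capacity per dollar, the relative
# cost of carrying the table grows without bound.

def cost_ratio(years, state_doubling=1.0, moore_doubling=1.5):
    # Ratio of table size to capacity after `years`, with each quantity
    # doubling once per its doubling period (in years).
    return 2 ** (years / state_doubling) / 2 ** (years / moore_doubling)

print(round(cost_ratio(10), 2))  # ~10x more expensive after a decade
```

With those assumed rates, the table is roughly ten times more expensive to
hold after ten years, which is exactly the "must lag behind Moore's law"
argument in numbers.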

In the mid-80's, I worked on an OCR problem involving an add-on 80186
processor card. We used a brute-force solution. It was too slow on the 8 MHz
CPU. Years later, with the advent of faster hardware, the product was
released. It's funny that the market timing was just about perfect; it gave
that company a huge head start when the market turned hot. It is all right to
target the performance/capacity expected to be present at the time of
product release (about 5 years from now). In fact, that's about the only way
I see the problem getting solved.
