virtual aggregation in IETF
joelja at bogus.com
Sun Jul 20 18:22:41 UTC 2008
Paul Francis wrote:
> So, if I get you right, you are saying that edge routers have fewer CPU
> requirements, and so ISPs can get away with software routers and don't care
> about FIB.
"ISPs that can get away with software routers" and also multihomed edge
networks. The costs associated with multihoming aren't evenly
distributed; the entities most likely to get squeezed are in the middle
of the ecosystem.
> At the same time, folks in the middle are saying that in any
> event they need to buy high-end routers, and so can afford to buy enough
> hardware FIB so they also don't care (much) about FIB growth.
They care, but you weren't buying 2-million-entry CAM routers a year ago
to deal with the growth of the DFZ. They were bought because operators
needed, or would fairly shortly need, a million or more internal routes,
which says a lot about the complexity of a large ISP. Assuming DFZ growth
continues to fit the curve, it's pretty easy to figure out when you'll
need enough FIB to support 500k DFZ entries, just as it was when we did
the FIB BoF at NANOG 39...
That's not to say that FIB compression is undesirable; that approach has
real benefits in extending the useful life of a given platform. But there's
very little about the current situation that is unexpected, or intractable.
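For what it's worth, that projection is easy to sketch. The table size
and growth rate below are illustrative assumptions, not measurements;
plug in whatever the current curve actually shows:

```python
# Back-of-envelope DFZ growth projection.
# All numbers are illustrative assumptions, not measured data.
current_routes = 260_000   # assumed DFZ size today
growth_per_year = 50_000   # assumed net new routes per year (roughly linear)
target_fib = 500_000       # FIB capacity being planned for

years_until_full = (target_fib - current_routes) / growth_per_year
print(f"~{years_until_full:.1f} years until {target_fib:,} DFZ entries")
```

With those (assumed) inputs you get a horizon of a few years, which is
exactly the kind of planning window the FIB BoF discussion was about.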
> Are there any folks for whom this statement isn't working?
>> -----Original Message-----
>> From: Joel Jaeggli [mailto:joelja at bogus.com]
>> Sent: Sunday, July 20, 2008 1:02 PM
>> To: Adrian Chadd
>> Cc: nanog at nanog.org
>> Subject: Re: virtual aggregation in IETF
>> Adrian Chadd wrote:
>>> On Sun, Jul 20, 2008, Joel Jaeggli wrote:
>>>> Not saying that they couldn't benefit from it; however, on one hand
>>>> you have a device with a 36 Mbit CAM, and on the other, one with 2 GB
>>>> of RAM. Which one fills up first?
>>> Well, the actual data point you should look at is "160k-odd FIB from
>>> a couple of years ago can fit in under 2 megabytes of memory."
>>> The random fetch time for dynamic RAM is pretty shocking compared to
>>> cache access time, and you probably want to arrange your FIB to play
>>> well with your cache.
>>> It's nice that the higher-end CPUs have megabytes and megabytes of L2,
>>> but placing a high-end Xeon on each of your interface processors is
>>> asking a lot. So there's still room for optimising for sensibly-
>> If you're putting it on a line card it's probably more like a Raza XLR:
>> more memory bandwidth and less CPU relative to, say, the Intel
>> architecture. That said, I think you're headed to the high end again.
>> It has been routinely posited that FIB growth hurts the people on the
>> edge more than those in the center. I don't buy that, for the reason
>> outlined in my original response. If my pps requirements are moderate,
>> my software router can carry a FIB of effectively arbitrary size at a
>> lower cost than carrying the same FIB in CAM.
>>> Of course, -my- applied CPU-cache clue comes from the act of parsing
>>> HTTP requests/replies, not from building FIBs. I'm just going off the
>>> papers I've read on the subject. :)