High throughput BGP links using Gentoo + stripped kernel
philfagan at gmail.com
Sun May 19 17:34:59 UTC 2013
On May 19, 2013 10:20 AM, "Nick Khamis" <symack at gmail.com> wrote:
> On 5/19/13, Zachary Giles <zgiles at gmail.com> wrote:
> > I had two Dell R3xx 1U servers with quad GigE cards in them and a few
> > BGP connections for a few years. They were running CentOS 5 + Quagga with
> > a bunch of stuff turned off. Worked extremely well. We also had really
> > light traffic back then.
> > Server hardware has become amazingly fast under the covers these days. It
> > certainly still can't match an ASIC-based solution from Cisco etc., but
> > it should be able to push several Gb/s of traffic.
> > In HPC storage applications, for example, we have multiple servers with
> > quad 40GigE and IB pushing ~40 GB/s of traffic in fairly large blocks. It's
> > a different workload than routing, but it does demonstrate pushing data
> > into daemon applications and back down to the kernel at high rates.
> > Certainly a kernel routing table with no iptables and a small Quagga
> > process in the background can push similar rates.
> > In other words, get new hardware and design it to flow.
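For reference, the "bunch of stuff turned off" above is not specified in the original post; a minimal sketch of the kernel knobs commonly tuned on a Linux software router might look like the fragment below (illustrative values only, not the poster's actual configuration):

```
# /etc/sysctl.conf -- common knobs for a Linux software router (sketch)
net.ipv4.ip_forward = 1               # enable IPv4 forwarding
net.ipv4.conf.all.rp_filter = 0       # relax reverse-path filtering for asymmetric BGP paths
net.core.netdev_max_backlog = 250000  # deeper ingress queue per CPU
net.core.rmem_max = 16777216          # raise socket buffer ceilings for BGP sessions
net.core.wmem_max = 16777216
```

Appropriate values depend heavily on NIC, kernel version, and traffic mix, so treat these as starting points to benchmark, not targets.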
> What we are having a hard time with right now is finding that
> "perfect" setup without going the whitebox route. For example, the
> x3250 M4 has one PCIe gen 3 x8 full-length slot (great!) and one gen 2
> x4 (not so good...). The ideal in our case would be a newish System x
> server with two full-length gen 3 x8 or even x16 slots in a nice 1U form
> factor, humming along and able to handle up to 64 GT/s of traffic,
> firewall and NAT rules included.
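As a sanity check on those slot numbers: GT/s is the raw per-lane transfer rate, so usable payload bandwidth is lower after line-coding overhead (8b/10b for gen 2, 128b/130b for gen 3, per the published PCIe specs). A quick back-of-the-envelope calculation:

```python
# Rough one-direction PCIe payload bandwidth, from the per-lane raw
# rates and encoding efficiencies published in the PCIe 2.x/3.x specs.

def pcie_bandwidth_gbytes(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a PCIe link."""
    raw_gt_per_lane = {2: 5.0, 3: 8.0}[gen]    # GT/s per lane
    encoding = {2: 8 / 10, 3: 128 / 130}[gen]  # line-coding efficiency
    gbits = raw_gt_per_lane * lanes * encoding
    return gbits / 8                           # bits -> bytes

print(round(pcie_bandwidth_gbytes(3, 8), 2))  # gen 3 x8 slot
print(round(pcie_bandwidth_gbytes(2, 4), 2))  # gen 2 x4 slot
```

So the gen 3 x8 slot tops out near 7.9 GB/s of payload each way, while the gen 2 x4 slot manages only about 2 GB/s, which is why the second slot is the bottleneck for a quad-port NIC.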
> Hope this is not considered noise on an old problem; any help is
> greatly appreciated, and we will keep everyone posted on the final
> numbers post-upgrade.