scaling linux-based router hardware recommendations

Philip disordr at gmail.com
Wed Jan 28 22:58:05 UTC 2015


I recently built a pair of Linux-based "routers" to handle full BGP tables
from three upstream providers over 10gig links.
I had penguincomputing.com build me two reasonably powerful servers (dual
hex-core Xeon processors) with Solarflare
<http://solarflare.com/1040GbE-Flareon-Server-IO-Adapters> NICs. (I didn't
get a chance to play with OpenOnload before moving on to a new opportunity.)
Rudimentary testing with iperf showed I could saturate a 10gig link with
minimal system load.
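
Something along these lines is what I mean by rudimentary iperf testing
(the address and option values are just illustrative, not the exact
commands we ran):

    # on one router, run the iperf server
    iperf -s

    # on the other end, push several parallel TCP streams for a minute
    iperf -c 10.0.0.1 -P 4 -t 60

    # meanwhile, watch per-core CPU and softirq load
    mpstat -P ALL 1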

With real-world traffic, the limits showed up once packet rates climbed
into the several-hundred-thousand-per-second range. However, this was
largely because these "routers" were also doing firewall / NAT duty
(iptables), load balancing (haproxy), and VPN termination (openvpn), on
top of eBGP routing (quagga) and internal OSPF route propagation (also
quagga).
Interrupt handling / system load only became a problem when our Hadoop
cluster (200+ nodes) started hammering AWS S3; otherwise things ran
pretty well.
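
For anyone hitting the same wall: the interrupt/softirq load is easy to
see from the shell, and spreading the NIC queues across cores is the
usual first fix. Roughly this sort of thing (the interface name, IRQ
number and CPU masks below are just examples, not our exact setup):

    # which cores are taking the NIC interrupts
    grep eth0 /proc/interrupts

    # per-core softirq counters over time
    watch -n 1 cat /proc/softirqs

    # pin one NIC queue's IRQ to a specific core (IRQ 45 -> CPU 2 here)
    echo 4 > /proc/irq/45/smp_affinity

    # or let the kernel spread receive processing across cores (RPS),
    # example mask for 8 cores
    echo ff > /sys/class/net/eth0/queues/rx-0/rps_cpus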

The systems, configurations and software were pretty much just hacked
together by me. Ideally we would have bought Juniper / Cisco gear, but my
budget of $50K wouldn't even buy half a router after my vendors were done
quoting me the real stuff.
I ended up spending ~$15K to build this solution. I'm not a networking
person, just a Linux hack, but I was able to get this solution working
reliably.
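
To give a sense of how simple the "hacked together" part really was, the
eBGP side in quagga was not much more than this kind of skeleton (the
ASNs, addresses and prefix below are made up, not our real config):

    ! /etc/quagga/bgpd.conf -- minimal single-upstream skeleton
    router bgp 64512
     bgp router-id 192.0.2.1
     neighbor 198.51.100.1 remote-as 64600
     neighbor 198.51.100.1 description upstream-1
     network 203.0.113.0/24
    !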

-Philip

On Mon, Jan 26, 2015 at 2:53 PM, micah anderson <micah at riseup.net> wrote:

>
> Hi,
>
> I know that specially programmed ASICs on dedicated hardware like Cisco,
> Juniper, etc. are going to always outperform a general purpose server
> running gnu/linux, *bsd... but I find the idea of trying to use
> proprietary, NSA-backdoored devices difficult to accept, especially when
> I don't have the budget for it.
>
> I've noticed that even with a relatively modern system (Supermicro with
> a 4-core 1265LV2 CPU with a 9MB cache, Intel E1G44HTBLK server
> adapters, and 16GB of RAM), you still tend to get a high percentage of
> time spent on softirqs on all the CPUs when pps reaches somewhere
> around 60-70k and the traffic approaches 600-900mbit/sec (during a
> DDoS, such hardware typically cannot cope).
>
> It seems like finding hardware more optimized for very high
> packet-per-second counts would be a good thing to do. I just have no
> idea what is out there that could meet these goals. I'm unsure whether
> the answer is faster CPUs, more CPUs, different networking cards, or
> just plain old-fashioned tuning.
>
> Any ideas or suggestions would be welcome!
> micah
>
>
