High throughput BGP links using Gentoo + stripped kernel

Andre Tomt andre-nanog at tomt.net
Sun May 19 14:01:27 UTC 2013


On 18 May 2013 17:39, Nick Khamis wrote:
> Hello Everyone,
>
> We are running:
>
> Gentoo Server on Dual Core Intel Xeon 3060, 2 Gb Ram
> Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet
> Controller (rev 06)
> Ethernet controller: Intel Corporation 82573E Gigabit Ethernet
> Controller (rev 03)
>
> 2 bgp links from different providers using quagga, iptables etc....
>
> We are transmitting an average of 700Mbps with packet sizes upwards of
> 900-1000 bytes when the traffic graph begins to flatten. We also start
> experiencing some crashes at that point, and have not been able to
> pinpoint the cause of that either.
>
> I was hoping to get some feedback on what else we can strip from the
> kernel. If you have a similar setup for a stable platform the .config
> would be great!
>
> Also, what are your thoughts on migrating to OpenBSD and bgpd? Not
> sure if there would be a performance increase, but would the security
> be even stronger?

This is fairly ancient hardware, so what you can get out of it will be 
limited. Gigabit should not be impossible, though.

The usual trick is to make sure netfilter is not loaded, especially the 
conntrack/NAT based parts, as those will inspect every flow for state 
information. Either make sure those parts are compiled out of the 
kernel or that the modules/code never load.
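On a self-built kernel like Gentoo's that mostly means leaving the 
relevant options unset; a rough sketch (option names as in mainline 
kernels of that vintage, adjust to your tree):

  # .config - leave connection tracking and NAT out entirely
  # CONFIG_NF_CONNTRACK is not set
  # CONFIG_NF_NAT is not set

  # on a running box, check nothing conntrack-related snuck in
  lsmod | grep -i conntrack

  # and keep it from autoloading if it was built as a module
  echo "blacklist nf_conntrack" >> /etc/modprobe.d/blacklist.conf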

If you have any iptables/netfilter rules, make sure they are 1) 
stateless and 2) properly organized (you can't just throw everything 
into FORWARD and expect it to perform); see the sketch below.
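Something along these lines, purely as an illustration (interface names 
and addresses are made up):

  # raw table runs before conntrack - skip tracking for forwarded traffic
  iptables -t raw -A PREROUTING -j NOTRACK

  # split FORWARD into per-interface chains so packets only walk the
  # rules that apply to them
  iptables -N fwd-eth0
  iptables -N fwd-eth1
  iptables -A FORWARD -i eth0 -j fwd-eth0
  iptables -A FORWARD -i eth1 -j fwd-eth1

  # stateless rules only - no "-m state" / "-m conntrack" matches
  iptables -A fwd-eth0 -s 10.0.0.0/8 -j DROP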

You could try setting IRQ affinity so both ports run on the same core; 
however, I'm not sure that will help much as it's still the same cache 
and distance to memory. On modern NICs you can do tricks like tying the 
RX of port 1 to the TX of port 2. Probably not on that generation, though.
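Pinning is done through /proc; for example (the IRQ numbers here are 
placeholders, look yours up in /proc/interrupts first):

  # find the IRQs for each port
  grep eth /proc/interrupts

  # pin both (say IRQs 40 and 41) to CPU0 (bitmask 0x1)
  echo 1 > /proc/irq/40/smp_affinity
  echo 1 > /proc/irq/41/smp_affinity

  # make sure irqbalance isn't running and moving them back
  /etc/init.d/irqbalance stop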

The 82571EB and 82573E are, while old, PCIe hardware, so there should 
not be any PCI bottlenecks, even with you having to bounce off that 
stone-age FSB the old CPU has. I'm not sure how well that generation of 
Intel NIC silicon does line rate, though.

But really you should get some newer hardware with on-CPU PCIe and 
memory controllers (and preferably QPI). That architectural jump really 
upped the networking throughput of commodity hardware, probably by 
orders of magnitude (people were doing 40Gbps routing using standard 
Linux 5 years ago).

I'm curious about the vmstat output during saturation, and the kernel 
version too. IPv4 routing changed significantly in recent kernels, and 
IPv6 routing performance also improved somewhat.
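Roughly what I'd like to see (mpstat is from the sysstat package, if 
you have it installed):

  # a few samples while the graph is flattening
  vmstat 1 10
  uname -r

  # per-CPU interrupt/softirq load is also interesting
  mpstat -P ALL 1 5
  cat /proc/interrupts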




