The Making of a Router

Ray Soucy rps at
Thu Dec 26 19:01:01 UTC 2013

You can build a router using commodity hardware and get pretty good results.

I've had really good luck with Supermicro whitebox hardware, and
Intel-based network cards.  The "Hot Lava Systems" cards have a nice
selection for a decent price if you're looking for SFP and SFP+ cards that
use Intel chipsets.

There might be some benefits in going with something like FreeBSD, but I
find that Linux has a lot more eyeballs on it making it much easier to
develop for, troubleshoot, and support.  There are a few options if you
want to go the Linux route.

Option 1: Roll your own OS.  This takes quite a bit of effort, but if you
have the talent to do it you can generally get exactly what you want.

Option 2: Use an established distribution.

Vyatta doesn't seem to be doing much with its FOSS release "Vyatta Core"
anymore, but the community has forked the GPL parts into "VyOS".  I've been
watching them pretty closely and helping out where I can; I think the
project is going to win over a lot of people over the next few years.

The biggest point of failure I've experienced with Linux-based routers on
whitebox hardware has been HDD failure.  Other than that, the 100+ units
I've had deployed over the past 3+ years have been pretty much flawless.

Thankfully, they currently run an in-memory OS, so a disk failure only
affects logging.

If you want to build your own OS, I'll shamelessly plug a side project of
mine: RAMBOOT.

RAMBOOT makes use of the Ubuntu Core rootfs, and a modified boot process
(added into initramfs tools, so kernel updates generate the right kernel
automatically).  Essentially, I use a kernel ramdisk instead of an HDD for
the root filesystem and "/" is mounted on "/dev/ram1".

The bootflash can be removed while the system is running as it's only
mounted to save system configuration or update the OS.
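The boot flow can be sketched roughly like this (illustrative pseudocode in
shell style, not the actual RAMBOOT scripts; the device names and paths are
assumptions):

```
# Inside the initramfs, before handing off to init:
mke2fs -q /dev/ram1                    # format the kernel ramdisk
mount /dev/ram1 /ram                   # this becomes the root filesystem
mount -o ro /dev/sda1 /flash           # bootflash holding the rootfs image
tar -xzf /flash/rootfs.tar.gz -C /ram  # unpack the Ubuntu Core rootfs into RAM
umount /flash                          # flash no longer needed; it can even
                                       # be pulled once the system is up
exec switch_root /ram /sbin/init       # pivot to the in-memory root
```

Since "/" lives entirely in RAM after the switch_root, nothing keeps a
handle on the flash device, which is what makes it safe to remove at runtime.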

I haven't polished it up much, but there is enough there to get going
pretty quickly.

You'll also want to pay attention to the settings you use for the kernel.
Linux is tuned as a desktop or server, not a router, so there are some
basics you should take care of (like disabling ICMP redirects, increasing
the ARP table size, etc).
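As a sketch of those basics, the corresponding sysctl knobs look something
like this (a fragment for /etc/sysctl.conf; the exact gc_thresh values are
illustrative, so size them for your own environment):

```
# A router shouldn't send or honor ICMP redirects.
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Enable packet forwarding.
net.ipv4.ip_forward = 1

# Raise the ARP (neighbor) table thresholds well above the tiny defaults.
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384
```

Load it with "sysctl -p" and verify with "sysctl -a | grep redirects".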

I have some examples in:
or (more recent, but includes firewall

Also, a note of caution: I would stick with a longterm release of the Linux
kernel.  I've had good experience with 2.6.32 and 3.10.  I'm eager to use
some of the post-3.10 features, though, so I'm looking forward to the next
longterm branch being locked in.

If running a proxy server of any kind, you'll want to adjust
TCP_TIMEWAIT_LEN in the header file and re-compile the kernel, else you'll
run into ephemeral port exhaustion before you touch the limits of the CPU.
I recommend 15 seconds (the default in Linux is 60).
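TCP_TIMEWAIT_LEN is a compile-time constant in include/net/tcp.h, which is
why the change needs a rebuild rather than a sysctl.  A sketch of the edit,
shown here against a scratch copy of the header so it can be run safely; in
practice you'd run the same sed against the real file in your kernel source
tree before rebuilding:

```shell
# Stand-in copy of the header for demonstration purposes only.
mkdir -p /tmp/ksrc/include/net
echo '#define TCP_TIMEWAIT_LEN (60*HZ)  /* how long to wait to destroy TIME-WAIT state */' \
  > /tmp/ksrc/include/net/tcp.h

# Shorten TIME-WAIT from the 60-second default to the recommended 15.
sed -i 's/(60\*HZ)/(15*HZ)/' /tmp/ksrc/include/net/tcp.h
grep TCP_TIMEWAIT_LEN /tmp/ksrc/include/net/tcp.h
```

After editing the real header, rebuild and reinstall the kernel as usual;
the shorter TIME-WAIT recycles ephemeral ports four times as fast.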

Routing-engine-wise: I currently have a large XORP 1.6 deployment because I
need multicast routing (PIM-SM), but XORP is very touchy and takes quite a
bit of operational experience to avoid problems.  Quagga has much more
active development and more eyeballs.  BIRD is also very interesting; I like
its model a lot (more of a traditional daemon than an attempt to be a Cisco
or Juniper clone), but it doesn't seem to be as far along as Quagga.
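For a taste of the BIRD model, a minimal config looks something like this
(BIRD 1.x syntax; the router ID, interface name, and OSPF details are
invented for illustration):

```
# bird.conf -- hypothetical minimal example
router id 192.0.2.1;

protocol kernel {
        export all;      # push BIRD's routes into the kernel table
}

protocol device {
}

protocol ospf {
        area 0.0.0.0 {
                interface "eth0" {
                        cost 10;
                };
        };
}
```

BIRD reads the file at startup and you talk to the running daemon through
the separate birdc control utility, which is exactly the traditional
config-file-plus-daemon feel I mean, as opposed to an interactive
router-style CLI.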

One of the biggest advantages is that the low cost of hardware allows you to
maintain spare systems, reducing the time to service restoration in the
event of failure.  Dependability-wise, I feel that whitebox Linux systems
are pretty much at Cisco levels these days, especially if running in-memory.

On Thu, Dec 26, 2013 at 1:07 PM, jim deleskie <deleskie at> wrote:

> I've recently pushed a "large" BSD box to a load of over 300, for more than
> an hour, while under test; some things slowed a little, but she kept on
> working!
> -jim
> On Thu, Dec 26, 2013 at 1:59 PM, Shawn Wilson < at> wrote:
> > Totally agree that a routing box should be standalone for tons of
> reasons.
> > Even separating network routing and call routing.
> >
> > It used to be that BSD's network stack was much better than Linux's under
> > load. I'm not sure if this is still the case - I've never been put in the
> > situation where the Linux kernel was at its limits. FWIW
> >
> > Jared Mauch <jared at> wrote:
> > >Have to agree on the below. I've seen too many devices be so integrated
> > >they do no task well, and can't be rebooted to troubleshoot due to
> > >everyone using them.
> > >
> > >Jared Mauch
> > >
> > >> On Dec 26, 2013, at 10:55 AM, Andrew D Kirch <trelane at>
> > >wrote:
> > >>
> > >> Don't put all this in one box.
> >
> >
> >

Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
