Routers vs. PC's for routing - was list problems?

Christopher E. Brown cbrown at woods.net
Fri May 24 10:23:21 UTC 2002




Thought I might lend a comment here.  I have had a lot of experience
with PC-based routers, starting around '96 and getting seriously into
it around '98 or so.

To give you an idea: no moving parts except cooling fans.  The main
drive is an IDE-style SanDisk flash drive, and the system goes through
a multistage boot:

System starts, loads initial startup code into a boot ramdisk.
System mounts a partition on the flash read-only.
System creates the soon-to-be / ramdisk, uncompresses the final fs image to it.
System copies stored configs from flash to /etc on the second ramdisk.
System unmounts the flash and remounts the rootfs on the second ramdisk.
System frees the first ramdisk.
System finishes booting.
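
For the curious, the init side of that sequence looks roughly like the
sketch below.  Device names, paths and image names are made up for
illustration; the real scripts did a lot more sanity checking.

    #!/bin/sh
    # Rough sketch of the ramdisk init script -- illustrative names only.

    # Mount the flash read-only; it is never written during a normal boot.
    mount -o ro /dev/hda1 /flash

    # Build the soon-to-be root filesystem in a second ramdisk.
    mke2fs -q /dev/ram1
    mount /dev/ram1 /newroot
    gzip -dc /flash/images/rootfs.img.gz | (cd /newroot && tar xf -)

    # Drop the stored config archive on top of the fresh image.
    tar xf /flash/configs/primary.tar -C /newroot/etc

    # Done with the flash; switch / to the second ramdisk and carry on.
    umount /flash
    mkdir -p /newroot/initrd
    cd /newroot
    pivot_root . initrd
    exec chroot . /sbin/init

(On a 2.4 kernel pivot_root does the root swap; older setups used the
initrd change_root mechanism instead, but the effect is the same.)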

This was of course a totally custom Linux distribution, with a set of
config tools for manipulating the boot config.  (The flash stores two
operational config archives, two operational fs images, and one
recovery config and fs image.)  The system would automagically boot
the primary config, on failure boot the secondary, and on failure of
that boot the recovery image.  The boot image and config set were also
selectable at boot via serial console.  This allowed us to make config
updates to the primary config while saving the known-working configs
to the secondary, and to handle fs image updates properly (we could
always drop back to the last known working copy).  Worst case, the
recovery image could reload everything from backup over the network
in a matter of seconds.
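
The selection logic is nothing magic.  A sketch of the idea, with
made-up file names: mark a config/image set as attempted before
booting it, and clear the mark once the box is up, so a crash mid-boot
leaves the mark in place and the next boot falls through to the next
set.

    # Illustrative only -- the real tools were more careful.
    mount -o remount,rw /flash
    for SET in primary secondary recovery; do
        if [ ! -f /flash/attempted.$SET ]; then
            touch /flash/attempted.$SET   # cleared again after a good boot
            break
        fi
    done
    mount -o remount,ro /flash
    echo "Booting the $SET config/image set"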


The base platform was a 450MHz K6-III, giving us 64K of L1 and 256K of
L2 cache running at 450MHz, plus 1M of L3 at 100MHz.  Given 256M of
SDRAM for main memory (4-way interleaved), 64MB of it used for the
rootfs, and a distro specifically designed to run in a RAM-only
environment, everything worked well (especially without IDE bus
interrupts screwing with things).

The only time it touched flash was during boot, and when updating or
backing up config or fs images.

We used (and sold) many of these boxes as a 7200 replacement.  A
7206VXR is at best a 300MHz MIPS box with a 33MHz PCI bus.  Both the
7206 and the Linux box top out at just under 400Mbit over the main
bus, but the Linux box had *a lot* of CPU left over to run filters,
logging, multi-view BGP and CBQ.

It was nice to have a box capable of BGP, OSPF, RSVP, filtering, CBQ,
IP rewrites and NAT at 300Mbit+, with SSH and serial console access,
costing under $10,000 USD with 2 x DS3 and 4 x 100Mbit-FDX Ethernet in
mid 1999.  A 7200 cost three times that (with interfaces and memory)
and was pretty weak as far as SSH, CBQ and NAT support went, as well
as having issues with NWAY autonegotiation and Fast EtherChannel
trunking.
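
In current 2.4/iptables terms, the NAT, rewrite and filtering side of
that boils down to rules along these lines (addresses and interface
names are placeholders, not the real config):

    # Static rewrite of a public address to an inside server, plus
    # source NAT for everything else leaving the outside interface.
    iptables -t nat -A PREROUTING  -i eth0 -d 192.0.2.10 \
             -j DNAT --to-destination 10.0.0.10
    iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/24 \
             -j SNAT --to-source 192.0.2.10

    # Basic stateful filter on the outside interface.
    iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED \
             -j ACCEPT
    iptables -A FORWARD -i eth0 -p tcp -d 10.0.0.10 --dport 80 -j ACCEPT
    iptables -A FORWARD -i eth0 -j DROP

(The mid-1999 boxes were 2.2/ipchains, but the idea is the same.)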

If one is being used at the network core, where filtering is not
done, there is some fastpath magic that can easily take the box up to
about 800Mbit aggregate.  Using multiport Ethernet cards with four
interfaces each on their own PCI sub-bus, it gets fun.  Given the
right card and driver, and assuming you group your traffic, only the
IP headers cross the main bus; the payloads go directly card to card,
and traffic within the same interface group never touches the main
PCI bus.

This was in late 1998.  We also did some work with single- and
dual-CPU 21264 systems, as well as Ultra AXMP+ boards, for their
64-bit 66MHz PCI buses.  We were very happy with the performance
(1.5 - 2.0 Gbit/sec aggregate while running full filters and CBQ on a
dual 21264 with 768MB of memory), but the cost at the time was a bit
high.  These days a dual Athlon motherboard with four 64-bit 66MHz
PCI slots is under $350 USD...


So, the easy rule?  A 500MHz *quality* PC booting from flash to RAM
can replace a 7206VXR.  Up to quad DS3 / quad 100Mbit Ethernet is
fine.  Your overall bandwidth limit is about the same, but at that
bandwidth you can do a hell of a lot more work (think stateful
filters, CBQ, IP rewrites or IPsec): the limit is the PCI bus, so you
have CPU and memory bandwidth to burn.
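
As a concrete example of the CBQ side, capping one downstream block at
DS3-ish rates on a 100Mbit interface looks roughly like this under tc
(interface, rate and prefix are placeholders):

    # Root CBQ qdisc on the inside interface.
    tc qdisc add dev eth1 root handle 1: cbq bandwidth 100Mbit \
        avpkt 1000 cell 8

    # One bounded class at 45Mbit.
    tc class add dev eth1 parent 1: classid 1:1 cbq bandwidth 100Mbit \
        rate 45Mbit weight 4.5Mbit allot 1514 prio 5 avpkt 1000 bounded

    # Steer traffic for one customer block into the class.
    tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
        match ip dst 10.0.0.0/24 flowid 1:1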


A lot of this was R&D for product sales and ISP operations at a
previous employer, and there are still boxes sitting around handling
(for example) 2 x DS3 + 4 x 100Mbit with three full views: each DS3 to
a separate provider, a 2 x 100Mbit-FDX EtherChannel link to a 7200
peer/backup, and 2 x 2 x 100Mbit-FDX EtherChannel links to a Catalyst
2429XL for a server cluster and dialin hardware.  Its 7200 peer dies
now and again from CPU overload during route flaps and the like; we
have never had any trouble with the Linux router.  It has been in
place since late '99 or so.
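
For reference, the EtherChannel side on the Linux end is just the
bonding driver.  A sketch with illustrative addresses (these boxes
predate 802.3ad support, so this is plain round-robin plus link
monitoring):

    # Load the bonding driver in round-robin mode with MII monitoring.
    modprobe bonding mode=0 miimon=100

    # Bring up the bond and enslave the two FastEthernet ports.
    ifconfig bond0 192.0.2.1 netmask 255.255.255.252 up
    ifenslave bond0 eth2 eth3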

At my current place I end up working with two-port bandwidth
controllers and IPsec VPN boxes.  We have been known to produce a
pretty slick 100Mbit full-duplex bandwidth control box, as well as
some neat VPN systems.


These days, if I want to do more than an OC3 or two, we grab a
Juniper; but if you want to do, say, IPsec, a dual Athlon MP 2000+
with 1G of PC2100 ECC DDR and a SysKonnect 64-bit/66MHz GigE card is
about $2,000 USD.  It can do a lot of work...


Creating the initial distro, writing the CLI that ties all the daemon
configs together, and knowing which interrupt timers and packet timers
to tweak takes skill.  Just using one is easy.
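
The sort of tweaking that last line refers to is mostly queue depths
and interrupt mitigation.  A few representative knobs (values are
illustrative, not a recipe):

    # Deeper input backlog so bursts don't drop before routing.
    sysctl -w net.core.netdev_max_backlog=4096

    # Raise the socket buffer ceilings for the routing daemons.
    sysctl -w net.core.rmem_max=1048576
    sysctl -w net.core.wmem_max=1048576

    # Interrupt coalescing is per-driver; where ethtool supports it:
    ethtool -C eth0 rx-usecs 100 rx-frames 32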


 --
I route, therefore you are.




