Access to the Internic Blocked
Daniel W. McRobb
dwm at ans.net
Mon Aug 26 13:24:06 UTC 1996
> Daniel W. McRobb <dwm at ans.net> wrote:
> > Doing that at 10 kpps is not going to be a solution any time soon.
> >You're kidding, right? 10kpps has been doable (and done) for years.
> >Did you forget a zero or two?
> Hm. The existing boxes which can do 100kpps can't do accounting at that
> speed. Not in real life.
The NSSes did it at 20kpps per interface. We have big customers w/ T3
connectivity that run accounting on their 7XXX boxes. Sure, it beats up
the Cisco box. But it's doable. Besides, who mentioned IP accounting?
That's not the optimal way to do things (and certainly not how I'd do it).
> (Where have you seen a 1Mpps box which actually _works_?)
So you did leave out some zeros?
> >The vBNS folks are about to release an OC-3 header sniffer that runs on
> >a Pentium box. Rumor has it that it'll handle OC-12 as well. There's a
> >presentation of it on the USENIX agenda.
> Sniffing and logging are two very different things.
You don't have to log every packet. You log flows, or you just increment
net matrix counters and once in a while dump the whole table. It's not
rocket science, and it's not beyond reach. It's been done on the NSSes
and can be done on the Ciscos. Maybe I'll talk about it at NANOG.
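The counter approach above can be sketched in a few lines. This is a
hypothetical illustration in Python (the names `matrix`, `count_packet`,
and `dump_and_reset` are mine, not from any real collector; the real
implementations ran on routers and workstations of the day):

```python
from collections import defaultdict

# src/dst net matrix: one cell counter per (src_net, dst_net) pair.
# A real collector would key on aggregated prefixes after a routing-table
# lookup; that lookup is elided here for brevity.
matrix = defaultdict(lambda: [0, 0])  # (src, dst) -> [packets, bytes]

def count_packet(src_net, dst_net, length):
    """Increment the matrix cell for one observed packet."""
    cell = matrix[(src_net, dst_net)]
    cell[0] += 1
    cell[1] += length

def dump_and_reset():
    """Once in a while, dump the whole table and start a fresh interval."""
    snapshot = dict(matrix)
    matrix.clear()
    return snapshot

# Two packets on the same src/dst cell, then a dump:
count_packet("192.0.2.0/24", "198.51.100.0/24", 576)
count_packet("192.0.2.0/24", "198.51.100.0/24", 1500)
table = dump_and_reset()
```

The point is that per-cell increments are cheap; the only bulk work is the
occasional table dump.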
> > I would also wish you luck with logging SA/DA pairs at places like
> > .ICP.NET. where the source/destination matrix is about 1-2 million
> > entries long.
> >1-2 million is not much. Even in the NSFNET days, I worked w/
> >5-million-cell net matrices. All it takes is memory and some CPU.
> 1-2 million _simultaneously_, not over a period of time. The 1-hr matrix
> would be two orders of magnitude bigger.
A typical 1-hour matrix is considerably smaller. Even a core router
that carries 40,000 routes will not see anywhere near 40,000 * 40,000
cells in a one-hour period, or even 2 million cells. Not in my
experience. Even the NAP and MAE routers where I've collected this data
have seen net matrices only on the order of 10^3 to 10^5 cells for a
one-hour period.
The number of cell entries is not equal to the number of routes squared.
It doesn't happen. If you collect this data, you'll find the net matrix
in a reasonable collection period (say 15 minutes) is typically 5-7
orders of magnitude smaller than the routing table squared, even for
routers that are well-connected and very busy.
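To put rough numbers on that claim, here is the arithmetic against the
40,000-route example above (illustrative only; the 10^3 to 10^5 observed
cell counts are the figures quoted in the text):

```python
import math

# Worst case: every route talking to every other route.
routes = 40_000
worst_case = routes * routes  # 1.6 billion theoretical cells

# Matrices actually observed run about 10^3 to 10^5 cells per interval,
# which works out to roughly 4 to 6 orders of magnitude below worst case.
for observed in (10**3, 10**5):
    orders = math.log10(worst_case / observed)
    print(f"{observed:>7} observed cells -> {orders:.1f} orders below worst case")
```

Shorter collection intervals (like the 15-minute ones mentioned) see even
fewer cells, which is where the 5-7 orders-of-magnitude gap comes from.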
> Anyway, it does not make any difference, as the box capable of
> logging at some speed N is going to cost about the same as a
> router of the same speed N (or more). I'm not sure logging is worth it.
> >We're not sniffing a shared FDDI ring w/ these UNIX boxes. They get
> >data from the routers.
> What kind of routers? NSSes? You can't get that for ciscos.
  ip flow-export a.b.c.d zzzz
  ip route-cache flow
Where a.b.c.d is a reasonable workstation w/ a decent amount of memory.
Nowhere near as costly as (say) a Cisco 7XXX that's exporting the
flows. The router does the hard part.
The flow-export data still needs some work, but Cisco has been working
on it (I think we're up to version 5 of the flow-export PDU in the stuff
I'm using), and I've been writing code against it. And maybe one day
we'll actually be able to run this on core routers (well, not ANS since
we have a different vendor's box in the core, but others).
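For the curious, the version 5 export datagram mentioned above starts with
a fixed 24-byte header, which a collector on the workstation can pull apart
with a few lines of Python. This is a hedged sketch, not the code I'm
running; the `parse_header` name and the fabricated test datagram are mine:

```python
import struct

# NetFlow v5 export header, network byte order: version, record count,
# sysuptime (ms), unix secs, unix nsecs, flow sequence, engine type,
# engine id, sampling interval.
V5_HEADER = struct.Struct("!HHIIIIBBH")
V5_RECORD_SIZE = 48  # each flow record following the header

def parse_header(datagram):
    """Unpack the v5 export header from a UDP datagram."""
    if len(datagram) < V5_HEADER.size:
        raise ValueError("short datagram")
    (version, count, sysuptime, secs, nsecs,
     seq, engine_type, engine_id, sampling) = V5_HEADER.unpack_from(datagram)
    if version != 5:
        raise ValueError("not a v5 export")
    return {"count": count, "sequence": seq, "unix_secs": secs}

# A fabricated 2-record datagram, purely for illustration:
pkt = V5_HEADER.pack(5, 2, 0, 841000000, 0, 100, 0, 0, 0) \
      + b"\0" * (2 * V5_RECORD_SIZE)
hdr = parse_header(pkt)
```

The flow sequence number is worth tracking: gaps in it tell you when the
workstation dropped export datagrams.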
The net matrix (and archiving all of it) gives you enough data to do
trend analysis and other things (including searching for bogons from
some time past). For the case of the Cisco stuff, when you need more
detail from the flow-export, you turn a knob in your code on the
workstation. You don't have to do anything on the router. To date I
haven't really had a need for host-level granularity, but it's doable
(the code I've been working on will have a knob for enabling host-level
granularity to find the needles in the haystack when you're trying to
spot cracker traffic from a particular source and/or to a particular
destination). Even if your router is expiring 20,000 flows per second,
it only comes out to about 667 packets per second to the workstation,
which is pretty low even for older workstations with fairly weak CPUs by
modern standards. An Alpha can handle it w/o even breaking a sweat.
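The 667 figure follows from the packing of the export datagrams, assuming
up to 30 flow records per v5 datagram (the per-packet maximum):

```python
flows_per_second = 20_000
records_per_packet = 30  # a v5 export datagram carries up to 30 flow records
pps = flows_per_second / records_per_packet
# about 667 export packets per second arriving at the workstation
```
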