fuzzy subnet aggregation

Joe Maimon jmaimon at jmaimon.com
Mon Oct 28 16:23:48 UTC 2019


At this time I am just trying to get an idea of whether the whole 
exercise is worth it: whether the processing time is feasible for 5k, 
50k, 100k, or 200k prefixes, and whether the results reduce the count 
measurably at acceptable collateral levels.

Because RTBH scaling to 100k is one thing, and from there it could go 
higher. And it works best if it can be spread to as many edge devices 
as possible, including ones with limited TCAM, such as customer-edge 
L3 switches, and to customers/friends who have redundant connections 
with other providers, including broadband, possibly with tiny routers, 
even DD-WRT.

50k routes here and there, and soon you are talking about real table bloat.

I try to start every project as a script if at all possible. If that 
works even somewhat, real code is promising.

Joe

Mark Leonard wrote:
> You could modify a radix tree to include a consolidation function and 
> a resulting confidence.  Then walk the nodes of the tree and check 
> whether each subtree meets the requirements for consolidation; if so, 
> prune and record the confidence.  You would need to re-run the 
> consolidation from the original data every time an individual IP was 
> added or removed from the list, since the consolidation function is 
> lossy.
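>
> A rough sketch of that in Python (untested; the Node layout, the 
> per-node counts, and the min_conf threshold are all made up for 
> illustration; assumes deduplicated input):
>
>     import ipaddress
>
>     class Node:
>         def __init__(self):
>             self.children = [None, None]  # 0-bit / 1-bit branches
>             self.count = 0                # /32s stored under this node
>
>     def insert(root, ip):
>         # ip is a 32-bit integer; bump the count along the whole path
>         node = root
>         node.count += 1
>         for bit in range(31, -1, -1):
>             b = (ip >> bit) & 1
>             if node.children[b] is None:
>                 node.children[b] = Node()
>             node = node.children[b]
>             node.count += 1
>
>     def consolidate(node, prefix, plen, min_conf=0.75):
>         # confidence = fraction of the subnet actually present
>         size = 1 << (32 - plen)
>         conf = node.count / size
>         if conf >= min_conf:
>             # dense enough: emit one prefix and prune the subtree
>             yield ipaddress.ip_network((prefix << (32 - plen), plen)), conf
>             return
>         for b in (0, 1):
>             if node.children[b] is not None:
>                 yield from consolidate(node.children[b],
>                                        (prefix << 1) | b, plen + 1, min_conf)
>
> A lone /32 always consolidates to itself with confidence 1.0, so the 
> walk emits a complete covering of the input.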
>
> Alternatively, you could do the consolidation on the fly, losslessly, 
> if you had a custom tree-walk algorithm.  That's probably the way I 
> would do it.  I'm not a programmer, so I assume there are better ways 
> out there.
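>
> "Lossless" here just means the tree keeps the original /32 counts, so 
> an address can be added or removed and consolidate() re-run without 
> rebuilding from the source data.  Continuing the made-up sketch above:
>
>     def remove(root, ip):
>         # assumes ip was previously inserted; decrement the counts
>         # along its path, then re-walk consolidate() for fresh output
>         node = root
>         node.count -= 1
>         for bit in range(31, -1, -1):
>             b = (ip >> bit) & 1
>             node = node.children[b]
>             node.count -= 1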
>
> Your processing time for 5k IPs should be measured in seconds (i.e., 
> less than one) rather than minutes on any modern core.  Based on your 
> pseudocode (sort -n | uniq) I get the impression that you're using 
> Bash, which isn't ideal for performing this sort of operation at high 
> speed.
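>
> For comparison, once the prefixes are expanded to one address per 
> line, the convert/sort/uniq step is a couple of lines in something 
> like Python (file name made up):
>
>     import ipaddress
>
>     with open("ips.txt") as f:
>         ips = sorted({int(ipaddress.ip_address(line.strip()))
>                       for line in f})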
>
> On the flip side, I think an extra 100k routes isn't that much unless 
> you're suffering from hardware routing table limitations.  In my world 
> the cost of a false positive match would far outweigh the cost of 
> upgrading hardware.  YMMV.
>
> Do you have a git repo?
>
> On Sun, Oct 27, 2019 at 9:58 PM Joe Maimon <jmaimon at jmaimon.com> wrote:
>
>     So I went back to the drawing board, and I think I have something 
>     that seems to work much better.
>
>     - convert input prefixes to single IPs expressed as integers
>     - sort -n | uniq
>     - write the result into a temporary list file
>
>     begin
>
>     read sequentially until maxhosts (or minhosts) is reached, or 
>     until the next subnet begins
>
>     if enough single addresses matched, output the subnet (and the 
>     missing hosts, without early loop termination)
>
>     delete all subnet addresses read
>
>     loop
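>
>     Roughly, in Python (an untested sketch of the loop above; the /24 
>     target size, the min_hosts threshold, and the file names are 
>     placeholders):
>
>         import ipaddress
>         from itertools import groupby
>
>         def aggregate(sorted_ips, plen=24, min_hosts=200):
>             size = 1 << (32 - plen)
>             mask = ~(size - 1) & 0xFFFFFFFF
>             # sorted input means addresses in the same /plen subnet
>             # arrive consecutively, so one pass suffices
>             for base, grp in groupby(sorted_ips, key=lambda ip: ip & mask):
>                 hosts = list(grp)
>                 if len(hosts) >= min_hosts:
>                     # dense enough: emit the subnet plus missing hosts
>                     missing = sorted(set(range(base, base + size))
>                                      - set(hosts))
>                     yield ipaddress.ip_network((base, plen)), missing
>                 else:
>                     # too sparse: keep the individual /32s
>                     for ip in hosts:
>                         yield ipaddress.ip_network((ip, 32)), []
>
>         # ips.sorted is the sort -n | uniq output, one integer per line
>         with open("ips.sorted") as f:
>             ips = [int(line) for line in f]
>         for net, missing in aggregate(ips):
>             print(net, len(missing), "missing")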
>
>     Total process time on a VM on old hardware: less than 2 minutes 
>     for a 5500-line input. Now to verify the results, positive and 
>     negative....
>
>     Results are still raw, but anyone who wishes is welcome to it.
>
>     Joe
>
>     Joe Maimon wrote:
>     > Does anyone have, or seen, any such tool? I have a script that 
>     > seems to work, but it's terribly slow.
>     >
>     > Currently I can produce aggregated subnets that can be missing 
>     > up to a specified number of individual addresses, which can be 
>     > fed back in for multiple passes.
>     >
>     > Doing RTBH on individual /32s does not scale well if you are 
>     > eyeing collaboration with external lists. I have found likely 
>     > sources that could easily produce another 100k prefixes.
>     >
>     > Joe
>     >
>     >
>



