NSP ... New Information

Ehud Gavron GAVRON at ACES.COM
Tue Jun 10 15:08:56 UTC 1997


	Well, um, er... it wasn't going to be this long but I got
	started, and then put some math in it (I'm sorry, I know 
	math is hardly operational) and then some history.

	Hit "D" now if your brain is mush.

	E


Phil Howard wrote...

>Suppose TCP/IP had been designed from the beginning with 64-bits of flat
>address space divided 32/32.  We would not have the space crunch at all

Actually, I beg to differ.  IP was designed to handle a variety of network
sizes, and to automatically infer the size of the network from the number
of contiguous set bits at the start of the first octet.  You can skip most
of the next two paragraphs, but read the line in the ***'s...
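
To make that rule concrete, here's a quick toy sketch (Python, purely
illustrative, not from any actual stack) of how the implied prefix length
falls out of the leading bits of the first octet:

    # Classful IPv4: the implied network size comes from the leading bits
    # of the first octet.
    def classful_prefix_len(first_octet):
        if first_octet & 0x80 == 0x00:   # 0xxxxxxx -> class A
            return 8
        if first_octet & 0xC0 == 0x80:   # 10xxxxxx -> class B
            return 16
        if first_octet & 0xE0 == 0xC0:   # 110xxxxx -> class C
            return 24
        raise ValueError("class D/E: multicast/experimental, no host part")

    print(classful_prefix_len(10))    # 8  (class A)
    print(classful_prefix_len(172))   # 16 (class B)
    print(classful_prefix_len(192))   # 24 (class C)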

DECnet, by contrast, was originally designed for 4-bit addressing.  "After
all, nobody could afford more than 16 machines."  Eventually DEC realized
the error of its ways and upgraded to 8-bit addressing.  "After all,
nobody could afford more than 256 machines."  But WANs were becoming
popular, and both SPAN (discussed elsewhere, a precursor to NSI) and
HEPNET (discussed elsewhere) wanted to build them.

This resulted in DECnet Phase IV: 16 bits of address space broken 6/10
into "areas" of "nodes."  That worked great for small private networks,
badly for larger ones.  Large internetworks were impossible, as only 64
"areas" could exist.  *** This shortsighted design spec forced workarounds
("hidden areas", "poor man's routing") which REQUIRED that the USER
know and specify the ROUTE from END TO END. ***
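
For the curious, that 6/10 split is just bit-slicing a 16-bit address;
a toy sketch (Python, mine, not DEC's) of the arithmetic:

    # DECnet Phase IV: 16-bit address; top 6 bits = area, low 10 bits = node.
    def decnet_split(addr16):
        area = (addr16 >> 10) & 0x3F    # 6 bits -> at most 2^6 = 64 areas
        node = addr16 & 0x3FF           # 10 bits -> up to 1023 nodes per area
        return area, node

    # Address 5.14 packs as 5*1024 + 14
    print(decnet_split(5 * 1024 + 14))  # (5, 14)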

IP provided a solution where (surprise, surprise) routing was handled at
L3 and was transparent to the user.  It isn't until now, when most users
are "used to unreliable service," that traceroute has become a popular
end-user tool.  If users expected their cars to be as reliable as the
cheap-ass $20/mo Internet providers, there would be no warranty business
for GM, but let me jump back off this soapbox; I digress.

Anyway, IP has the strong point that network sizing is (now) a dynamic
thing, so that a network (a set of contiguous address space) is sized to
the needs of the organization.  The total address space is limited to
N bits (32 now, 128 under IPv6), but the masking is not fixed.
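
To illustrate "the masking is not fixed" (a hedged example: this just uses
Python's stock ipaddress module, nothing operational):

    import ipaddress

    # The same 32-bit space carved into very differently sized networks:
    for prefix in ("10.0.0.0/13", "192.0.2.0/27"):
        net = ipaddress.ip_network(prefix)
        print(net, net.num_addresses, "addresses, mask", net.netmask)

    # 10.0.0.0/13  524288 addresses, mask 255.248.0.0
    # 192.0.2.0/27 32 addresses, mask 255.255.255.224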

I don't take issue with "It's too bad IP wasn't designed 64..." but
rather with the idea that "32/32" is the end-all.  I think 2^32 network
spaces SOUNDS like enough now, but it pre-allocates obscene parts of the
address space for no reason.  Well, ok, to conserve routing entries.
However, when you consider that we now have ~45K routes, o(45K) is five
orders of magnitude off from o(4E9).  It behooves us not to "make it
easier" on the router by limiting it to 4E9 entries, since the problem is
already "outside a known technological solution today."  (Well, with L3
anyway.)
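
Checking that "five orders of magnitude" with the same round numbers
(a quick Python scratchpad, nothing more):

    import math

    routes_today = 45e3   # ~45K routes in today's global table
    flat_space   = 4e9    # ~2^32 possible entries in a fixed 32-bit net field

    print(math.log10(flat_space / routes_today))   # ~4.95, about five orders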

Recall that in 1982 memory was not cheap.  IBM had introduced the PC
the year before with 256K, upgradable to 640K.  Memory was about $500/k,
and addressing was 16 bits, minis to 32, Crays to 64.  In the '80s the
NSFNET NSS could originally handle only one ASN announcing a route, and
was eventually upgraded to 4...  These were memory and processing
limitations.

A table of 2^32 eight-octet entries (32GB) is something that BACK THEN
was inconceivable for memory storage.  Today we might consider sizing
the table at 2^32 entries, each holding:

        destination        8 octets
        gateway            4 x 8 octets
        netlength (mask)   1 octet
        flag               ?

2^32 x 40 =~ 1.7E11 Bytes of table space.  If you take the Cisco approach
of storing this multiple ways so you can fast-cache access to the routes
instead of doing a sequential search, it appears to take 5x the storage,
or roughly 8.6E11 Bytes.  Add the router code, and we're talking a box
with a terabyte of RAM.  That's a tad more than today's boxes.

Now contrast this with 64/? variable networks.
2^64 entries of 40 octets =~ 7.4E20 Bytes.
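
Same scratchpad for the table-size numbers above (the 40-octet entry and
the 5x fast-cache factor are the assumptions already stated):

    ENTRY_OCTETS = 40     # destination + gateway + netlength (+ flag), ~40
    CACHE_FACTOR = 5      # assumed overhead for fast-cache copies

    full_32   = 2**32 * ENTRY_OCTETS     # ~1.7E11 bytes (~160 GB)
    cached_32 = full_32 * CACHE_FACTOR   # ~8.6E11 bytes, roughly a terabyte
    full_64   = 2**64 * ENTRY_OCTETS     # ~7.4E20 bytes

    print("%.1e %.1e %.1e" % (full_32, cached_32, full_64))
    # 1.7e+11 8.6e+11 7.4e+20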

The technological difference between today's high-availability processors
and either of those two goals is so vast that "trying to save on space..."
wins nothing until we know what our real goalpost is.

If we hit IPv6 Implementation Year and Yer Average Router can only handle
2E10 Bytes, someone's gonna say "29/35?".  That would be the time to
haggle bits ;)

Well, back to sleep. 

E



>AND there would be no space "handle" for routing policies to lean on to
>screw the little guys.  Tell me what the big boys with small routers would
>do in this case today?  Even the biggest router has no chance with a billion
>routes.  Or would we have been forced to come up with a new and better
>replacement for BGP(4) by now that does dynamic intelligent aggregation
>or something?



