legacy /8

Mark Smith nanog at 85d5b20a518b8f6864949bd940457dc124746ddc.nosense.org
Fri Apr 2 21:42:20 CDT 2010

On Fri, 02 Apr 2010 15:38:26 -0700
Andrew Gray <3356 at blargh.com> wrote:

> Jeroen van Aart writes: 
> > Cutler James R wrote:
> >> I also just got a fresh box of popcorn.  I will sit by and wait
> > 
> > I honestly am not trying to be a troll. It's just everytime I glance over 
> > the IANA IPv4 Address Space Registry I feel rather annoyed about all those 
> > /8s that were assigned back in the day without apparently realising we 
> > might run out. 
> > 
> > It was explained to me that many companies with /8s use it for their 
> > internal network and migrating to 10/8 instead is a major pain.
> You know, I've felt the same irritation before, but one thing I am wondering 
> and perhaps some folks around here have been around long enough to know - 
> what was the original thinking behind doing those /8s? 
> I understand that they were A classes and assigned to large companies, etc. 
> but was it just not believed there would be more than 126(-ish) of these 
> entities at the time?   Or was it thought we would move on to larger address 
> space before we did?  Or was it that things were just more free-flowing back 
> in the day?  Why were A classes even created?  RFC 791 at least doesn't seem 
> to provide much insight as to the 'whys'. 

That's because RFC791 is a long way from the original design
assumptions of the Internet Protocols.

"A Protocol for Packet Network Intercommunication", Vinton G. Cerf and
Robert E. Kahn, 1974, says -

"The choice for network identification (8 bits) allows up to 256
distinct networks. This size seems sufficient for the foreseeable
future."
That view seems to have persisted up until at least RFC760, January
1980, which still specified the single 8 bit network, 24 bit node
address format. RFC791, September 1981, introduces classes. So
somewhere within that period it was recognised that 256 networks wasn't
going to be enough. I'm not sure why they stuck with the 32 bit address
size at that point - maybe handling addresses larger than what was
probably the most common host word size of the time would have meant a
significant performance loss.
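The two address formats described above can be sketched in a few lines.
This is my illustration, not anything from the original discussion; the
addresses used are hypothetical, and the parsing shown is just the
leading-bit rule from RFC 791.

```python
# Sketch: the same 32-bit address interpreted under the pre-RFC791
# format (fixed 8-bit network / 24-bit host) versus RFC 791 classes,
# which are decided by the leading bits of the first octet.

def old_format(addr: int) -> tuple:
    """RFC 760-era split: first octet is the network, rest is the host."""
    return addr >> 24, addr & 0xFFFFFF

def classful(addr: int) -> tuple:
    """RFC 791 classful split, chosen by the address's leading bits."""
    if addr >> 31 == 0:           # 0xxx... -> class A: 8-bit network
        return "A", addr >> 24, addr & 0xFFFFFF
    if addr >> 30 == 0b10:        # 10xx... -> class B: 16-bit network
        return "B", addr >> 16, addr & 0xFFFF
    if addr >> 29 == 0b110:       # 110x... -> class C: 24-bit network
        return "C", addr >> 8, addr & 0xFF
    return "D/E", None, None      # multicast / reserved

addr = (18 << 24) | 0x0A0B0C      # 18.10.11.12 - a legacy /8 network
print(old_format(addr))           # (18, 658188)
print(classful(addr))             # ('A', 18, 658188)
print(classful((130 << 24) | (20 << 16) | 5))  # class B split
```

Note that for any address with a zero leading bit the two schemes agree,
which is presumably what made the class mechanism a backwards-compatible
hack.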

If you start looking into the history of IPv4 addressing - and arguably
into why it is so hard to understand and teach compared to other
protocols such as Novell's IPX, Appletalk etc. - you'll find that
everything added to extend the reach of IP (classes, subnets,
classless) while avoiding growing the address size past 32 bits is a
series of very neat hacks. IPv4 is a 1970s protocol that has had to
cope with dramatic and unforeseen success. It's not a state of the art
protocol any more, and shouldn't be used as an example of how things
should be done today (at a minimum, I think later protocols like
Novell's IPX and Appletalk are far better candidates). It is, however,
a testament to how successfully something can be hacked over time to
continue to work far, far beyond its original design parameters and
assumptions.
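The last of those hacks, classless addressing (CIDR), can be sketched
with the standard library. Again this is my illustration rather than
anything from the post; the prefixes chosen are hypothetical.

```python
# Sketch: CIDR drops the fixed class boundaries and pairs each address
# with an explicit prefix length, so a network can be carved at any bit
# boundary - here a legacy class A and a /15 cut out of it, a split
# that classful addressing could not express.
import ipaddress

net = ipaddress.ip_network("18.0.0.0/8")       # a legacy class A as CIDR
subnet = ipaddress.ip_network("18.10.0.0/15")  # an arbitrary-boundary slice
print(net.num_addresses)      # 16777216 - the 2**24 hosts of one /8
print(subnet.subnet_of(net))  # True
```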

(IMO, if you want to understand the design philosophies of IPv6 you're
better off studying IPX and Appletalk than relying on your IPv4
knowledge. I think IPv6 is a much closer relative to those protocols
than it is to IPv4. For example, router anycast addresses were
implemented and used in those protocols before IPv6.)
Possibly Vint Cerf might be willing to clear up some of these questions
about the origins of IPv4 addressing.

