DoD IP Space

Sabri Berisha sabri at cluecentral.net
Fri Jan 22 21:03:15 UTC 2021


----- On Jan 22, 2021, at 12:28 PM, Izaac izaac at setec.org wrote:

Hi,

> On Wed, Jan 20, 2021 at 02:47:32PM +0100, Cynthia Revström via NANOG wrote:
>> certain large corporations that have run out of RFC1918, etc. space
> 
> At what level of incompetence must an organization operate to squander
> roughly 70,000 /24 networks?

Or, at what level of scale.

Or, a combination of both.

Let me give you an example. This example is not hypothetical.

Acme Inc operates a popular social media site. This requires a lot of
compute power and storage. Acme owns multiple data centers around the
world, and all of them must be connected.

Acme divides its data centers into "Availability Zones" (AZs). Each AZ contains
a limited amount of equipment. A typical AZ is made up of multiple pods, and
each pod contains anywhere between 40 and 48 racks. Each rack contains up to
72 servers, and each server can host many VMs or containers.

In order to scale, each AZ and pod is designed according to blueprints. This
obviously means that tradeoffs must be made. For example, each rack is
assigned a /25, since a /26 (62 usable addresses) cannot fit all 72 servers.
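
For what it's worth, here is a quick sanity check of that /25 vs /26 tradeoff
using Python's ipaddress module (a sketch; the prefixes are just examples, not
real allocations):

import ipaddress

servers_per_rack = 72

for prefix in ("10.0.0.0/25", "10.0.0.0/26"):
    net = ipaddress.ip_network(prefix)
    usable = net.num_addresses - 2  # minus network and broadcast addresses
    verdict = "fits" if usable >= servers_per_rack else "does NOT fit"
    print(f"{prefix}: {usable} usable hosts -> {verdict} {servers_per_rack} servers")

# 10.0.0.0/25: 126 usable hosts -> fits 72 servers
# 10.0.0.0/26: 62 usable hosts -> does NOT fit 72 servers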

Just to accommodate a single IP per server, we already need a /19 per pod. Most
servers will have different NICs for different purposes; for example, it is
not uncommon to have a separate storage network and a management network.

Now we already need 3 /19s (32 /24s each) per pod, and we haven't even started
to assign IPs to VMs or containers yet.
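
If you want to check that arithmetic, here is a rough sketch (assumed numbers
matching the blueprint above, nothing from real tooling):

import math

racks_per_pod = 48
addrs_per_rack = 2 ** (32 - 25)                  # one /25 per rack = 128 addresses
server_addrs = racks_per_pod * addrs_per_rack    # 6144 addresses for one network per pod
prefix_len = 32 - math.ceil(math.log2(server_addrs))
print(f"one network per pod: {server_addrs} addresses -> needs a /{prefix_len}")
# one network per pod: 6144 addresses -> needs a /19 (32 /24s when allocated)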

Let's start assigning IPs to VMs and containers. At one of my previous
employers, there were different groups working on VMs (cloud) and containers
(k8s). Both groups had automated scripts to assign IPs, but these (obviously)
did not communicate. That meant each group had its own VLAN, with its own IRB
(or BVI, or VLAN interface, whatever you want to call it). On average, each
group started with a /22 per ToR (later on, we limited them to a /24). So now
we need an extra 48*2*4 = 384 /24s per pod.

So, with 384+32 = 416 /24s per pod, you are looking at a maximum of 157 pods
in all of 10/8.
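
Putting it together (again, just a sketch of the arithmetic above, with the
same assumed numbers):

tors_per_pod = 48
groups = 2                    # cloud (VMs) and k8s (containers)
slash24s_per_group = 4        # a /22 per ToR = 4 /24s
vm_container_24s = tors_per_pod * groups * slash24s_per_group   # 384
infra_24s = 32                # the /19 for the physical servers
per_pod_24s = vm_container_24s + infra_24s                      # 416

total_24s_in_10_8 = 2 ** (24 - 8)                     # 65536 /24s in 10.0.0.0/8
print("max pods:", total_24s_in_10_8 // per_pod_24s)  # 157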

Now, granted, there is a lot of waste in this, hence the later change from a
/22 to a /24, and the realization that the cloud and k8s groups really needed
to work together to avoid more waste.

I will tell you that this is not at all hypothetical: I have personally
created spreadsheets of every /16 in 10/8 and how it was allocated. It's
amazing how much space was wasted in the early days at said employer, and how
much I was able to reclaim simply by checking whether the allocations were
still valid. Hint: when companies split up, a lot of space gets freed up.
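
Conceptually, that audit is very simple. A minimal sketch (the allocation data
and the validity check below are made-up placeholders, not a real IPAM):

import ipaddress

# Hypothetical allocation records, one per /16 in 10/8.
allocations = {
    "10.0.0.0/16": "datacenter-ams",
    "10.1.0.0/16": "acquired-company-x",   # split off years ago
    # ...
}

def is_still_valid(owner):
    # Placeholder: in practice this means asking the team, or an IPAM/CMDB.
    return owner != "acquired-company-x"

reclaimable = []
for net in ipaddress.ip_network("10.0.0.0/8").subnets(new_prefix=16):
    owner = allocations.get(str(net))
    if owner and not is_still_valid(owner):
        reclaimable.append(str(net))

print("reclaimable /16s:", reclaimable)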

This is how we avoided using DoD IP space to complement 10/8.

But, you were asking how it's possible to run out of 10/8, and here is your
answer :)

TL;DR: a combination of scale and incompetence means you can run out of 10/8
really quickly.

Thanks,

Sabri

