NIST IPv6 document

Jeff Wheeler jsw at inconcepts.biz
Thu Jan 6 02:45:12 UTC 2011


On Wed, Jan 5, 2011 at 8:57 PM, Joe Greco <jgreco at ns.sol.net> wrote:
>> > This is a much smaller issue with IPv4 ARP, because routers generally
>> > have very generous hardware ARP tables in comparison to the typical
>> > size of an IPv4 subnet.
>>
>> no it isn't, if you've ever had your juniper router become unavailable
>> because the arp policer caused it to start ignoring updates, or seen
>> systems become unavailable due to an arp storm you'd know that you can
>> abuse arp on a rather small subnet.
>
> It may also be worth noting that "typical size of an IPv4 subnet" is
> a bit of a red herring; a v4 router that's responsible for /16 of
> directly attached /24's is still able to run into some serious issues.

It is uncommon for publicly-addressed LANs to be this large.  The
reason is simple: relatively few sites still have such an excess of
IPv4 addresses that they can use them in such a sparsely-populated
manner.  Those that do have had twenty years of operational
experience with generation after generation of hardware and
software, and every opportunity to fully understand the problem
(or to redesign the relevant portion of their network).

In addition, there is not (any longer) a "standard," nor a group
of mindless zealots, telling the world that a /16 on your LAN is
the only right way to do it.  This is, in fact, the case with IPv6
deployments, and it will drive what customers demand.

To understand the problem, you must first realize that myopic
standards bodies created it, and that either the standards must
change, operators must explain to their customers why they are not
following the standards, or equipment vendors must add knobs that
offer an acceptable mitigation.  Do the advantages of sparse
subnets outweigh the known security detriments, even if equipment
vendors do provide good compromise mechanisms?
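
To put rough numbers on that security detriment, here is a
back-of-envelope sketch in Python; the cache size and probe rate
are assumptions for illustration, not figures for any particular
router:

    # ND cache exhaustion on a sparse /64, illustrative numbers only.
    addresses = 2 ** 64          # targets in one standard /64
    nd_cache_entries = 100_000   # assumed hardware neighbor-cache size
    probe_pps = 10_000           # assumed scan rate hitting the LAN (pps)

    seconds_to_fill = nd_cache_entries / probe_pps
    fraction_scanned = nd_cache_entries / addresses
    print(f"addresses in a /64: {addresses:.3e}")
    print(f"seconds to fill the assumed cache: {seconds_to_fill:.0f}")
    print(f"fraction of the /64 touched by then: {fraction_scanned:.1e}")

The assumed cache fills in seconds while the scanner has touched a
vanishingly small fraction of the /64, and legitimate neighbors are
left contending with garbage "incomplete" entries.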

"Security by obscurity" is an oft-touted advantage of IPv6 sparse
subnets.  We all know that anyone with a PayPal account can buy a
list of a few hundred million email addresses for next to nothing.
How long until the same is true of lists of recently-active IPv6
hosts?  What portion of attack vectors really depend on scanning
for hosts that aren't easily found in the DNS, as opposed to
vectors that rely on a browser click, an email attachment, or
simply hammering away at "www.*.com" with common PHP script
vulnerabilities?

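For comparison, the brute-force half of that argument is easy to
quantify; a sketch, with the probe rate as an assumption:

    # How long a blind sweep of a single /64 would take.
    addresses = 2 ** 64
    probe_pps = 1_000_000        # assumed: one million probes per second
    years = addresses / probe_pps / (365 * 24 * 3600)
    print(f"{years:,.0f} years to sweep one /64 at {probe_pps:,} pps")
    # roughly 585,000 years; attackers harvest addresses from DNS, logs,
    # and application traffic instead of scanning for them.
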
How many people think that massively sparse subnets are going to
save them money?  Where will these cost-efficiencies come from?
Why can't you gain that advantage by provisioning, say, ten times
as large a subnet as you think you need, instead of
seventy-quadrillion times as large?  Is anyone really going to put
off their Windows updates and save money because they are
comfortable that their hosts can't be found by random scanning?
Is stateless auto-configuration that big a win versus DHCPv6?
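
The "seventy-quadrillion" figure, for what it's worth, is not
rhetorical flourish.  Assuming the point of comparison is a
256-host, IPv4 /24-sized LAN:

    # Ratio of a standard /64 to a 256-host (IPv4 /24-sized) LAN.
    print(f"{2 ** 64 / 256:.2e}")   # ~7.21e+16, about 72 quadrillion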

Yes, I should have participated in the process in the 1990s.  However,
just because the bed is already made doesn't mean I am willing to lay
my customers in it.  These problems can still be fixed before IPv6 is
ubiquitous and mission-critical.  The easiest fix is to reset the
/64 mentality to which standards zealots are clinging.

-- 
Jeff S Wheeler <jsw at inconcepts.biz>
Sr Network Operator  /  Innovative Network Concepts



