Policy Statement on Address Space Allocations
smd at cesium.clock.org
Sat Jan 27 21:34:08 UTC 1996
This is a very interesting question involving the
economics of the Internet, which are still very fuzzy.
I think it would make an interesting experiment.
Certainly there has been lots of talk about putting something
big like wuarchive.wustl.edu or ftp.uu.net into something obscure
to see what breaks. There are existence proofs of sorts that
popular sites can have an effect: ftp.uu.net's insistence on
IN-ADDR.ARPA mappings is the one that pops into my mind immediately.
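For context, that requirement meant a client's IP address had to resolve back to a hostname through the IN-ADDR.ARPA tree before the FTP server would talk to it. A minimal sketch of how an IPv4 address maps into that tree (the example address is illustrative, not necessarily ftp.uu.net's actual address):

```python
# Build the IN-ADDR.ARPA name used for a reverse (PTR) lookup of an
# IPv4 address: the four octets are reversed and the in-addr.arpa
# suffix is appended. A server enforcing reverse mappings expects a
# PTR record to exist at exactly this name.

def reverse_dns_name(ipv4: str) -> str:
    octets = ipv4.split(".")
    assert len(octets) == 4, "expected a dotted-quad IPv4 address"
    return ".".join(reversed(octets)) + ".in-addr.arpa"

# Illustrative example only:
print(reverse_dns_name("192.48.96.9"))  # -> 9.96.48.192.in-addr.arpa
```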
Some experiments, like exp39, involved moving root nameservers
into a subnet of 39/8.
However, nobody has yet said, "I am Playbeing, I have content
people want to get to, I can leak a /32 if I want to, and everyone
will have to carry it or explain to their users why they can't
get their sex-on-demand".
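The leverage in that threat comes from longest-prefix matching: a /32 is the most specific IPv4 route there is, so any router that accepts and carries it will prefer it over a covering aggregate, wherever the /32 points. A small sketch of that selection rule using Python's ipaddress module (both prefixes are made up, not real allocations):

```python
import ipaddress

# A provider aggregate and a single-host route "leaked" from inside it.
aggregate = ipaddress.ip_network("203.0.113.0/24")
leaked = ipaddress.ip_network("203.0.113.7/32")

# The /32 falls inside the aggregate...
print(leaked.subnet_of(aggregate))  # True

# ...but forwarding uses the longest (most specific) matching prefix,
# so a router holding both routes sends this host's traffic via the /32.
destination = ipaddress.ip_address("203.0.113.7")
routes = [aggregate, leaked]
best = max((r for r in routes if destination in r),
           key=lambda r: r.prefixlen)
print(best)  # 203.0.113.7/32
```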
Perhaps that's because people are afraid that there is so
much other interesting content out there that disconnectivity
of any sort from their customer base would be fatal, in that
users would find alternatives fairly quickly.
"Click here." <click> (time passes) "Error." This happens
so frequently and for so many reasons that friends of mine
who are somewhat more typical dialup users would simply
ignore the link or site and move on to something else.[*]
There certainly is a lot of competition on the content front.
Can a big web site afford to have people thinking, "stoopid
Netscape, their server is down"? Or would they be crafty
and say, "all our information has now moved to _a new site_;
if you get a time-out error then you should _send email to
your Internet Service Provider_ telling them so"?
[*] I'm different. I want to know why it doesn't work. I
gather that doing traceroutes and looking at routers
is atypical behaviour.