Chanukah [was Re: Hezbollah]

Kent W. England kwe at
Tue Sep 16 21:09:46 UTC 1997

At 05:03 PM 14-09-97 -0400, Dorian R. Kim wrote:
>... One of the
>things that needs to be engineered into building and maintaining
>national/international backbones is traffic accounting to an arbitrary
>granularity that paves the way for better traffic engineering and
>bandwidth projections. There are already ample tools to to per-prefix
>matrix of traffic right now. Tying this in with good sales projections
>will alleviate much of the last minute fire fighting.
>This will most likely never be 100% accurate and precise, but there is
>no reason why we can't get a better handle on bandwidth forecasts (say,
>to the 95th percentile).
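For reference, the 95th-percentile figure mentioned above is conventionally computed over 5-minute traffic samples: sort the samples, discard the top 5%, and take the highest remaining value, so short bursts don't dominate the forecast. A minimal sketch (the sample numbers are invented for illustration):

```python
def percentile_95(samples_mbps):
    """Return the 95th-percentile value of a list of traffic samples.

    Sorts the samples and discards the top 5%, so brief bursts
    do not dominate the estimate.
    """
    ordered = sorted(samples_mbps)
    idx = int(0.95 * len(ordered)) - 1  # last sample kept after dropping top 5%
    return ordered[max(idx, 0)]

# Twenty 5-minute samples in Mbps; one short burst to 90 Mbps.
samples = [10, 12, 11, 90, 13, 12, 14, 11, 13, 12,
           15, 12, 11, 13, 12, 14, 12, 13, 11, 12]
print(percentile_95(samples))  # -> 15; the 90 Mbps burst is discarded
```

The point of the percentile (rather than the peak) is exactly the one made above: you plan for sustained demand, not for the worst five minutes.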


I don't want to throw cold water on the value of planning and foresight,
but predicting traffic patterns has never worked on the Internet. It
sounds good, and it was the argument that all the mainframe
networkers made to us early Internet networkers -- Why can't you tell me
upfront what your bandwidth requirements are going to be? Don't you know
exactly how many terminals you have and where they are and what application
keystrokes are going to be pressed at any given time? How else can you
guarantee response time in your network? This Internet stuff is stupid.
It'll never work.

Given the way that HTTP/HTML caught fire and Internet-CB (aka VocalTec
and CUSeeMe) took off, I would be loath to think I could project my
backbone needs with any reliability based on *historical* projections.

>Furthermore, with the deployment of WDM and Internet core devices moving
>closer to the transmission gear, if you have access to fiber, getting more
>bandwidth may become as straightforward as using an additional wavelength
>on the ADM that your router's plugged into.

This I like a lot better as a design technique. Throwing more bandwidth at
the problem almost always works (unless the transport protocol is broken).
Like Peter Kline said: "Turn up the speed dial upon onset of congestion."
Simple. Effective.
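That speed-dial rule can be sketched as a trivial provisioning loop: when sustained utilization crosses a threshold, light another wavelength. The capacity figure and the 80% trigger here are invented for illustration, not anyone's actual policy:

```python
WAVELENGTH_MBPS = 2488      # assumed capacity of one lambda (one OC-48)
UPGRADE_THRESHOLD = 0.80    # assumed trigger: add capacity at 80% utilization

def wavelengths_needed(peak_mbps, current_lambdas):
    """Apply the step-up rule: add lambdas until peak demand sits
    below the utilization threshold."""
    capacity = current_lambdas * WAVELENGTH_MBPS
    while peak_mbps > UPGRADE_THRESHOLD * capacity:
        current_lambdas += 1
        capacity = current_lambdas * WAVELENGTH_MBPS
    return current_lambdas

print(wavelengths_needed(4200, 1))  # -> 3 lambdas for a 4200 Mbps peak
```

The appeal of the design is that the decision needs only a utilization number, not a demand forecast, which is exactly why it sidesteps the projection problem discussed above.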

Then again, creating a data architecture for the web (a problem that has
been recognized, but not addressed, over the last five years) would eliminate
much of the backbone bandwidth demand. What would happen if -- presto -- a
data architecture for the web showed up one day? A lot of backbone
bandwidth would become surplus and a lot more edge bandwidth would be
needed ASAP. What does that do to historical projections?
