NANOG

Bilal Chinoy bac at serendip.sdsc.edu
Wed Apr 3 21:30:15 UTC 1996


Actually, router hops are a problem when the packet
serialization times (your P/C) are on the order of the
wire propagation times.

So, for 1.5 Mbps and approx. 1.5 Kbit packets,
		P/C is 1 msec;
	for 45 Mbps,
		P/C is 33 usec.

You can see that routers on T1-and-below hops introduce
a (minimum) latency on the order of the prop delay.
T3 and beyond should not really be a problem.
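
If you want to check the arithmetic, here is a quick back-of-envelope
sketch (plain Python; the link speeds and the 1.5 Kbit packet size are
just the round numbers used above, not measurements from any box):

    # Per-hop serialization delay P/C -- illustrative numbers only.
    PACKET_BITS = 1500.0                        # approx. 1.5 Kbit packet

    for name, bps in [("Ethernet (10 Mbps)", 10e6),
                      ("T1 (1.5 Mbps)", 1.5e6),
                      ("DS3 (45 Mbps)", 45e6)]:
        delay_us = PACKET_BITS / bps * 1e6      # seconds -> usec
        print("%-18s P/C = %7.1f usec/hop" % (name, delay_us))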

So,

Vadim, you are right in the context of the Sprintlink backbone
and other DS3-and-higher backbones. Of course, most large
continental NSPs have DS3 backbones. Also, these networks
are relatively small (in number of hops).

However, a lot of leaf connectivity is not at DS3, and
the more such hops you have in your path, the higher
your path delay becomes. (A separate discussion is 
whether and how much this additional latency hurts.)
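
To make that concrete, here is the same P/C arithmetic summed over a
made-up mixed path -- the hop counts and link speeds below are purely
hypothetical, just to show how the slow leaf hops dominate N*P/C:

    # Total store-and-forward delay: sum of P/C over all hops (N*P/C).
    PACKET_BITS = 1500.0
    path_bps = [45e6] * 6 + [1.5e6] * 4   # 6 DS3 backbone hops, 4 T1 leaf hops
    total_ms = sum(PACKET_BITS / bps for bps in path_bps) * 1e3
    print("store-and-forward delay: %.1f msec over %d hops"
          % (total_ms, len(path_bps)))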

As an aside, the NSFnet T1 backbone introduced 6 msec.
or so of per-hop latency, limiting each router to
approx. 150-160 pps. Additional NSS hops really hurt us
there!
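
(A quick sanity check on that figure, assuming the 6 msec. was pure
per-packet processing time:)

    # 6 msec of per-packet processing caps the forwarding rate near 167 pps,
    # the same ballpark as the 150-160 pps seen on the NSS routers.
    per_packet_sec = 0.006
    print("max forwarding rate ~ %.0f pps" % (1.0 / per_packet_sec))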

If a router is to switch DS3 at line speed, it has to
process 2*45 Mbps, or 90 Mbps, per adapter. At 1.5 Kbit/packet,
this is approx. 16.7 usec./packet. You can see why on-card
embedded systems came into vogue ...
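
Spelled out (again just a sketch, carrying over the 1.5 Kbit average
packet assumption from above):

    # Per-packet time budget for a full-duplex DS3 adapter.
    LINE_RATE_BPS = 45e6
    DUPLEX_BPS = 2 * LINE_RATE_BPS            # receive + transmit = 90 Mbps
    PACKET_BITS = 1500.0                      # approx. 1.5 Kbit/packet
    budget_us = PACKET_BITS / DUPLEX_BPS * 1e6
    print("per-packet budget: %.1f usec" % budget_us)   # ~16.7 usec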

Cheers,

		-- Bilal

		(who kinda remembers that backbone engineer
		 skin thing - 1987-1992.)
				

> 
> 
> Dear Mr. Antonov,
> 
> Thanks for taking time to enter this little spat about Internet collapses,
> the growing importance of NANOG, and my cluelessness.
> 
> You wrote:
> 
> >I was in a backbone engineer's skin for quite a few years, and "hops"
> >per se never were a problem.  In fact, store-and-forward delays are
> >a mere fraction of wire propagation delays -- do a traceroute coast-to-coast,
> >look at delays and calculate how it relates to distance divided by speed of
> >light.  Indeed, you're the first person concerned with the growth of diameter
> >(which is, BTW, logarithmic to size of the network).
> 
> Perhaps I am confusing terms here.  How can it be a fact that
> "store-and-forward delays are a mere fraction of wire propagation delays?"
> I don't think so.  Check me on this:
> 
> Packets travel over wires at large fractions of the speed of light, but
> then sadly at each hop they must be received, checked, routed, and then
> queued for forwarding.  Do I have that right?
> 
> Forget checking, routing, and queueing (ha!), and you get, I think, that
> store and forward delay is roughly proportional to the number of hops times
> packet length divided by circuit speed (N*P/C).
> 
> For 10 hops of a thousand bit packet at Ethernet speed, that would be 1 ms,
> or a couple hundred miles of prop delay.  Check me on this, one of us might
> be off by several orders of magnitude.
> 
> But at 30 hops of thousand byte packets at T1 speeds, that's, what? 4,000
> miles of prop delay.  A mere fraction?
> 
> OK, maybe soon the entire Internet backbone(s) will be ATM at 622Mbps,
> which would certainly knock some of the wind out of N, P, and C.  Soon?
> 
> But of course, getting back to 1996, N*P/C doesn't count checking, routing,
> and queueing -- queueing gets to be a major multiple with loading.  Oh, I
> forgot retransmission delays too, at each hop.  And I forgot the increasing
> complications of route propagation as hops increase...
> 
> If I am, as you say, the first person to be concerned with the growth of
> Internet diameter, which I doubt, then I deserve a medal.  Or is my
> arithmetic wrong?  Ease my cluelessness.
> 
> /Bob Metcalfe, InfoWorld
> 
> At 6:44 PM 4/2/96, Vadim Antonov wrote:
> >Received: by ccmail from lserver.infoworld.com
> >>From avg at postman.ncube.com
> >X-Envelope-From: avg at postman.ncube.com
> >Received: from postman.ncube.com by lserver.infoworld.com with smtp
> >    (Smail3.1.29.1 #12) id m0u4IvH-000wq4C; Tue, 2 Apr 96 19:07 PST
> >Received: from butler.ncube.com by postman.ncube.com (4.1/SMI-4.1)
> >    id AA19923; Tue, 2 Apr 96 18:42:20 PST
> >Received: from skynet.ncube.com by butler.ncube.com (5.0/SMI-SVR4)
> >    id AA02534; Tue, 2 Apr 1996 18:40:46 +0800
> >Date: Tue, 2 Apr 1996 18:40:46 +0800
> >From: avg at postman.ncube.com (Vadim Antonov)
> >Message-Id: <9604030240.AA02534 at butler.ncube.com>
> >To: bob_metcalfe at infoworld.com, jerry at fc.net
> >Subject: RE: NANOG
> >Cc: letters at infoworld.com, nanog at merit.edu
> >Content-Length: 2913
> >
> >Bob Metcalfe wrote:
> >
> >>Note, I have never predicted "the death of the Internet," only catastrophic
> >>collapse(s) during 1996, which is "a good calibration" of the rest of your
> >>objections (below).
> >
> >One does not need to be Nostradamus to predict that s*t happens.
> >It happened in the past, many times, too.  Like when Sean and I
> >installed a just-baked SSE into a DC box and it looked fine but
> >screwed nearly all connectivity to Europe for a few hours while we were
> >trying to figure out what was going on.  Or when the FIX-E<->ICM-DC
> >Bell Atlantic DS-3 was flapping like mad when the moon was
> >in the wrong phase and BA did nothing to fix it for months.  Or when
> >some sequence of 1s and 0s was triggering some bulls*t alarms in
> >the Sprint fiber network, causing shutdowns on the entire OC-24 trunk.
> >Or when a bogus static route in a Sprint box was causing ANS's
> >version of gated to go bananas and drop BGP sessions.  Or many, many
> >more occasions when "Byzantine-mode failure" becomes an ugly reality in
> >the middle of the night, causing more than a few people to be dragged
> >out of bed.
> >
> >As long as Internet technology is freaking bleeding-edge and operators
> >are in the "code of the day" club, catastrophes are bound to happen.
> >
> >>Jerry, Jerry, Jerry, the problem is not that the Internet's chief 100
> >>engineers, whoever they are, fail to report their problems to me, it's that
> >>they (you?) fail to report them to anybody, including to each other, which
> >>is half our problem.
> >
> >That is simply not true.  The backbone engineering society is tightly
> >knit and quite often backbone engineers are simply personal friends.
> >I certainly never had a problem with people refusing to fix problems
> >within their domains (well, PSI's TWD is not an operational problem).
> >The organization-level coordination is often broken at the operators' level,
> >but that is merely a function of a severe shortage of qualified personnel
> >and inadequate compensation for the high-stress job.
> >
> >>Settlements, "wrong on the face?"  Or are you just too busy busy busy
> >>defensive to argue?
> >
> >Before you talk of settlements, answer the simple question --
> >a packet travelled from provider A to provider B.  Who should pay
> >whom?  Then, please, stop perpetuating nonsense.
> >
> >>So, you say, increasing Internet diameters (hops) are only of concern to
> >>whiners like me?  There are no whiners LIKE me.  I am THE whiner.  And hops
> >>ARE a first class problem, Jerry, or are you clueless about how
> >>store-and-forward packet switching actually really works?
> >
> >I was in a backbone engineer's skin for quite a few years, and "hops"
> >per se never were a problem.  In fact, store-and-forward delays are
> >a mere fraction of wire propagation delays -- do a traceroute coast-to-coast,
> >look at delays and calculate how it relates to distance divided by speed of
> >light.
> >Indeed, you're the first person concerned with the growth of diameter
> >(which is, BTW, logarithmic to size of the network).
> >
> >--vadim
> 
> 
> ______________________________________________
> ______________________________________________
> 
> Dr. Robert M. ("Bob") Metcalfe
> Executive Correspondent, InfoWorld and
> VP Technology, International Data Group
> 
> Internet Messages: bob_metcalfe at infoworld.com
> Voice Messages: 617-534-1215
> 
> Conference Chairman for
> ACM97: The Next 50 Years of Computing
> San Jose Convention Center
> March 1-5, 1997
> ______________________________________________
> ______________________________________________
> 
> 
> 
> 
> 



