ATM (was Re: too many routes)

Richard Irving rirving at onecall.net
Fri Sep 12 01:37:16 UTC 1997


> >    I don't think so.... how about the ability to mix
> >  voice, MPEG, and IP on the same pipe ?
> 
> Um, I do this now with IP.

  Do you? When I say voice, I mean true POTS. You present dialtone over
IP? (It could be done, btw.) POTS over ATM, on the other hand, can exit
right into the DEXX's (DEXX = big telco switch). Let's see you allocate
an ESF B8ZS Clear Channel T1 over IP....
 
> Admittedly with parallel circuits (virtual or hard) I
> could send such traffic down different pipes to partition
> congestion effects, however to do this right I'd really
> want to use MPLS/tag switching anyway.

  Ahhh, tag switching, I am on that particular holy grail as well.....
How many parallel paths have you run at layer 3? Ever watched the
variability? ( * shiver * ) Now, tell me parallel paths over IP are
smooth with today's technology! Audio sounds great with lots of
variability ..... not. However, Stratacom can bond 3 DS3's into 1 OC3,
and you would never know the difference.

> 
> When I make the decision to use MPLS/tag switching I also
> have to consider that there is decent queuing available in
> modern IP routers (that will effectively become hybrid IP
> routers and MPLS/tag switches) and that I can neatly
> partition the traffic without using actual or virtual
> circuits.

   Hold it. No actual or virtual circuits... not even SVC's? ;)
OK, so there is a new name for the flow paths that the tag switches
allocate. What, pray tell, is the new name for these SVC's?

> You want bounded delay on some traffic profiles that
> approach having hard real time requirements.  (Anything
> that has actual hard real time requirements has no
> business being on a statistically multiplexed network, no
> matter what the multiplexing fabric is). 

  Such as voice? Why do you think SDM was created in the first place?
Or do you mean something like a military application, 2 ms to respond to
a nuke.... That is when channel priorities come into play.


> This can be
> implemented in routers now, with or without MPLS/tag
> switching, although having the latter likely makes
> configuration and maintenance easier.
> 
 and troubleshooting infinitely harder ;)
   
   CBR in a "worse case scenario" ATM net, IS TDM, not SDM. NO variance
against TAT allowed.
You might as well say "Real Time" has no business on clear channel T1's.

TDM = Time Division Multiplexing
SDM = Statistical Division Multiplexing.
TAT =  Theoretical Arrival Time. (sort of an ATM-cells time slot, like
in TDM) 
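
For what it's worth, here is a minimal sketch (Python; names and numbers
are purely illustrative, not any vendor's code) of the GCRA
"virtual scheduling" check an ATM switch applies against the TAT. With
the tolerance at ~0, as for strict CBR, a cell that shows up ahead of
its TAT is simply non-conforming:

# Sketch of GCRA virtual-scheduling policing against the TAT.
def gcra_police(arrivals_ms, increment_ms, limit_ms=0.0):
    """increment_ms = expected cell spacing (1/PCR for CBR),
       limit_ms     = allowed tolerance (CDVT); ~0 for strict CBR."""
    tat = None
    verdicts = []
    for t in arrivals_ms:
        if tat is None:
            tat = t + increment_ms           # first cell sets the schedule
            verdicts.append(True)
        elif t < tat - limit_ms:             # cell arrived too early
            verdicts.append(False)           # non-conforming: tag or drop
        else:
            tat = max(t, tat) + increment_ms
            verdicts.append(True)
    return verdicts

# Cells spaced 1 ms apart conform; one that jumps the schedule does not.
print(gcra_police([0.0, 1.0, 2.0, 2.1, 4.0], increment_ms=1.0))
# -> [True, True, True, False, True]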

> and there are little demons with respect to the
> way RMs are handled that can lead to nasty cases where you
> really don't get the bandwidth you ought to.

   There are LOTS of demons hanging in ATM, and IP, and multicast, and
.... However, I have NEVER failed to get the bandwidth "promised" in our
nets. (Knock on wood.) I have tried to configure for more than was
there, and it told me to recalculate and try again.... And, in some
cases when running BEYOND the SCR, I lost the extra BW and received
FECN's...... slowing up the IP. But doesn't that same thing happen when
you over-run the receiving router?????

SCR = Sustained Cell Rate
BW = Bandwidth
FECN = Forward Explicit Congestion Notification
BECN = Backward Explicit Congestion Notification

> 
> The problem again is that mixing TCP and other
> statmux-smart protocols with ABR introduces two parallel
> control loops that have no means of communication other
> than the interaction of varying traffic load, varying
> delay, and lost data.  

   Ahhh.. We await the completion, and proper interaction, of RM, ILMI,
and OAM. These will (and in some cases already DO) provide that
information back to the router/tag switch. Now, do they use it well?????
That is a different story....

RM = Resource Management (cells)
ILMI = Interim Local Management Interface (ATM's link management
interface)
OAM = Operations, Administration, and Maintenance (cells)


> 
> Delay across any fabric of any decent size is largely
> determined by the speed of light.

   Where in the world does this come from in the industry?
Maybe I am wrong, but guys, do the math. The typical run across the
North American continent is timed at about 70 ms. This is NOT being
limited by the speed of light.

 Light can travel around the world about 7 and a half times in 1 second.
This means it can travel once around the world (full trip) in ~135 ms.
Milliseconds, not micro.... So why does one trip across North America
take 70 ms?

186,000 miles a second = 1 mile in 5.38 x 10^-6 seconds (1 mile =
.00000538 seconds).
Now, the North American continent is about a 4000-mile trip.... This is
a VERY ROUGH estimate.

4000 x .00000538 = about .021 of a second, or roughly 21 ms, not 70 ms.
Guess where the rest comes from. Hint: it is not the speed of light.
Time is incurred encoding, decoding, and routing.
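
The same back-of-the-envelope arithmetic as a quick sketch (Python,
figures as rough as the ones above; the ~2/3-of-c speed of light in
fiber is the commonly cited value, not a measurement):

# Rough propagation delay across North America, same numbers as above.
SPEED_OF_LIGHT_MI_PER_S = 186000    # miles per second, in a vacuum
DISTANCE_MI = 4000                  # very rough one-way trip

vacuum_delay_s = DISTANCE_MI / float(SPEED_OF_LIGHT_MI_PER_S)
print("vacuum: %.1f ms" % (vacuum_delay_s * 1000))            # ~21.5 ms

# Light in fiber runs at roughly 2/3 of c, so even then the floor is
# only ~32 ms -- the rest of the 70 ms is encoding, decoding, routing.
print("fiber:  %.1f ms" % (vacuum_delay_s / (2.0 / 3.0) * 1000))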

 BTW, this (70 ms median across the US) comes from a predominantly ATM
network. Actually, I am quoting Pac-Bell.

> Therefore, unless ABR
> is deliberately inducing queueing delays, there is no way
> your delay can be decreased when you send lots of traffic
> unless the ATM people have found a way to accelerate
> photons given enough pressure in the queues.
> 
   More available bandwidth = quicker transmission.

I.e.: at 1000 kb/s available, how long does it take to transmit 1000 kb?
1 second. Now, at 2000 kb/s available, how long does it take? 1/2
second. What were you saying?
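
The same arithmetic as a tiny sketch (Python, purely illustrative
numbers):

# Transmission (serialization) time = amount of data / available rate.
def transmit_time_s(size_kb, rate_kbps):
    return float(size_kb) / rate_kbps

print(transmit_time_s(1000, 1000))   # 1.0 second at 1000 kb/s
print(transmit_time_s(1000, 2000))   # 0.5 second at 2000 kb/s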

PS. ABR CAN induce queue delays, and often will (and in comes QoS), IF
the traffic is flagged as delay tolerant, i.e. ABR by definition......

> >   Oh, 2 things come to mind, my variability throughout an ATM cloud is
> > greatly reduced versus a routing cloud, a cell requires WAY less time to
> > cross a switches backplane, versus a  packet through a router. And
> > seriuosly less time to determine where to send it...
> 
> Um you need to be going very slowly and have huge packets
> for the passage through a backplane to have any meaning
> compared to the centisecon propagation delays observed
> on long distance paths.
> 
   Why do you think you have "centi"-second delays in the first place?
 
   I would check yours, but I find the time for a packet to cross a
router backplane to be < 1 ms; route determination in a traditional
router can take up to 20 ms (or more), and slightly less than 1 ms if it
is in cache. When I said cross a backplane, I meant "from hardware
ingress to egress", i.e. to be delivered.

This delay is incurred for every packet in TRADITIONAL routers!

 It is not so much the path across the backplane as it is the time to
ascertain the destination path. In switches, the route is determined
ONCE for an entire flow. From there on out, it is microseconds. Let me
give you an example....... you.


> traceroute www.clock.org
traceroute to cesium.clock.org (140.174.97.8), 30 hops max, 40 byte packets
 1  OCCIndy-0C3-Ether-OCC.my.net (-.7.18.3)  4 ms  10 ms  10 ms
 2  core0-a0-14-ds3.chi1.mytransit.net (-.227.0.173)  16 ms  9 ms  10 ms
 3  core0-a3-6.sjc.mytransit.net (-.112.247.145)  59 ms  58 ms  58 ms
 4  mae-west.yourtransit.net (-.32.136.36)  60 ms  61 ms  60 ms
 5  core1-hssi2-0.san-francisco.yourtransit.net (-.174.60.1)  75 ms  71 ms  76 ms

>>>>>>>>>>>>>>>>>>>>>>>>>

 6  core2-fddi3-0.san-francisco.yourtransit.net (-.174.56.2)  567 ms  154 ms  292 ms
  
>>>>>>>>>>>>>>   Tell me this is a speed of light issue. 
>>>>>>>>>>>>>>   From the FDDI to the HSSI on the same router.

 7  gw-t1.toad.com (-.174.202.2)  108 ms  85 ms  83 ms
 8  toad-wave-eth.toad.com (-.174.2.184)  79 ms  82 ms  74 ms
 9  zen-wave.toad.com (-.14.61.19)  84 ms  99 ms  75 ms
10  cesium.clock.org (140.174.97.8)  76 ms  83 ms  80 ms
cerebus.my.net> ping www.clock.org

PING cesium.clock.org (140.174.97.8): 56 data bytes
64 bytes from 140.174.97.8: icmp_seq=0 ttl=243 time=93 ms
64 bytes from 140.174.97.8: icmp_seq=1 ttl=243 time=78 ms
64 bytes from 140.174.97.8: icmp_seq=2 ttl=243 time=79 ms
64 bytes from 140.174.97.8: icmp_seq=3 ttl=243 time=131 ms
64 bytes from 140.174.97.8: icmp_seq=4 ttl=243 time=78 ms
64 bytes from 140.174.97.8: icmp_seq=5 ttl=243 time=81 ms
64 bytes from 140.174.97.8: icmp_seq=6 ttl=243 time=75 ms
64 bytes from 140.174.97.8: icmp_seq=7 ttl=243 time=93 ms

Nice and stable, huh? If this path were ATM switched (Dorian, I will
respond to you in another post), it would have settled to a stable
latency.
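
For the record, the spread in those ping times, as a quick sketch
(Python; the RTTs are just the eight values above):

# Variability of the RTT samples from the ping output above.
rtts_ms = [93, 78, 79, 131, 78, 81, 75, 93]
mean = sum(rtts_ms) / float(len(rtts_ms))
print("mean   : %.1f ms" % mean)                           # 88.5 ms
print("spread : %d ms" % (max(rtts_ms) - min(rtts_ms)))    # 56 ms peak-to-peak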

> The analogy between VBR and flow switching confuses me.
> Could you explain this a bit?   Actually, maybe you could
> explain the rest of the message too, because I think we
> have a disconnect in terms of vocabulary. :(

  Flow switching does a route determination once per flow; after that,
the packets are switched down a predetermined path, "the flow". Hence
the term "flow switching". This reduces the variability of the entire
flow. Great for Voice over IP, etc. I should note that the initial
variability, while ascertaining the flow, is increased, but not by as
much as is incurred by routing over the course of the entire flow.
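
A minimal sketch of the idea (Python, with hypothetical names; a real
flow switch does this in hardware, so the details below are
illustrative, not any vendor's implementation):

# Flow switching in miniature: the expensive route determination runs
# once per flow, and every later packet in the flow hits the cache.
route_cache = {}

def full_route_lookup(dst):
    # Stand-in for the costly per-packet routing-table walk a
    # traditional router performs on every packet (milliseconds).
    return "egress-for-" + dst

def forward(packet):
    flow = (packet["src"], packet["dst"], packet["proto"],
            packet["sport"], packet["dport"])
    egress = route_cache.get(flow)
    if egress is None:
        egress = full_route_lookup(packet["dst"])   # done ONCE per flow
        route_cache[flow] = egress
    return egress                                   # later packets: microseconds

pkt = {"src": "10.0.0.1", "dst": "140.174.97.8",
       "proto": "udp", "sport": 5004, "dport": 5004}
forward(pkt)   # first packet of the flow: full lookup
forward(pkt)   # the rest of the flow: cache hit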

  However, I should also point out that much of your argument is based
on TCP. Most multimedia (voice/audio/video) content does not ride on
TCP, but on UDP/multicast. What does your slow-start algorithm get you
then?


> 
>         Sean.

 PS: MAC-layer switching and ATM switching are apples and oranges,
although one could be used to do the other.
(Told you, Dorian.)


