ATM (was Re: too many routes)

Sean M. Doran smd at
Thu Sep 11 22:53:20 UTC 1997

Richard Irving <rirving at> writes:

> Ok. I will bite, although I hate to open my mouth, as my shoe always
> seems to bee-line for it.. ;}


>    I don't think so.... how about the ability to mix
>  voice, MPEG, and IP on the same pipe ?

Um, I do this now with IP.  

Admittedly with parallel circuits (virtual or hard) I
could send such traffic down different pipes to partition
congestion effects, however to do this right I'd really
want to use MPLS/tag switching anyway.

When I make the decision to use MPLS/tag switching I also
have to consider that there is decent queuing available in
modern IP routers (that will effectively become hybrid IP
routers and MPLS/tag switches) and that I can neatly
partition the traffic without using actual or virtual circuits.

> Or, how about that with ABR my delay across the ATM
> fabric is reduced when I have more bandwidth open. (POTS
> is low on utilization, during this "theoretical moment
> in time") A couple milliseconds and a few extra Mbs can
> count ;)

You want bounded delay on some traffic profiles that
approach having hard real time requirements.  (Anything
that has actual hard real time requirements has no
business being on a statistically multiplexed network, no
matter what the multiplexing fabric is).  This can be
implemented in routers now, with or without MPLS/tag
switching, although having the latter likely makes
configuration and maintenance easier.

ABR is another attempt to do statistical multiplexing over a
substrate that is not well geared to anything other than
TDM.  It interacts poorly with any protocol that is
developed to run over a statistically-multiplexed network
(e.g. TCP), and there are little demons in the way RM
(resource management) cells are handled that can lead to
nasty cases where you really don't get the bandwidth you
ought to.

The problem again is that mixing TCP and other
statmux-smart protocols with ABR introduces two parallel
control loops that have no means of communication other
than the interaction of varying traffic load, varying
delay, and lost data.  This often causes the correctly
designed additive-increase/multiplicative-decrease traffic
rate response to available bandwidth to produce a
stair-step or oscillation as more bandwidth becomes
available to each control loop, and a rather serious
backing off at the higher layer when available bandwidth
is decreased even fairly gently.
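The stair-step behavior those stacked control loops produce can be sketched with a toy AIMD simulation (the parameters and function here are illustrative only, not those of any real TCP implementation):

```python
# Toy additive-increase/multiplicative-decrease (AIMD) rate control.
# Shows the stair-step ramp toward available bandwidth and the sharp
# multiplicative back-off once the loop overshoots and sees congestion.
def aimd(available_bw, steps, increase=1.0, decrease=0.5):
    rate = 1.0
    history = []
    for _ in range(steps):
        if rate > available_bw:   # congestion signal (loss/delay)
            rate *= decrease      # multiplicative decrease
        else:
            rate += increase      # additive increase
        history.append(rate)
    return history

rates = aimd(available_bw=10.0, steps=30)
```

Plotting `rates` gives the familiar sawtooth; stacking a second such loop underneath (as ABR does) means each sawtooth perturbs the other's congestion signal.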

Delay across any fabric of any decent size is largely
determined by the speed of light.  Therefore, unless ABR
is deliberately inducing queueing delays, there is no way
your delay can be decreased when you send lots of traffic
unless the ATM people have found a way to accelerate
photons given enough pressure in the queues.
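As a rough check on the speed-of-light point, assuming fiber propagation at about two-thirds of c over a roughly 4,000 km long-haul path (illustrative figures, not a specific circuit):

```python
# Back-of-the-envelope one-way propagation delay.
C_FIBER = 2.0e8        # approx. speed of light in fiber, m/s (~2/3 c)
path_m = 4_000_000     # a ~4,000 km long-haul path
one_way_ms = path_m / C_FIBER * 1000
print(one_way_ms)      # tens of milliseconds, fixed by physics,
                       # regardless of how clever the switching fabric is
```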

>   Oh, 2 things come to mind, my variability throughout an ATM cloud is
> greatly reduced versus a routing cloud, a cell requires WAY less time to
> cross a switch's backplane versus a packet through a router. And
> seriously less time to determine where to send it...

Um, you need to be going very slowly and have huge packets
for the passage through a backplane to have any meaning
compared to the centisecond propagation delays observed
on long-distance paths.
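A quick comparison makes the scale difference concrete (assuming an OC-3-class 155 Mb/s link; the figures are illustrative):

```python
# Serialization time for an IP packet vs an ATM cell on a 155 Mb/s link,
# set against ~10 ms (one "centisecond") of long-haul propagation delay.
LINK_BPS = 155_000_000
packet_us = 1500 * 8 / LINK_BPS * 1e6   # 1500-byte IP packet, microseconds
cell_us = 53 * 8 / LINK_BPS * 1e6       # 53-byte ATM cell, microseconds
# Both are well under 100 microseconds -- two to three orders of
# magnitude below the 10,000 us of propagation delay on a long path.
print(packet_us, cell_us)
```

Whatever the cell's edge over the packet in per-hop forwarding time, it is lost in the noise of the path's propagation delay.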

I know of no modern router that delays packets for
anything approaching a handful of microseconds on fast interfaces
except in the presence of congested outbound queues,
where if you're running TCP you really want to induce
delay anyway, so that the transmitter will slow down.
>    Ok. So, maybe Cisco's Flow Switching approaches VBR having a bad hair
> day. (and tuned for SERIOUS tolerance, CDVT=10,000), but certainly not
> traditional routing. 

The analogy between VBR and flow switching confuses me.
Could you explain this a bit?   Actually, maybe you could
explain the rest of the message too, because I think we
have a disconnect in terms of vocabulary. :(

