ATM Wide-Area Networks (was: sell shell accounts?)
smd at chops.icp.net
Tue Jul 23 23:05:59 UTC 1996
| Back to real world considerations. Which scales better depends on
| things like average packet size for the router limits and average
| connection duration for the switch limits.
Er, um, this is really highly dependent on the stability
of the routing system. If, in fact, you have very very
stable routes and a fast ATM fabric with not-so-crunchy
packet-forwarders then one could argue that it may make
sense to avoid creating traffic hot-spots in the routers
by flattening the network into a full mesh.
This still leaves two problems: your packet-forwarder may
end up unable to select among a large number of VCs at
line speed, and routing in a very large cloud tends to
scale very badly. That is, if a router is limited by the
number of forwarding decisions per second per next-hop,
then having many VCs to select from will reduce rather
than increase the useful bandwidth available to flows
passing through it, compared to funnelling traffic through
only a very small set of VCs or physical circuits.
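A back-of-the-envelope sketch of that effect, with made-up numbers. One way to read the per-next-hop limit is that the cost of each forwarding decision grows with the number of candidate VCs; the linear cost model below is a deliberately pessimistic assumption, not a measurement of any real router:

```python
# Hypothetical model: per-packet forwarding cost grows with the size
# of the next-hop (VC) set, as in a naive linear next-hop selection.
# Both constants below are assumptions for illustration only.
AVG_PACKET_BITS = 300 * 8         # assumed average packet size
BASE_DECISIONS_PER_SEC = 500_000  # assumed rate with one next-hop

def throughput_bps(num_vcs):
    # Sustainable decision rate falls as the mesh gets flatter, so
    # useful bandwidth drops even though total VC capacity rises.
    decisions_per_sec = BASE_DECISIONS_PER_SEC / num_vcs
    return decisions_per_sec * AVG_PACKET_BITS

for n in (2, 16, 128):
    print(n, round(throughput_bps(n) / 1e6, 2), "Mbit/s")
```

Under these assumptions, going from 2 VCs to 128 VCs cuts the usable throughput by a factor of 64, which is the counter-intuitive outcome described above.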
That is to say, the trade-off between traffic funneling
and building VCs is tricky; it may be counter-intuitive,
and it may not work in one's favour, depending on how
one's IP router is built.
Moreover, as you add instability into the system, whether
external to the ATM-using network or internal to it, your
ability to converge decreases in proportion to the amount
of meshing in a network. This is an IP effect rather than
a flaw in any particular router or routing-protocol
design, although such flaws will worsen the problem considerably.
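To see why meshing hurts convergence, consider how the number of pairwise adjacencies (VCs) grows in a full mesh; this is simple arithmetic, not a property of any particular router:

```python
# A full mesh of n routers needs n*(n-1)/2 pairwise adjacencies,
# and each router holds n-1 of them, so a single flap can ripple
# through every pairwise relationship in the cloud.
def full_mesh_adjacencies(n):
    return n * (n - 1) // 2

for n in (10, 50, 200):
    print(n, full_mesh_adjacencies(n))
# 10 -> 45, 50 -> 1225, 200 -> 19900
```

The quadratic growth is the "IP effect": the routing system's work per instability event scales with the meshing, independent of implementation quality.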
So, if one identifies routing instability as a key problem
in scaling the Internet, then goes off and builds a large,
fully-meshed network out of boxes known to have some
difficulties in managing routing instability in very
simple topologies, one has to question his or her sanity.
| As a result, hybrid approaches start to look attractive,
| using routers on the periphery and building fat pipes
| through the switches.
"The switches" has a very broad range of interpretations.
Essentially what one is doing here and in other schemes --
two of which I rather like, btw -- is exchanging a
prefix/mask and possibly other information for a value or
set of values that will be used to make switching
decisions through a set of routers.
One can do this with ATM. One can do this with FR. One
can do this with FDDI. One can do this with ethernet.
One can make up one's own tag-based switching scheme, too,
optimize it for Internet traffic, and maybe learn from
some of the things that other switching schemes got right,
and avoid the things they got wrong.
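A minimal sketch of the prefix-for-tag exchange described above: the edge does one longest-prefix match and stamps a tag, and interior elements then forward on a cheap exact-match tag lookup. The tables, tag values, and port names are all made up for illustration, not any vendor's scheme:

```python
import ipaddress

# Edge: prefix -> tag bindings (assumed to be learned from routing).
prefix_to_tag = {
    ipaddress.ip_network("10.0.0.0/8"): 17,
    ipaddress.ip_network("10.1.0.0/16"): 42,  # more-specific route
}

def edge_assign_tag(dst):
    """One longest-prefix match at the edge picks the tag."""
    addr = ipaddress.ip_address(dst)
    best = None
    for net, tag in prefix_to_tag.items():
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, tag)
    return best[1] if best else None

# Core: exact-match table, incoming tag -> (outgoing port, outgoing tag).
core_table = {17: ("port3", 17), 42: ("port7", 99)}

def core_switch(tag):
    """Interior hop: no prefix match, just an exact tag lookup."""
    return core_table[tag]

print(edge_assign_tag("10.1.2.3"))  # more-specific /16 wins
print(core_switch(42))
```

The point is that the expensive prefix/mask work happens once at the edge; everything inside the cloud -- whether ATM, FR, FDDI, or ethernet framing -- only needs the exact-match step.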
| The state of the art of ISP needs (where the barrage of tinygrams and
| very short flows is felt full force) is PVC pipes between routers to
| offload the routers a little bit.
Hm. Well, what the state-of-the-art ISP needs is more of
a tag-based scheme, whatever the variety of tag, but
ideally one that's well-designed for the type of traffic a
state-of-the-art ISP is likely to see. Or at least,
that's the fast-silicon school of IP router design.
There are at least two other schools of router design, one
of which has been talked about by Vadim Antonov (use a
hypercube computer and, oh, by the way, use some of the
spare CPUs to do things like web caching, news
distribution, accounting, video-on-demand, playing games
and so forth), and the other of which may or may not have
been discussed here by its proponents.
The first school is of interest because it is an obvious
evolutionary step, and has been partially implemented by
people throwing routers around FR switches and ATM
switches and FDDI switches and the like.
The second school is interesting because it has the appeal
of added functionality and relatively easy scaling up to
massive aggregate bandwidths, although one may have to
couple that with many wires between the box in question
and telco transmission MUXes or circuit-termination gear.
The third school is just interesting, 'cause it's weird. :-)
I am not quite sure whether I would count Craig
Partridge's work as being a fourth school, actually,
but what the heck.
| For all the investment made in custom silicon for ATM it
| may turn out that a general purpose processor and a good
| router design (such as the DEC Alpha used in the BBN
| router) will take us to OC12.
Yes, precisely. The drawback of the first school of
making IP go fast is that it takes a while and is
expensive to build fast silicon.
Using readily-available parts to build a router has an
attraction for that reason, but the drawback there is
getting the software right. This is true both of the
second school and of Partridge's design, and it is also a
drawback in any design where there is a temptation to do
caching to squeeze more speed out because of a time lag in
acquiring new parts.
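A hedged sketch of why that caching temptation is risky: a hit skips the full lookup, but correctness requires flushing the cache on every routing change, so under instability -- or under a barrage of short flows that never re-hit -- the cache buys little. All names here are illustrative:

```python
# Illustrative destination cache, not any real router's design.
cache = {}

def full_lookup(dst, table):
    # Stand-in for an expensive longest-prefix match; keyed on the
    # first octet purely to keep the sketch short.
    return table.get(dst.split(".")[0], "default")

def lookup(dst, table):
    if dst not in cache:
        cache[dst] = full_lookup(dst, table)  # slow path on a miss
    return cache[dst]

def routing_change():
    # Any route churn must invalidate cached results, which is
    # exactly when the slow path gets hammered hardest.
    cache.clear()
```

The software-engineering hazard is all in the invalidation step: get it wrong and the router forwards on stale state; get it right and the cache helps least when the network is least stable.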
| Hope it's been amusing. I'd say the jury is still out on
| this one. :-)
Well, I wouldn't go that far.
I will admit that ATM hasn't been made fully irrelevant
yet, although some people do have that as a goal.