Is anyone actually USING IP QoS?

Andrew Odlyzko amo at research.att.com
Wed May 19 13:57:15 UTC 1999


Before throwing in my two cents' worth of comments, let me state that
I used to believe in the need for QoS and usage-sensitive pricing
(which go hand in hand).  However, extensive studies of the economics
of data networks and usage patterns have changed my mind, at least
when it comes to the backbones of the Internet.

The exchanges of Hank Nussbacher and Steve Riley have brought up the
key role that prices of transmission capacity play.  However, some
crucial issues have not been discussed.  I expect that (i) prices will
decrease, and that (ii) networks will be run at low utilization
levels, making QoS unnecessary.  However, (ii) does not follow from
(i) directly.  After all, how do we know that all those big new pipes
won't get filled as soon as they are put into service?  It has been
pretty solidly established (see [1], for example) that on average
people spend a constant amount of time traveling.  You build a
freeway, and now instead of driving 30 minutes over 10 miles of
congested city streets, people decide they will buy a bigger house
with more greenery in the suburbs and drive 30 minutes over 25 miles
of freeway.  In the last 30 years, the average commute time in the
U.S. has not changed appreciably, but the distance traveled has gone
up.  Complaints about road congestion have not gotten less, though, as
far as I can tell.  It is not a priori inconceivable that this same
phenomenon will operate on the Internet as well.

My argument for why (ii) will follow from (i) is based on studies of
what people actually do.  (The data and arguments are presented in
detail in [2].)  In particular, on expensive trans-oceanic links, even
corporate private lines are congested.  On the other hand, in the
U.S., private lines are in most cases run at low fractions of their
capacity.  University connections to the Internet vary all over the
lot in their utilization in the U.S., while in other countries they
tend to be congested.  What that suggests is that what people value is
low transaction latency ("I want that Web page on my screen NOW," or
"I want my database query to be processed NOW"), and not lots of bits.
When prices are high, they put up with lousy quality, but when prices
are lower, at the level of U.S. domestic prices, they opt for the
quality they really want.  That is what makes me think that the
Internet is not like freeways, and that indeed we will have uniformly
high quality IP transport without most of the QoS measures that are
being developed.

The Internet evolving to avoid QoS should not be too surprising.  The
computer industry was cited in these discussions by Steve Riley and
others.  It is an instructive example.  There were all sorts of
prioritization and pricing schemes for mainframes in computer centers.
(One of the most interesting ones was due to two of my former
colleagues [3].  It not only had some nice theoretical optimality
properties, but was actually implemented at the Murray Hill Computer
Center of AT&T Bell Labs, and worked very well to produce essentially
full utilization of computing resources.)  However, those guys aren't
famous.  The reason is that we don't have computer centers any more.
They got displaced by PCs, which are run at ludicrously low fractions
of their capacity.  This "waste" is tolerated because people want the
peak power of their 500 MHz Pentium III to bring up their PowerPoint
presentations, etc.  The key issue is the utility that people get from
a resource (CPU power, transmission capacity, etc.)  and their
willingness to pay for it.  The evidence from data networks is that it
won't take much of a decrease in transmission prices to produce lower
utilization rates and better quality.

Another argument I make against QoS is that it pretty much requires
usage-sensitive pricing.  However, that is something that people
dislike, and are willing to pay quite a bit to avoid.  (See [4] for
discussion, models, and references.)

Finally, while I am skeptical of QoS, I should emphasize that I do see
important roles for some forms of it in certain situations.  One is in
access networks, especially in the wireless area.  It appears that
there will continue to be a huge mismatch between the bandwidth
available on fiber and over the air.  Hence wireless bandwidth will be
a (relatively) scarce resource, and so it probably makes sense to
ration it through QoS measures.  The other role that I see for QoS is
in the core of the network, but in forms that do not require any
involvement on the part of the people at the edges of the network
(both end users and administrators).  Nothing is ever free, and while
prices of transmission capacity are likely to plunge, total spending
is likely to go up.  That is what happened with computers, printers,
etc.  Thus there will still be an incentive to economize, but it will
have to be done in ways that do not burden too many people.  (Hence
RED should be OK, since its operations are invisible to end users,
but asking those folks to prioritize their packets is not likely to
fly.)  Again, we see that in other areas.  Most folks could save at
least half of their disk space by compressing with any of the widely
available packages, but hardly any do.  On the other hand, we do have
many highly trained experts working on improvements to MPEG
compression algorithms.
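
To make the RED remark above concrete, here is a rough sketch (in
Python, with made-up parameter values, and omitting refinements such
as the count-based spacing of drops in the actual algorithm of Floyd
and Jacobson) of the kind of mechanism I have in mind.  The point is
simply that the router decides what to drop from its own average
queue length, so neither end users nor administrators ever have to
mark or prioritize anything:

    import random

    # Illustrative parameters only, not recommended settings.
    MIN_TH, MAX_TH = 5, 15   # queue-length thresholds (packets)
    MAX_P = 0.1              # maximum early-drop probability
    WEIGHT = 0.002           # weight of the moving average

    avg_queue = 0.0          # exponentially weighted average queue length

    def red_should_drop(current_queue_len):
        """Decide whether an arriving packet should be dropped."""
        global avg_queue
        # Update the moving average of the queue length.
        avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
        if avg_queue < MIN_TH:
            return False     # short queue: always accept
        if avg_queue >= MAX_TH:
            return True      # long queue: always drop
        # In between: drop with probability rising linearly up to MAX_P.
        p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p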


References:

[1] A. Schafer and D. Victor, The past and future of global mobility,
Scientific American, Oct. 1997.  Available at
http://www.sciam.com/1097issue/1097schafer.html.

[2] A. M. Odlyzko, The economics of the Internet: Utility, utilization,
pricing, and Quality of Service.  Available at
http://www.research.att.com/~amo/doc/networks.html.

[3] W. A. Gale and R. Koenker, Pricing interactive computer services,
Computer Journal, vol. 27, no. 1 (1984), pp. 8-17.

[4] P. C. Fishburn, A. M. Odlyzko, and R. C. Siders, Fixed fee versus
unit pricing for information goods: Competition, equilibria, and price
wars, First Monday, vol. 2, no. 7 (July 1997),
http://www.firstmonday.dk/.  Also available at
http://www.research.att.com/~amo/doc/eworld.html.



Addendum:  Current high prices are already less of a problem than many
folks claim.  Hank Nussbacher cites as the extreme example of T3
pricing the $400K/month that a Tokyo-LA link costs.  Now a T3 running
at full capacity will transmit just about 15 TB (terabytes) in each
direction per month.  Let's suppose that we run the LA to Tokyo
direction at 53% of capacity, so we will transport 8 TB. The quality
won't be the greatest, but many carriers tolerate even higher
utilizations on such high-cost links.  That will give us a cost
(ignoring traffic in the other direction) of $0.05 per MB. Now in the
U.S., with the flat-rate modem access to the Internet at $20/month
(and no local call fees, which constrain usage in other countries),
the average customer downloads about 60 MB per month.  (One way to
derive that type of estimate is to take the average 55 minutes per day
that AOL folks have been clocking recently, according to an AOL press
release, and factor in an average data transfer rate of about 5 Kbps,
about 20% of the maximal 28.8 Kbps modem rate.)  Hence even if the Tokyo
residential modem customers behaved like AOL ones here in the U.S.,
and all their Web surfing was in the U.S., the cost of transporting
their traffic across the Pacific would still be only $3/month.
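
For anyone who wants to check the arithmetic above, here it is
spelled out (a rough Python sketch using the approximate figures
quoted in the text):

    # Trans-Pacific T3 cost per megabyte.
    T3_BPS = 44.736e6                        # T3 line rate, bits/second
    SECONDS_PER_MONTH = 30 * 24 * 3600

    full_tb = T3_BPS * SECONDS_PER_MONTH / 8 / 1e12
    # about 14.5 TB/month at 100% utilization ("just about 15 TB")

    carried_mb = 0.53 * T3_BPS * SECONDS_PER_MONTH / 8 / 1e6
    cost_per_mb = 400000.0 / carried_mb      # $400K/month for the link
    # roughly $0.05 per MB at 53% utilization

    # An AOL-like user: 55 minutes/day at an average of 5 Kbps.
    user_mb_per_month = 30 * 55 * 60 * 5000 / 8 / 1e6
    # roughly 60 MB per month

    print(round(cost_per_mb * user_mb_per_month, 2))
    # about $3/month of trans-Pacific transport per such user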


************************************************************************
Andrew Odlyzko                                      amo at research.att.com
AT&T Labs - Research                                voice:  973-360-8410 
http://www.research.att.com/~amo                    fax:    973-360-8178
************************************************************************




