Questions about Internet Packet Losses
avg at pluris.com
Tue Jan 14 01:13:17 UTC 1997
Bob Metcalfe <metcalfe at infoworld.com> wrote:
>Hello, and best wishes for what's left of 1997. Now, if you would, ...
Same to you :)
>Is Merit's packet loss data (NetNow) credible?
As always, the packet loss data doesn't necessarily measure the loss seen by real traffic, so treat it as indicative rather than authoritative.
> Do packet losses in the
>Internet now average between 2% and 4% daily?
The congestion is definitely not distributed evenly. There are
hot spots, and to my knowledge there has been no significant research
on how those hot spots are distributed, what percentage of paths
is affected, etc.
>Is there any evidence that Internet packet losses are
>trending up or down?
Up. On-going backbone upgrades are simply not sufficient to keep up.
>Were Merit's data correct, what would be the impact of 30% packet losses on
>opening up TCP connections?
Close to none.
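A back-of-the-envelope sketch of why the impact is small (my own arithmetic, not from the post): with an independent per-packet loss rate p, a handshake attempt succeeds only if both the SYN and the SYN-ACK get through, and retries make the expected number of attempts modest.

```python
# Hedged sketch with assumed numbers: per-packet loss rate p, losses
# independent.  One three-way-handshake attempt succeeds only if both
# the SYN and the SYN-ACK are delivered.
p = 0.30                    # assumed per-packet loss rate
attempt_ok = (1 - p) ** 2   # SYN delivered AND SYN-ACK delivered

# Attempts-until-success is geometric, so on average:
expected_attempts = 1 / attempt_ok

print(f"P(one attempt succeeds) = {attempt_ok:.2f}")         # 0.49
print(f"expected attempts       = {expected_attempts:.2f}")  # ~2.04
```

So even at 30% loss, roughly two SYN attempts suffice on average; connection setup gets slower, not broken.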
>On TCP throughput, say through a 28.8Kbps
>modem?
TCP throughput generally goes down the drain with that kind of packet loss.
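One way to put numbers on "down the drain" (my illustration, using the Mathis et al. approximation from the same era, not something in the original post): steady-state TCP throughput is roughly bounded by MSS / (RTT * sqrt(p)). The formula assumes losses are recovered by fast retransmit, so at 30% loss it is an optimistic ceiling, not a prediction.

```python
import math

# Rough TCP throughput ceiling per the Mathis et al. rule of thumb:
#   throughput <= MSS / (RTT * sqrt(p))
# Valid only for low loss rates; at 30% loss, timeouts dominate and
# real throughput is far below even this bound.
def tcp_ceiling_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate upper bound on TCP throughput, in bits per second."""
    return 8 * mss_bytes / (rtt_s * math.sqrt(loss_rate))

mss, rtt = 536, 0.25   # typical dial-up MSS, quarter-second RTT (assumed)
for p in (0.01, 0.04, 0.30):
    print(f"loss {p:4.0%}: ceiling ~ {tcp_ceiling_bps(mss, rtt, p)/1000:6.1f} kbit/s")
```

Going from 1% to 30% loss cuts the ceiling by a factor of sqrt(30) ≈ 5.5, and that is before counting the retransmission timeouts that dominate at such loss rates.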
>How big a problem is HTTP's opening of so many TCP connections?
HTTP is broken as designed. That "feature" actively destroys the
cooperative congestion control mechanism.
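The unfairness is easy to quantify (my own toy arithmetic, not from the post): TCP's congestion control is fair *per connection*, so a browser that opens N parallel connections grabs roughly N shares of a congested link while a well-behaved single-connection transfer gets one.

```python
# Hedged illustration: assume a congested bottleneck shared equally
# per TCP connection (the idealized fair-share model).
def share_of_bottleneck(my_conns, other_conns):
    """Fraction of the bottleneck my connections get under per-connection fairness."""
    return my_conns / (my_conns + other_conns)

# A browser with 4 parallel HTTP connections vs. one single-connection
# FTP transfer (example numbers):
print(f"browser: {share_of_bottleneck(4, 1):.0%}")  # 80%
print(f"ftp:     {share_of_bottleneck(1, 4):.0%}")  # 20%
```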
>Does TCP need to operate differently than it does now when confronted routinely with
>30% packet losses and quarter-second transit delays?
No. TCP is self-clocking and fairly loss-resistant. This is not to say
there's no room for improvement, but nothing spectacular. For example,
selective ACKs would help to deal with higher loss rates.
0.25-sec delays are routine with satellite communications. TCP
works just fine with that.
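Why the quarter-second delay is survivable (my sketch, with assumed link speeds): TCP just needs a window of at least bandwidth * RTT to keep the pipe full.

```python
# Bandwidth-delay product: the window TCP needs to fill a link.
def window_needed_bytes(link_bps, rtt_s):
    """Minimum window (bytes) to keep a link of link_bps busy at the given RTT."""
    return link_bps * rtt_s / 8

rtt = 0.25  # quarter-second RTT, as in the question
print(window_needed_bytes(28_800, rtt))      # dial-up: 900 bytes -- tiny
print(window_needed_bytes(1_544_000, rtt))   # T1: 48250 bytes -- fits in a 64 KB window
```

Even a T1 at 0.25 s RTT needs under 64 KB of window, which classic TCP can advertise without any extensions.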
>What is the proper
>response of an IP-based protocol, like TCP, as packet losses climb? Try
>harder or back off or what?
That's described in the 1988 article by Van Jacobson, Mike Karels et al.
The answer is -- exponential backoff is sufficient to keep the system stable.
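A minimal sketch of that backoff (my own illustration, with assumed timer values): on each timeout the retransmission timer doubles up to a cap, so a sender's offered load on a congested path decays geometrically instead of aggravating the congestion.

```python
# Binary exponential backoff of the retransmission timeout (RTO).
# The initial RTO and the 64 s cap are assumed example values.
def backoff_schedule(rto_initial_s, max_rto_s, retries):
    """Successive retransmission timeouts under binary exponential backoff."""
    rto, schedule = rto_initial_s, []
    for _ in range(retries):
        schedule.append(rto)
        rto = min(rto * 2, max_rto_s)
    return schedule

print(backoff_schedule(1.0, 64.0, 8))  # doubles each retry, capped at 64 s
```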
>How robust are various widespread TCP/IP
>implementations in the face of 30% packet loss and quarter-second transit
>delays?
There are a lot of broken implementations (mostly in the messy-dossy world).
The 4.3 Reno-derived TCPs (the majority) are fairly robust.
>Is the Internet's sometimes bogging down due mainly to packet losses or
>busy servers or what, or does the Internet not bog down?
The most annoying problem is not packet loss, it's routing stability.
>What fraction of Internet traffic still goes through public exchange points
>and therefore sees these kinds of packet losses? What fraction of Internet
>traffic originates and terminates within a single ISP?
70% or so of packets cross exchange points. That's an empirical figure,
based on the hypothesis that the ratio of content providers to consumers
is about constant for all backbone ISPs.
There's no difference between public and private exchanges in terms of
dealing with congestion, so singling out the _public_ exchanges is a bit misguided.
>Where is the data on packet losses experienced by traffic that does not go
>through public exchange points?
I guess there's no reliable data. In fact, there's no theory on how to
collect that data. For example, average, variance, and other traditional
statistical instruments are useless when you attempt to measure fractal
traffic. You end up with meaningless results -- for example, interarrival
time variance is infinite; average rates over similar intervals may differ
by orders of magnitude even when nothing changed, etc, etc.
In other words -- if anybody pretends to know what's going on, he's bluffing.
>If 30% loss impacts are noticeable, what should be done to eliminate the
>losses or reduce their impacts on Web performance and reliability?
a) put more fibers into the ground
b) build better routers (want to invest?)
c) connect those routers to that fiber.
There's no hard physical limit on the capacity of the network
which can be built with available electronics. There are at least
six orders of magnitude of the growth which can be handled just by
throwing money at the problem.
The QoS, tag switching and other buzzwords of the day are marginal
improvements at best. The only real solution always was and will be
raw capacity -- (a), (b), and (c) above.
>Are packet losses due mainly to transient queue buffer overflows of user
>traffic or to discards by overburdened routing processors or something else?
Mostly buffer overflows.
>What does Merit mean when they say that some of these losses are
>intentional because of settlement issues?
They may be inviting libel lawsuits unless they can
substantiate that claim.
>Are ISPs cooperating
>intelligently in the carriage of Internet traffic, or are ISPs competing
>destructively, to the detriment of them and their customers?
I do not think any of them would kill the hen that lays the golden eggs.
The reality is that backbone ISPs are starting to turn away new
customers. There's always a lot of noise from those who just don't
get TANSTAAFL, though.
>Any help you can offer on these questions would be appreciated.
You're welcome. Particularly if that would help to bring more reality
into press reports.