Internet Backbone Index

Daniel W McRobb dwm at ans.net
Sat Jun 28 19:26:30 UTC 1997


> Jack, this is a really large list, with a lot of backbiters and
> pontificators.  My usual policy is to collect responses for a day, and
> then reply to only one or two, rather than trying to answer each message
> individually.  That would have helped avoid 100 messages on this topic.
> 
> > From: Joe  Shaw <jshaw at insync.net>
> > There are enough marketing reports and half-witted articles and reports
> > out there to confuse the average consumer.  I'd think that you'd be
> > ashamed to be part of them, instead of providing a service that's closer
> > to providing what you intentionally planned to do.
> >
> Actually, I'm a bit ashamed of the NANOG response.
> 
> First of all, however you all might dislike it, our end users'
> perception of performance _is_ based on web download speeds these days.
> And the users don't distinguish network load as opposed to server load.

Absolutely.  However, even just from this perspective, I think the
sample set of servers was:

  1) too small
  2) not what the typical end user sees
  3) premised on a partly naive assumption

Point 1 is mostly obvious, at least to me.  Even as a casual surfer, I
hit 100 or more sites a day (at least that's what my cache says),
usually searching for something (I'm still looking for the perfect
search site, but alas have had little luck on that front ;-)).

2... I don't think many users surf network provider home pages.
Reachability probably matters much more to providers on that front, and
a server located near an inter-provider connectivity epicenter, for
high reachability from the rest of the world, can suffer on throughput
to the end user.  That's a reasonable tradeoff for a WWW site that's
funded out of your own pocket, sees very low hit rates, uses no push
technology or other high-traffic generators, etc.

3... one of the assumptions in the summary is highly questionable:

  'Keynote decided to measure a backbone provider's own public web
   server on the assumption that the provider would locate its own
   server in the best-performing hosting location for that provider.'

I think most providers' home pages are not viewed as being in need of
high performance.  This is not just in terms of location; in general, I
think they're given little funding outside of:

  - content
  - reliability

The hit rates are low; they don't draw much attention (except perhaps
until now, if people really do go about making them better; I'd be a
bit surprised at that, since it doesn't really serve the end users, who
almost never look at these sites).

> Maybe you _think_ the server choices were poor, but as far as I can
> tell, you don't have enough data to determine that.  As to criticizing
> the methodology, why haven't you (collectively) proposed a better
> methodology for measuring web access?

I would much rather see something closer to what an end user actually
sees.  For example, take NLANR's (or someone else's) squid cache hit
data and use it to drive a weighted measurement of the sites end users
are actually surfing.  Alone, that doesn't tell you where the problems
are, but it at least gets much closer to the end user experience.  I
don't know anyone in my immediate vicinity who has looked at more than
3 (if any) of the WWW pages in the study; they're just not popular
sites, even for those of us who work in the business (who presumably
have more interest in these sites, to see what the competition is
doing ;-)).

Of course, you might want to trim out the obvious top sites like
playboy.com and penthouse.com, unless you want a nice picture of our net
community's demographics by byte (from what I've seen, it looks to be
largely teenage males, but I suspect there's a bias by type of content
;-)).

I'd still be very hesitant to make any judgement about a provider's
performance based on such data without an intimate knowledge of the
topology and of where things are bad.  That's quite difficult to do
without a reasonably sized matrix of well-selected measurement points.
So far, we don't know how well the Keynote measurements addressed
this issue.
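As an illustration of why the matrix matters (all names and numbers
below are invented), the spread of a site's timings across vantage
points flags path-dependent trouble that any single-point number
would hide:

  import statistics

  # vantage point -> {site: seconds}; purely hypothetical data
  matrix = {
      "vp-east": {"www.example-isp.net": 1.9, "www.other-isp.net": 0.8},
      "vp-west": {"www.example-isp.net": 0.7, "www.other-isp.net": 2.4},
  }

  def summarize(matrix):
      # Per-site median and spread across vantage points.  A large
      # spread suggests the problem is path-dependent, not a property
      # of the provider as a whole.
      sites = {s for row in matrix.values() for s in row}
      out = {}
      for s in sites:
          samples = [row[s] for row in matrix.values() if s in row]
          out[s] = (statistics.median(samples),
                    max(samples) - min(samples))
      return out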

> Boardwatch, from its history, comes from the user download experience.
> One cannot blame them if they do their best to measure what they see.
> One could instead offer to help them to update their next article.
> 
> Second of all, there really are network performance problems.  It
> matters not at all if RSA (a problem I've had this week) has a powerful
> WWW server, when I cannot get to it reliably because WillowSprings and
> SanFrancisco are dropping packets like crazy.

Certainly.  I don't think anyone's really denying that, are they?
(I'll admit to not knowing, having blindly deleted through most of this
thread ;-)).

> So far, only one response has noted the current ongoing efforts at ISI,
> LBL, Merit, NLANR, and others, to develop good network performance
> measurement techniques and metrics.  I urge Boardwatch to help fund
> them, and to regularly publish the results!
> 
> How many of you naysayers have actually participated in and helped fund
> the "scientific" studies?  Put your money where your mouth is!

I've put at least a small fraction of my time into work in NLANR and
CAIDA, and I think network performance measurements are a hard nut to
crack, especially from the end user perspective.  Real visibility into
the causes of performance differences is lacking.  My fear with
measurements like Keynote's, judging from the summary I've seen thus
far, is that consumers can be misled by such studies.  From the
summary, I think such a study is of questionable value and not
something I'd want anyone I'm friendly with to use as a basis for a
purchase decision.  Even as an end user I give it little value from
what I've seen thus far; I don't surf provider WWW sites, I surf
elsewhere.  I'd be much more interested in a weighted measurement of
popular sites, driven from squid cache data or the like.
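Given per-site weights like those sketched above, plus a set of
per-site download timings (hypothetical data here), the headline
figure could then be a popularity-weighted mean rather than an average
over provider home pages:

  def weighted_download_time(timings, weights):
      # timings: site -> measured seconds (hypothetical data)
      # weights: site -> fraction of user hits, e.g. from site_weights()
      covered = [s for s in timings if s in weights]
      mass = sum(weights[s] for s in covered)
      if mass == 0:
          raise ValueError("no overlap between timings and weights")
      # Renormalize over the sites we actually timed.
      return sum(timings[s] * weights[s] for s in covered) / mass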

And those of us who also offer Web-hosting services know (intimately)
that it's not simple to separate network performance issues from WWW
server issues.  Yes, it's very possible, but it requires a vantage
point that end users don't typically have.
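One rough heuristic for that separation, sketched here assuming a
plain HTTP/1.0 fetch, is to time the TCP connect (mostly network)
separately from the wait for the first response byte (network plus
server); a single sample from a single vantage point still proves very
little, which is the point:

  import socket
  import time

  def split_fetch_time(host, path="/", port=80, timeout=10):
      t0 = time.monotonic()
      sock = socket.create_connection((host, port), timeout=timeout)
      t_connect = time.monotonic() - t0      # dominated by network RTT
      try:
          request = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)
          sock.sendall(request.encode("ascii"))
          t1 = time.monotonic()
          sock.recv(1)                       # block for first byte
          t_first_byte = time.monotonic() - t1  # network + server time
      finally:
          sock.close()
      return t_connect, t_first_byte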

I don't really have anything 'bad' to say about the study (depending
on how you take the above).  I just don't find the summary to be of
any personal value to me, as an end user or as a network engineer.
The summary doesn't answer any of my performance questions.

Daniel
~~~~~~


