Keynote/Boardwatch Results

Deepak Jain deepak at jain.com
Wed Jul 9 20:03:44 UTC 1997


Along these lines, the following question came up.

If this was supposed to be a test of end-to-end user performance, why were
backbone provider sites being hit instead of sites more typical end users
would visit, say, a major search engine? It strikes me that the
article's results were slanted to make a statement about backbones, not
user-perceived performance, even though the test has been argued to
measure the latter.

DISCLOSURE: Then again, we do a godawful amount of web traffic and like
people looking at "real world" performance over any particular path
through any particular cloud.

-Deepak.

On Wed, 9 Jul 1997, Craig A. Huegen wrote:

> On Wed, 9 Jul 1997, Jack Rickard wrote:
> 
> Before you claim, Jack, that I have something to lose, you should
> realize that I am an independent consultant and work for none of the
> people in the study.
> 
> ==>what looks LOGICAL to me, not anything I know or have tested.  I am rather
> ==>convinced that moving a single web site to another location, putting it on
> ==>larger iron, or using a different OS will have very minor impact on the
> ==>numbers.
> 
> You may be convinced because of the theory you've developed to match the
> flawed methodology with which the tests were performed.  However, I ran
> some tests of my own to measure the connect delays on sites.
> 
> Here are the averages for 200 web sites, gathered when I polled
> some people for their favorite sites (duplicates excluded):
> 
> (because in a lot of cases we're talking milliseconds, percentages are not
> really fine-grained enough, but this was to satisfy personal curiosity)
> 
> SYN -> SYN/ACK time (actual connection)			22%
> Web browser says "Contacting www.website.com..."
> 
> SYN/ACK -> first data (web server work--		78%
> getting material, processing material)
> Web browser says "www.website.com contacted, waiting for response"
> 
> Note that this didn't account for different types of content.  But it *did*
> truly measure one thing--that the delay caused by web servers is
> considerably greater than that caused by "network performance" (or actual
> connect time).
> 
> And the biggest beef is that you claimed Boardwatch's test measured BACKBONE
> NETWORK performance, not end-to-end user-perceived performance.  You
> threw in about 20 extra variables that cloud exactly what you were
> measuring.  Not to mention completely misrepresenting what you actually
> measured.
> 
> /cah
> 

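For the curious, the two-phase split Craig describes can be approximated
from userland with a short Python sketch along the following lines. This
is an illustration under stated assumptions, not the tool used above:
connect() returning stands in for SYN -> SYN/ACK time, the gap between
sending a request and reading the first byte stands in for server work,
and the hostnames are placeholders. A packet capture would be needed for
true SYN-level timing.

#!/usr/bin/env python3
import socket
import time

def time_phases(host, port=80, timeout=10.0):
    # Phase 1: TCP connection establishment.
    # Userland approximation of SYN -> SYN/ACK time.
    start = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=timeout)
    connect_s = time.perf_counter() - start
    try:
        # Phase 2: request sent -> first byte of the reply.
        # Userland approximation of SYN/ACK -> first data (server work).
        request = "HEAD / HTTP/1.0\r\nHost: %s\r\n\r\n" % host
        start = time.perf_counter()
        sock.sendall(request.encode("ascii"))
        sock.recv(1)  # block until the first response byte arrives
        first_data_s = time.perf_counter() - start
    finally:
        sock.close()
    return connect_s, first_data_s

if __name__ == "__main__":
    # Placeholder sites; substitute a real polled list.
    for host in ("www.example.com", "www.example.org"):
        connect_s, first_data_s = time_phases(host)
        total = connect_s + first_data_s
        print("%s: connect %.1f ms (%.0f%%), first data %.1f ms (%.0f%%)"
              % (host, 1000 * connect_s, 100 * connect_s / total,
                 1000 * first_data_s, 100 * first_data_s / total))

Averaging those two percentages across a list of sites yields the same
kind of connect-vs-server breakdown reported above.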

