nanog discussion of HTTP 1.1

Henrik Frystyk Nielsen frystyk at w3.org
Mon Feb 9 02:50:48 UTC 1998


Hi Marc,

I have not been on this list for long, but please allow me to elaborate on
some of the points in this mail.

>> Phil Howard writes:
>> > By loading the images in parallel, with the initial part of the image files
>> > being a fuzzy approximation, you get to see about where every button is
>> > located, and in many cases you know exactly what it is, and you can click
>> > on them as soon as you know where to go.
>> 
>> By loading the images in parallel over multiple TCP connections, you
>> also totally screw the TCP congestion avoidance mechanisms, and hurt
>> the net as a whole, especially given how prevalent HTTP is these days.
>> Unfortunately, as has been seen here, very few people working with the
>> net these days actually understand the details of things the net
>> depends on, and TCP congestion avoidance is one of them.
>> 
>> HTTP 1.1 allows multiplexing in a single stream, and even (amazingly
>
>Once again, HTTP/1.1 does _not_ allow multiplexing multiple transfers
>simultaneously in a single TCP connection.  Multiple responses are
>serialized.

To be precise, an HTTP/1.1 client can issue multiple requests on the same
TCP connection without waiting for the responses to previous requests.
Neither the requests nor the responses change order in transit, and they
are not interleaved - the difference is strictly one of timing. You can
find a more detailed discussion in our paper

	http://www.w3.org/Protocols/HTTP/Performance/Pipeline.html

which you also refer to below, as well as our HTTP performance overview at

	http://www.w3.org/Protocols/HTTP/Performance/
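
To make the ordering concrete, here is a minimal sketch of a pipelined
exchange in Python (my own illustration, not code from the paper; the host
name and paths are placeholders). All of the requests are written before
any response is read, and the responses come back serialized in request
order:

	import socket

	HOST = "www.example.org"           # placeholder; assumed to speak HTTP/1.1
	PATHS = ["/", "/a.gif", "/b.gif"]  # placeholder resources

	# Build all requests up front. HTTP/1.1 connections are persistent
	# by default, so no Connection: keep-alive header is needed.
	requests = b"".join(
	    f"GET {p} HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode("ascii")
	    for p in PATHS
	)

	sock = socket.create_connection((HOST, 80))
	sock.sendall(requests)  # all requests leave before any response arrives

	# The responses arrive back to back, in request order, never
	# interleaved. A real client would parse Content-Length or chunked
	# framing to split them; this sketch just drains the socket.
	sock.settimeout(5)
	data = b""
	try:
	    while True:
	        chunk = sock.recv(4096)
	        if not chunk:
	            break
	        data += chunk
	except socket.timeout:
	    pass
	sock.close()
	print(data[:200])

A careful client would also send "Connection: close" on its final request
so the server knows that no further requests are coming.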

>> enough) ends up working faster in practice than multiple TCP
>> connections.
>
>I have seen nothing supporting that assertion for high latency, medium
>(aka. the Internet on a good day) packet loss connections.  The discussion
>at:
>
>	http://www.w3.org/Protocols/HTTP/Performance/Pipeline.html
>
>shows some interesting information about the wins of persistent
>connections and pipelining, however their tests only went as far as
>including a 28.8k local dialup, which does not simulate the "average user" 
>going to the "average web site".

This is not entirely correct - the paper discusses the effect of HTTP/1.1
buffering and pipelining in three network environments:

	WAN between LBL and MIT
	LAN (10 Mbit)
	PPP over 28.8 modem

Our tests show that in all three environments, HTTP/1.1 using a single TCP
connection outperforms HTTP/1.0 using 6 simultaneous connections. In both
the LAN and the WAN case, HTTP/1.1 used about one third as many TCP packets
and about half the elapsed time of HTTP/1.0 in a direct comparison. The
savings in the PPP case are more modest because the low bandwidth of the
modem link, rather than protocol overhead, dominates the total time.

I should say that we used the W3C libwww as the client-side HTTP/1.0 and
HTTP/1.1 implementations, and Apache and Jigsaw as HTTP/1.1 servers. The
client-side pipelining code is available from

	http://www.w3.org/Library/

>  If you are dropping packets and
>congestion control is coming into play, you may see more impact when using
>one connection that is temporarily stalled than multiple connections, with
>the hope that at least one will be sending at any time.  I am not aware of
>any research to support (or deny, for that matter) this view, however
>AFAIK there is a general lack of published research on the interaction
>between HTTP w/pipelined and persistent connections and the Internet. 

While the absolute time spent in our WAN experiments varied greatly over
the course of the day (low in the morning, high in the afternoon, ET), the
relative difference between HTTP/1.0 and HTTP/1.1 was fairly constant, even
when we experienced high packet loss (some links literally went down while
we ran the tests).

>As I have already pointed out to Paul, but I think it deserves to be
>emphasized because it is not apparent to many, they do _not_ do pipelined
>connections but only persistent connections.  You can not do reliable
>pipelined connections with HTTP/1.0.  The difference between pipelined
>persistent and non-pipelined persistent connections (in the case where
>there are multiple requests to the same server in a row) is one RTT per
>request plus a possible little bit from merging the tail of one response
>with the head of another into one packet.

You can easily fit 4-5 HTTP requests into the same TCP packet, and
similarly, HTTP servers can send responses back to back without starting a
new packet. For example, in the case of cache validation, it is possible to
fit 4-5 responses into the same TCP packet. Together with far fewer context
switches, the overall result is that servers cool down quite a bit; see

	http://www.w3.org/Protocols/HTTP/Performance/System/SysCalls.html
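
As a back-of-envelope illustration of the first point (mine, not from the
paper), consider how many stripped-down cache-validation requests fit into
one TCP segment, assuming a typical Ethernet MSS of 1460 bytes. The header
names are standard; the host, path, and User-Agent are placeholders:

	# A minimal conditional GET; real browser requests of the day
	# carried more headers (Referer, several Accept lines,
	# Connection: Keep-Alive) and were correspondingly larger.
	request = (
	    b"GET /images/button.gif HTTP/1.1\r\n"
	    b"Host: www.example.org\r\n"
	    b"User-Agent: ExampleBrowser/1.0\r\n"
	    b"Accept: image/gif, image/jpeg, */*\r\n"
	    b"If-Modified-Since: Mon, 09 Feb 1998 00:00:00 GMT\r\n"
	    b"\r\n"
	)
	MSS = 1460  # typical TCP payload per Ethernet segment

	print(len(request), "bytes per request")            # 176
	print(MSS // len(request), "requests per segment")  # 8

With the fuller header sets real clients send, a request runs closer to 300
bytes, which is roughly where the 4-5 per packet figure lands; the same
arithmetic applies to small 304 (Not Modified) responses on the server side.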

I should point out that existing Web applications actually have a reason
for using HTTP/1.0 the way they do: it allows them to get the metadata for
the inlined objects faster, and hence they can lay out the page much sooner
- a crucial factor in the browser battle. Pipelining doesn't fully solve
this problem.

>Also worthy of note is that the only widespread client that implements
>HTTP/1.1 is MSIE4, and even it is botched badly, although not as badly as
>4.0b2 was.  (eg. it sometimes sent 1.1 requests but would only accept 1.0
>responses) 

Scott Lawrence maintains an HTTP/1.1 implementors' forum where implementors
can get together and test their HTTP/1.1 implementations on a regular basis:

	http://www.w3.org/Protocols/HTTP/Forum/

If you find bugs or problems in any of the implementations that participate
in the test (listed on the page above), then please send mail to the list
<w3c-http at w3.org>, which is described at

	http://www.w3.org/Protocols/#Lists

Thanks,

Henrik


--
Henrik Frystyk Nielsen, <frystyk at w3.org>
World Wide Web Consortium
http://www.w3.org/People/Frystyk


