MTU of the Internet?

Marc Slemko marcs at znep.com
Sun Feb 8 06:52:02 UTC 1998


On Sat, 7 Feb 1998, Paul A Vixie wrote:

> > I'm aware of the existence of a protocol for HTTP to use persistent
> > connections where one object/document can be downloaded, and after that is
> > done, another can be requested and downloaded, and again as needed.  
> 
> that's what i meant by "serial multiplexing".

Ok, there seems to be a lot of confusion around here from some
people (no, not you, Paul) about how HTTP works, so a summary is in
order.  Network operators have to understand how common protocols
impact the network.

Once upon a time, after other things, along came HTTP/1.0.  It
requires a new connection for each transfer.  Back then you were
only requesting a couple of URLs per document, so that was fine.

Then web pages started carrying more graphics, and it was easy to
see that making multiple requests one after the other was very
inefficient.  Clearly, if you have a lock-step protocol using only
one connection at a time, you have at least one RTT between each
request.  Once you add the TCP handshake, you actually end up with
two RTTs of dead time between the end of one response and the start
of the next.  On a LAN, that isn't a big deal.  On a higher-latency
connection, it is a huge problem and results in poor efficiency.
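
To put rough numbers on it (hypothetical link and page, not a
measurement), a quick sketch in Python:

    # Hypothetical numbers, just to show how the per-object cost adds up.
    rtt = 0.15          # seconds; think dialup or a long-haul path
    objects = 11        # 1 HTML file + 10 inline images
    xfer = 0.4          # seconds to actually transfer each object's bytes

    # HTTP/1.0, one lock-step connection per object: TCP handshake
    # (1 RTT) plus request/response (1 RTT) before each object's
    # bytes start arriving.
    serial = objects * (2 * rtt + xfer)
    print("HTTP/1.0, one connection at a time: %.1f s" % serial)  # ~7.7 s

    # Of that, this much is pure round-trip dead time:
    print("dead time: %.1f s" % (objects * 2 * rtt))              # ~3.3 s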

So clients started making multiple simultaneous connections in an
attempt to use the available bandwidth more efficiently.  This
worked, both in theory and in practice.  However, you still end up
with the RTT-related slowness of establishing (non-T/TCP) TCP
connections.  People also may have figured out that large numbers
of short flows are not as easily controlled by TCP's congestion
control as small numbers of large flows.
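
What clients do about it looks roughly like this; a sketch using
Python's standard library, where example.com, the paths, and the
four-connection limit are placeholders rather than anything a
particular browser actually does:

    # Fetch several objects in parallel, browser-style, each over its
    # own TCP connection.  Four workers mirrors the classic "a few
    # connections per server" behaviour; the number is illustrative.
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    urls = ["http://example.com/img%d.gif" % i for i in range(10)]

    def fetch(url):
        with urlopen(url) as resp:
            return url, len(resp.read())

    with ThreadPoolExecutor(max_workers=4) as pool:
        for url, size in pool.map(fetch, urls):
            print(url, size, "bytes")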

Hence the idea of "keepalive" (or, to use the later name, "persistent")
connections.  A standard of sorts was established to extend HTTP/1.0
to allow for these; it let a client make multiple requests over a
single TCP connection.  Clients still made multiple connections,
though, since many servers didn't support persistent connections,
since clients were already doing it anyway, and since there are
other advantages.  Persistence chops the inter-request delay down
to one RTT by avoiding the TCP handshake.  It is also a win because
it puts less load on the server (a lower TCP connection rate) and a
loss because the server ends up with connections hanging around
waiting for a (relatively short) timeout just in case the client
decides to make another request on that connection.
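
In today's terms the same thing is easy to see with Python's
http.client, which keeps the underlying TCP connection open across
sequential requests (HTTP/1.1 persistence rather than the old
HTTP/1.0 Keep-Alive extension, but the effect on the wire is the
same; the host and paths are placeholders):

    # Two requests, one TCP connection, no second handshake
    # (assuming the server keeps the connection open).
    import http.client

    conn = http.client.HTTPConnection("example.com")

    conn.request("GET", "/index.html")
    resp = conn.getresponse()
    body1 = resp.read()               # drain the response before reusing

    conn.request("GET", "/logo.gif")  # reuses the same socket
    resp = conn.getresponse()
    body2 = resp.read()

    conn.close()
    print(len(body1), len(body2))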

Then came HTTP/1.1.  It formalized the idea of a persistent
connection (with a slightly different syntax) in the spec.  It
also allows for "pipelined" connections: with appropriate care,
the client can send requests for multiple documents before it
finishes receiving the first.  This requires careful attention in
the protocol, the server, and the client code to get right; out of
the very small field of deployed HTTP/1.1 clients and servers, few
have this completely right yet.  Even Apache, with an (IMBiasedO)
darn good HTTP/1.1 implementation with very few bugs compared to
most initial deployments of HTTP/1.1 (e.g. MSIE 4), isn't yet
perfect in any released version.  The HTTP/1.1 spec also includes
some comments about limiting multiple connections.
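
Pipelining itself is simple to demonstrate at the socket level.  A
sketch only: a real client has to handle chunked encoding, partial
reads, servers that close early, and so on, which is exactly where
the bugs live.  The host and paths are placeholders:

    # Send two requests back to back on one connection, then read both
    # responses, in order.  "Connection: close" on the last request
    # lets this toy reader just slurp until EOF instead of parsing
    # message lengths.
    import socket

    host = "example.com"
    reqs = (
        "GET /index.html HTTP/1.1\r\nHost: %s\r\n\r\n"
        "GET /logo.gif HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n"
    ) % (host, host)

    s = socket.create_connection((host, 80))
    s.sendall(reqs.encode("ascii"))

    data = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        data += chunk
    s.close()

    # Both responses come back on the same connection, first then second.
    print(len(data), "bytes of response data on one connection")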

The state of the art in terms of deployed protocols right now is
persistent pipelined connections; most clients don't implement them
yet, but they are getting there.

Yet even with this, there is a perceived value to multiple 
connections.  Why is that?  There are several reasons:

	- that is how it has always been done, so we'd better keep
	  doing it
	- you can grab more bandwidth on congested links by using more
	  flows.  In real life, this is a significant factor, more so
	  once you get to ISDN and higher speeds.  It would probably
	  be better for everyone if everyone stopped using multiple
	  connections, but if some do and some don't, those that
	  don't lose out.  This advantage could be eliminated by
	  various algorithms.
	- it is not acceptable to have a bunch of small responses
	  stuck behind one huge response (see the sketch after this
	  list for rough numbers).
	- if you are a proxy, this becomes even more critical.  If
	  you use only one connection to each origin server and one
	  client requests a huge document from that server, anyone
	  else wanting to make requests to that server has to wait
	  for the transfer to finish.
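
To put a rough number on the head-of-line problem (hypothetical
sizes and link speed, not measurements):

    # One shared connection: a small response queued behind a huge one.
    link = 28800 / 8.0    # bytes/sec on a 28.8k modem
    huge = 1000000        # a 1 MB document someone else asked for
    small = 5000          # the 5 KB page you want

    # Serialized behind the huge transfer:
    print("behind the huge transfer: %.0f s" % ((huge + small) / link))  # ~279 s

    # On its own connection (ignoring how the two would share the link):
    print("on its own connection: %.0f s" % (small / link))              # ~1 s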

These reasons, both the technically valid ones and the invalid ones,
cannot be dismissed out of hand.

> >                                                       But I know not of any
> > protocol for multiplexing so that I can have concurrency of the transfers of
> > multiple images at the same time.  Is there an RFC for this or is it some
> > other kind of document?
> 
> that's not what i meant by "serial multiplexing".  but can someone please
> explain why anyone would want to do this?  you don't get the bits faster by
> opening multiple connections, now that persistent http allows you to avoid
> the inter-object delay that used to be induced by doing a full syn/synack/ack
> for each object.  does seeing the GIFs fill in in parallel really make that
> much difference, if the total page fill time is going to be the same?

Yes, it does make a difference.

A real-world NANOG example?  Cisco docs.  You may have a 300k HTML
file with a bunch of graphics in it.  You want to read something
partway down the file, and that part has an image you need to see.
Do you want to wait for the whole HTML file to download, and then
for each image to download until it gets to yours, or do you want
to be able to start reading ASAP?

The basic idea is that humans are reading these documents, that
reading takes some finite time, and that readers normally start at
the top and work down.  The faster you can display the start of the
document, complete with the graphics, etc. contained in that part,
the faster the user can get on with reading it.

Persistent connections with pipelining come close to the peak of 
efficiency from the network perspective (well, the perspective of
anything sitting on top of TCP), but not from the much harder user
perspective.

Dealing with this is one of the goals of HTTP-NG.  See:

	http://www.w3.org/Protocols/HTTP-NG/

for some details.  I can't comment in more detail, or on how
HTTP-NG is going, because I can't be involved, even as a lurker:
it is the W3C doing it and I don't work for the right people.

One other additional gain from a multiplexed protocol is the
ability to do a graceful abort of a request in progress, instead of
having to dump the TCP connection and make a new one.  That is a
big win, iff you don't have too much buffering going on at the
sending host between the application and the network.  NNTP suffers
more from this, but it still impacts HTTP, especially if you want
to progress to the idea of a single longer-term connection carrying
a whole string of requests as a user traverses a web site.
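
For a concrete picture of what "multiplexed" means here, a toy
framing sketch (purely illustrative, and emphatically not HTTP-NG's
actual wire format): each chunk of each response is tagged with a
stream id so responses can interleave, and a tiny control frame
cancels one stream without touching the TCP connection:

    # Toy multiplexing frames: a 5-byte header of (stream id, flag,
    # payload length), flag 0 = data, flag 1 = "cancel this stream".
    # Illustrative only; not HTTP-NG's (or anyone's) real format.
    import struct

    HDR = struct.Struct("!HBH")

    def frame(stream_id, payload=b"", cancel=False):
        return HDR.pack(stream_id, 1 if cancel else 0, len(payload)) + payload

    def parse(buf):
        while buf:
            sid, flag, length = HDR.unpack_from(buf)
            payload = buf[HDR.size:HDR.size + length]
            buf = buf[HDR.size + length:]
            yield sid, flag, payload

    # Interleave two responses, then abort stream 2 partway through:
    wire = (frame(1, b"<html>...") + frame(2, b"GIF89a...") +
            frame(1, b"more html") + frame(2, cancel=True))

    for sid, flag, payload in parse(wire):
        print(sid, "CANCEL" if flag else payload)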

HTTP-NG with multiplexed connections (that isn't its only feature,
just the relevant one here) will fail to convince vendors to switch
to using one multiplexed connection unless the Internet either has
more bandwidth than anyone can use (not likely), or backbone
bandwidth grows far faster than user bandwidth (more possible, but
not likely right now), or ends up giving the same net bandwidth to
n simultaneous TCP connections as it gives to one multiplexed TCP
connection (not possible to do fairly without unfairly penalizing
things like proxies).
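
The fairness point follows from the usual rough model that
long-lived TCP flows with similar RTTs through one bottleneck each
get about an equal share (a simplification, but it is the relevant
one here):

    # Rough share of a shared bottleneck under per-flow TCP fairness.
    # Hypothetical: one client opens 4 connections, another opens 1.
    flows_a, flows_b = 4, 1
    total = flows_a + flows_b

    print("client with 4 connections: %.0f%%" % (100.0 * flows_a / total))  # 80%
    print("client with 1 connection:  %.0f%%" % (100.0 * flows_b / total))  # 20%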

The solution to all this?  Well, that (and a lot of more
generalized stuff) is the whole point of a whole bunch of research.
There are known methods that do better than this, but they are not
deployed.



