MTU of the Internet?

Phil Howard phil at charon.milepost.com
Mon Feb 9 15:02:35 UTC 1998


Patrick McManus writes:

> At the risk of introducing meaningful background literature:
>      ftp://ds.internic.net/rfc/rfc2068.txt
> 
> I direct folks to 14.36.1 "Byte Ranges" which when interleaved with
> pipelined requests comes very close to achieving client-driven
> multiplexing that I'd suggest from a UI pov will behave much better
> than the multiple connections method (eliminating the cost of tcp
> congestion control but at the cost of some application protocol
> overhead). 

More than application overhead, I suspect the biggest problem with this
otherwise good idea is that it won't be implemented correctly by the
browsers or the servers.

For example, on the server end, it would see multiple requests for the
same object at different byte ranges.  If that object is being created
on the fly by a program process (e.g. CGI), the browser won't have a
clue about its size.

What is the correct behaviour of the server if the request is made for
bytes 0-2047 of an object which invokes a CGI program to create that
object?  Obviously it can send the first 2048 bytes, but then what?
Should it leave the process pipe blocked until the next request comes
in?  One httpd listener might well have to have dozens of these stalled
processes.  Should they all remain there until the persistent connection
is broken?
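
Concretely, to honour that first range the server would end up doing
something like the sketch below ("./some-cgi" is just a stand-in for
whatever program generates the object):

    import subprocess

    # Run the generator and serve only the requested range, bytes 0-2047.
    proc = subprocess.Popen(["./some-cgi"], stdout=subprocess.PIPE)
    first_chunk = proc.stdout.read(2048)
    # ... wrap first_chunk in a "206 Partial Content" response ...

    # Now what?  The program is still running, blocked on a full pipe.
    # Either the server parks it (one stalled process per outstanding
    # object on this connection), or it kills it and a later request
    # for bytes 2048- has to re-run the whole thing from scratch.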

Of course with multiple connections, you have all these processes, anyway.
But at least you know when the process should go away (when the connection
is dropped).

If the persistent connection gets dropped before all the objects get loaded,
then loading _must_ start from the beginning, since the objects may now be
inconsistent (a different GIF image can be created by a new instance of the
program that generates it).
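
To see why, think of it this way: two runs of the same generator need
not agree byte for byte, so a range taken from the first run spliced
onto a range taken from the second is not any object the program ever
produced.  A toy illustration (again "./some-cgi" is hypothetical):

    import subprocess

    # Two separate runs of the same generator (timestamps, random IDs,
    # data changing underneath it) need not produce identical output.
    first = subprocess.run(["./some-cgi"], capture_output=True).stdout
    second = subprocess.run(["./some-cgi"], capture_output=True).stdout
    print(first == second)                    # may well be False
    spliced = first[:2048] + second[2048:]    # not a valid object in general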

Of course, all of this can be done.  But can you trust the developers of
every browser and every server to get it right?  What I am saying is that
if this is to be pursued, it needs to be pursued with a lot of details
addressed that even the RFC doesn't seem to touch on.

Consider CGI.  Should the server start a new instance of CGI for each range
request, passing that request via the CGI environment?  Or should the server
keep each CGI persistent as long as each range request is sequential to the
previous one?  What if there are two different requests for the same path,
which in the ordinary case can indeed generate distinctly different objects
(not cacheable)?  How would the server know which of them to continue when
the next range request comes in (previously the distinction was managed by
the connection)?
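
If the server took the first approach, every range request turns into a
fresh CGI run with the range passed along in the environment (as an
HTTP_RANGE variable), and every script has to regenerate the whole
object just to throw away the bytes before the range.  A hypothetical
range-aware script would look roughly like this:

    #!/usr/bin/env python
    # Hypothetical range-aware CGI: regenerate the whole object on
    # every request, then emit only the slice named in HTTP_RANGE.
    import os, sys

    body = b"x" * 10000          # stand-in for the real generated object
    out = getattr(sys.stdout, "buffer", sys.stdout)
    rng = os.environ.get("HTTP_RANGE", "")       # e.g. "bytes=2048-4095"
    if rng.startswith("bytes="):
        start, _, end = rng[len("bytes="):].partition("-")
        start = int(start or 0)
        stop = int(end) + 1 if end else len(body)
        out.write(b"Content-Type: image/gif\r\n")
        out.write(b"Status: 206 Partial Content\r\n")
        out.write(b"Content-Range: bytes %d-%d/%d\r\n\r\n"
                  % (start, stop - 1, len(body)))
        out.write(body[start:stop])
    else:
        out.write(b"Content-Type: image/gif\r\n\r\n")
        out.write(body)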

While I can see that persistent connections with range requests can solve
many things, I believe the implementations will botch it up in most cases,
to the point that it won't get used.  A subchannelized method of doing
request/response transactions over a single persistent connection would
handle more (if not all) of these cases better (IMHO).
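
By "subchannelized" I mean something along the lines of the toy framing
below: every chunk on the wire carries a small channel id and a length,
so independent request/response transactions can interleave freely on
one connection.  This is purely illustrative, not a worked-out proposal.

    import struct

    # Toy framing: 2-byte channel id, 4-byte payload length, then payload.
    def pack_frame(channel, payload):
        return struct.pack("!HI", channel, len(payload)) + payload

    def unpack_frames(buf):
        frames = []
        while len(buf) >= 6:
            channel, length = struct.unpack("!HI", buf[:6])
            if len(buf) < 6 + length:
                break                      # wait for the rest of the frame
            frames.append((channel, buf[6:6 + length]))
            buf = buf[6 + length:]
        return frames, buf

    # Two responses interleaved on one connection, demuxed by channel id.
    wire = (pack_frame(1, b"first half of object A") +
            pack_frame(2, b"first half of object B") +
            pack_frame(1, b"rest of object A"))
    for channel, payload in unpack_frames(wire)[0]:
        print(channel, payload)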

-- 
Phil Howard | suck7it9 at spammer9.com end4it79 at spammer9.com stop5269 at anywhere.net
  phil      | no6spam9 at anywhere.net stop7643 at dumb8ads.net crash384 at no54ads0.org
    at      | stop5it2 at dumbads1.com no2spam8 at no56ads2.org end2it50 at no7place.net
  milepost  | stop7it7 at no98ads8.edu stop3538 at anywhere.com w5x5y6z3 at no9place.net
    dot     | end5ads1 at dumbads1.com no1way00 at spammer0.net eat20me8 at noplace9.edu
  com       | end5ads6 at spammer2.net ads0suck at noplace9.org die8spam at dumbads1.net


