multicast seen as equivalent of caching packets

smd at smd at
Tue Jun 15 23:35:17 UTC 1999

Jamie Scheinblum writes:

| While this thread is slowly drifting, I disagree with your assertion that so
| much of the web traffic is cacheable (nlanr's caching effort, if I remember,
| only got around 60% of requests hit in the cache, pooled over a large number
| of clients.  That probably should be the correct percentage of cacheable
| content on the net).  If anything, the net is moving to be *more* dynamic.

I'm not sure why caching tends to mean exclusively "web caching".

If you think about it briefly, Vadim's assertion follows: "packet caching"
and "multicast distribution" are indistinguishable if the packets are
retained in the cache for essentially zero time.

That is, if you missed your request to the cache for that packet,
it's gone.
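To make the degenerate case concrete, here's a toy sketch (class and
method names are my own invention, not anybody's real implementation):
a packet cache whose retention time is zero reduces to pure multicast
forwarding, because any request for a packet that isn't currently in
flight always misses.

```python
import time

class PacketCache:
    """Toy store-and-forward element with a configurable retention time."""

    def __init__(self, retention_secs):
        self.retention = retention_secs
        self.store = {}                     # seq -> (packet, arrival_time)

    def forward(self, seq, packet):
        """Packet passes through; retained only for `retention` seconds."""
        self.store[seq] = (packet, time.monotonic())

    def request(self, seq):
        """A receiver asks for packet `seq` again (a NACK, in effect)."""
        entry = self.store.get(seq)
        if entry is None:
            return None
        packet, arrived = entry
        if time.monotonic() - arrived >= self.retention:
            del self.store[seq]
            return None                     # aged out; with retention 0, always a miss
        return packet

# Zero retention: the retransmission request always misses.
zero = PacketCache(retention_secs=0)
zero.forward(1, b"payload")
assert zero.request(1) is None

# Nonzero retention: the same request is a hit.
warm = PacketCache(retention_secs=60)
warm.forward(1, b"payload")
assert warm.request(1) == b"payload"
```

The only thing separating "multicast router" from "packet cache" in this
picture is the value of `retention_secs`.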

This similarity has been explored in terms of developing more scalable
reliable multicast than ACK or NACK implosions to the source can provide.
Notably, most work in this field involves the packet cache retention
time increasing substantially, with a "please send me packet XYZ" being
sent back towards the root of the [local-]source-based spanning tree.

"Packet caches" closer to the receiver reduce the distance across
a network a NACK message towards a source must travel at the cost
of state retention and processing of NACKs themselves done within
the "packet caches".   One set of schemes seems to involve longer
retention times in the "packet cache" the closer the "packet cache"
is towards the sender (which may simply be the root of a local
spanning tree).
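A hedged sketch of that tradeoff (the topology and names here are
invented for illustration): in a chain of packet caches between
receiver and source, caches nearer the sender retain packets longer, so
a NACK only travels rootward until it finds a cache that still holds
the packet.

```python
class Cache:
    """One packet cache on the receiver-to-source path."""

    def __init__(self, name, retained_seqs, parent=None):
        self.name = name
        self.retained = set(retained_seqs)  # seqs still held in this cache
        self.parent = parent                # next cache toward the source

    def nack(self, seq, hops=0):
        """Send a NACK rootward; return (who answered, hops traveled)."""
        if seq in self.retained:
            return self.name, hops
        if self.parent is None:
            return "source", hops + 1       # fell all the way through
        return self.parent.nack(seq, hops + 1)

# Longer retention the closer the cache is to the sender:
near_sender   = Cache("near-sender", retained_seqs={1, 2, 3, 4, 5})
middle        = Cache("mid", retained_seqs={4, 5}, parent=near_sender)
near_receiver = Cache("near-receiver", retained_seqs={5}, parent=middle)

# A recent loss recovers locally; older losses cost more rootward hops.
assert near_receiver.nack(5) == ("near-receiver", 0)
assert near_receiver.nack(4) == ("mid", 1)
assert near_receiver.nack(2) == ("near-sender", 2)
```

The memory cost of those `retained_seqs` sets is exactly what makes me
nervous when the caches are routers.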

Some work in self-organizing cache hierarchies looks spookily
like recent work that has gone into native multicast deployment and
thoughts about putting a chainsaw through RTP.

The tradeoff towards more memory used in the "packet caches" makes
me a little nervous given that "packet caches" in the native multicast
model are routers.   Interestingly, Vadim's model for a whomping big router
seems to be really well suited to holding lots and lots of packets around
in a cache.

However, when you consider a router which doesn't have the ability
to resend multicast packets sent through it, and instead merely punts
the responsibility for retransmission sourcewards, this is fundamentally
the same as a cache miss being handled transparently.

I think that the multicasting = caching assertion holds water reasonably well.

All that's really needed is to understand that the original group join
by the listener sets up state whereby the local cache is asked to 
retrieve a set of packets, that the local cache asks the next cache in
the hierarchy and so on to the root of the hierarchy, which asks the source
for each of these packets.    The asking for each packet, however, is done
_implicitly_ until the receiver leaves the group.
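A sketch of that "implicit asking" view (the API here is hypothetical,
just to show the shape of the state): one explicit group join installs
state at each cache up to the root, after which every packet the source
emits is treated as though each joined receiver had requested it.

```python
class CacheNode:
    """One cache in the hierarchy between receiver and source."""

    def __init__(self, parent=None):
        self.parent = parent
        self.joined = False        # does anyone below us want this group?
        self.delivered = []

    def join(self):
        """One explicit join propagates state rootward -- once."""
        self.joined = True
        if self.parent is not None and not self.parent.joined:
            self.parent.join()

    def deliver(self, packet):
        """Source-side push: each packet satisfies the standing request."""
        if self.joined:
            self.delivered.append(packet)
            # a real node would also fan out to joined children here

root = CacheNode()
local = CacheNode(parent=root)
local.join()                       # single join; per-packet requests are implicit
assert root.joined is True         # state installed all the way to the root

for pkt in ("p1", "p2", "p3"):
    local.deliver(pkt)
assert local.delivered == ["p1", "p2", "p3"]
```

No per-packet request ever crosses the wire; the join is the request for
everything until the receiver leaves.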

It would be hard for a receiver with no direct visibility of the network
beyond the local cache to distinguish between a series of packets coming
exclusively from storage on the local multicast router and the same series
of packets which came across the network from storage on a remote 
multicast transmitter.

I think Vadim's point is that accepting the validity of the 
multicasting = caching assertion allows one to consider doing 
a better job of reducing the consumption of network resources 
by replayable content than the use of native multicast does.

[Note that any content that can benefit from retransmission
of lost packets is inherently replayable.]

This does not, however, mean that deploying a system which does
a better job is simple or even likely to be considered given the
underutilization of Internet multicasting in the first place. 

(This is perhaps why Peter Lothberg and company have been working
fairly hard at enabling the inflation of the use of Internet multicast,
since the deployment costs of native IP multicast are so small that
the ultimate non-scalability of IP multicasting (or multicasting
in general if you accept Vadim's argument) does not prevent people
from turning on PIM/SM+mBGP+MSDP.   First you roll (excuse the pun) out
the existing stuff and get it used, then you work on making it remain
usable in the face of fundamental scaling problems.  Welcome to the
normal evolutionary path in the Internet...).

