latency (was: RE: cooling door)

Frank Coluccio frank at dticonsulting.com
Sun Mar 30 21:51:21 UTC 2008


Silly me. In my preceding reply I didn't mean "turns" alone; I also intended to
include the number of state "transitions" (e-o, o-e, e-e, etc.).

Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile

On Sun Mar 30 16:47, Frank Coluccio sent:

>Mikael, I see your points more clearly now with respect to the number of turns
>affecting latency. On analyzing this further, however, it becomes apparent that
>the collapsed backbone regimen may, in many scenarios, offer far fewer
>opportunities for turns, while creating more occasions for them in others.
>
>To the former, winning class of applications: the collapsed backbone eliminates
>local access/distribution/aggregation switches, along with an entire lineage of
>hierarchical in-building routing elements.
>
>To the latter, losing class: no doubt, if a collapsed backbone design were
>drop-shipped into place on a Friday evening, as is, there would surely be some
>applications that would require re-designing, or perhaps merely some re-tuning,
>or that would need to be treated as one-offs entirely.
>
>BTW, in case there is any confusion concerning my earlier allusion to "SMB", it
>had nothing to do with the size of message blocks, protocols, or anything else
>affecting a transaction profile's latency numbers. Instead, I was referring to
>the "_s_mall-to-_m_edium-sized _b_usiness" class of customers that the cable
>operator Bright House Networks was targeting with its passive optical network
>business-grade offering, fwiw.
>--
>
>Mikael, All, I truly appreciate the comments and criticisms you've offered on
>this subject up until now in connection with the upstream hypothesis that began
>with a post by Michael Dillon. However, I shall not impose this topic on the
>larger audience any further. I would, however, welcome a continuation _offlist_
>with anyone so inclined. If anything worthwhile results I'd be pleased to post it
>here at a later date. TIA.
>
>Frank A. Coluccio
>DTI Consulting Inc.
>212-587-8150 Office
>347-526-6788 Mobile
>
>On Sun Mar 30 3:17, Mikael Abrahamsson sent:
>
>>On Sat, 29 Mar 2008, Frank Coluccio wrote:
>>
>>> Understandably, some applications fall into a class that requires very short
>>> distances for the reasons you cite, although I'm still not comfortable with the
>>> setup you've outlined. Why, for example, are you showing two Ethernet switches
>>> for the fiber option (which would naturally double the switch-induced latency),
>>> but only a single switch for the UTP option?
>>
>>Yes, I am showing a case where you have switches in each rack so each rack 
>>is uplinked with a fiber to a central aggregation switch, as opposed to 
>>having a lot of UTP from the rack directly into the aggregation switch.
>>
>>> Now, I'm comfortable in ceding this point. I should have made allowances for this
>>> type of exception in my introductory post, but didn't, as I also omitted mention
>>> of other considerations for the sake of brevity. For what it's worth, propagation
>>> over copper is faster than propagation over fiber, as copper has a higher nominal
>>> velocity of propagation (NVP) rating than fiber does, though not by enough to
>>> cause the difference you've cited.
>>
>>The ~2/3 speed of light in fiber, as opposed to the propagation speed in
>>copper, was not what I had in mind.
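
A quick aside on the NVP point above: a back-of-the-envelope sketch in Python,
using typical datasheet NVP figures (assumed values, not measurements from any
particular cable plant), showing how small the copper/fiber gap actually is:

    # Rough comparison of one-way propagation delay over copper vs. fiber.
    # NVP figures below are typical assumptions, not vendor-specific data.
    C = 299_792_458  # speed of light in a vacuum, m/s

    def propagation_delay_ns(length_m, nvp):
        """One-way propagation delay, in nanoseconds, for a run of length_m metres."""
        return length_m / (C * nvp) * 1e9

    for medium, nvp in [("Cat6 UTP (NVP ~0.70)", 0.70),
                        ("SMF fiber (NVP ~0.67)", 0.67)]:
        print(f"{medium}: {propagation_delay_ns(100, nvp):.0f} ns per 100 m")

    # Prints roughly 476 ns (copper) vs. 497 ns (fiber) per 100 m: a ~21 ns
    # gap, far too small to explain microsecond-scale differences.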
>>
>>> As an aside, the manner in which o-e-o and e-o-e conversions take place when
>>> transitioning from electronic to optical states, and back, affects latency
>>> differently across differing link assembly approaches used. In cases where 10Gbps
>>
>>My opinion is that the major factors in the added end-to-end latency in my
>>example are that the packet has to be serialised three times as opposed to
>>once, and that there are three lookups instead of one. Lookups take time, and
>>putting the packet on the wire takes time.
>>
>>Back in the 10 megabit/s days, there were switches that did cut-through:
>>i.e., if the output port was idle the instant the packet came in, the switch
>>could begin sending the packet on the outgoing port before it had been
>>completely received on the incoming port (as soon as the header arrived, the
>>forwarding decision was taken and transmission could start).
>>
>>> By chance, is the "deserialization" you cited earlier perhaps related to this
>>> inverse muxing process? If so, that would explain the disconnect, and one
>>> shouldn't despair, because there is a direct path to avoiding this.
>>
>>No, it's the store-and-forward architecture used in all modern equipment 
>>(that I know of). A packet has to be completely taken in over the wire 
>>into a buffer, a lookup has to be done as to where this packet should be 
>>put out, it needs to be sent over a bus or fabric, and then it has to be 
>>clocked out on the outgoing port from another buffer. This adds latency in 
>>each switch hop on the way.
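
To make the store-and-forward arithmetic concrete, here is a minimal sketch
with assumed frame sizes, link speeds, and per-switch lookup delays (the 5 us
lookup figure is purely illustrative):

    # Why three store-and-forward hops cost more than one: each hop must
    # clock the whole frame in before clocking it out, plus a lookup delay.
    def serialization_us(frame_bytes, link_bps):
        """Time to clock one frame onto the wire, in microseconds."""
        return frame_bytes * 8 / link_bps * 1e6

    FRAME_BYTES = 1500           # full-size Ethernet payload (assumed)
    LINK_BPS = 1_000_000_000     # 1 Gbit/s links (assumed)
    LOOKUP_US = 5.0              # per-switch lookup/fabric delay (assumed)

    def path_latency_us(hops):
        # Store-and-forward pays one full serialization per hop.
        return hops * (serialization_us(FRAME_BYTES, LINK_BPS) + LOOKUP_US)

    print(f"one switch:     {path_latency_us(1):.0f} us")  # ~17 us
    print(f"three switches: {path_latency_us(3):.0f} us")  # ~51 us

    # A cut-through switch would start transmitting once the header was read,
    # largely removing the per-hop serialization term.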
>>
>>As Adrian Chadd mentioned in the email sent after yours, this can of course
>>be addressed by modifying existing protocols, or creating new ones, that take
>>this into account. It's just that with what is available today, this is a
>>problem. Each directory listing or file access takes a bit longer over NFS
>>with added latency, and this reduces performance in current protocols.
>>
>>Programmers who write client/server applications are starting to notice this,
>>and I know of companies that put latency-inducing applications on their
>>development servers so that programmers are exposed to the same conditions in
>>the development environment as in the real world. For some, this means writing
>>more advanced SQL queries that get everything done in a single query, instead
>>of issuing several and adapting later queries to what the first one returned.
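
The round-trip arithmetic behind that advice is simple; a sketch with a
hypothetical 2 ms client-to-server RTT:

    # Chatty vs. batched query patterns: every extra round trip pays the RTT.
    RTT_MS = 2.0  # assumed client-to-server round-trip time

    def chatty_ms(n_queries):
        # One query per item: each request waits out a full round trip.
        return n_queries * RTT_MS

    def batched_ms():
        # One combined (e.g. JOINed) query: a single round trip.
        return RTT_MS

    print(f"100 separate queries: {chatty_ms(100):.0f} ms")  # 200 ms
    print(f"one combined query:   {batched_ms():.0f} ms")    # 2 ms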
>>
>>Also, protocols such as SMB and NFS that use message blocks over TCP have to
>>be abandoned and replaced with real streaming protocols and large window
>>sizes. Xmodem wasn't a good idea back then, and it's not a good idea now
>>(even though the blocks are now larger than the 128 bytes of 20-30 years
>>ago).
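
The window-size point follows from the bandwidth-delay product. A small sketch,
with assumed link speed and RTT, contrasting a stop-and-wait block protocol
against the window needed to fill the pipe:

    # A stop-and-wait protocol moves at most one block per round trip,
    # so its throughput ceiling is block_size / RTT regardless of link speed.
    def stop_and_wait_bps(block_bytes, rtt_s):
        return block_bytes * 8 / rtt_s

    # Window needed to keep a link full: bandwidth * delay.
    def bdp_bytes(link_bps, rtt_s):
        return link_bps * rtt_s / 8

    RTT_S = 0.001             # 1 ms round trip (assumed)
    LINK_BPS = 1_000_000_000  # 1 Gbit/s link (assumed)

    print(f"64 KiB stop-and-wait: {stop_and_wait_bps(65536, RTT_S)/1e6:.0f} Mbit/s")
    print(f"window to fill link:  {bdp_bytes(LINK_BPS, RTT_S)/1024:.0f} KiB")
    # ~524 Mbit/s ceiling vs. a needed window of ~122 KiB: only streaming with
    # a sufficiently large window approaches line rate as RTT grows.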
>>
>>-- 
>>Mikael Abrahamsson    email: swmike at swm.pp.se