[103439] in North American Network Operators' Group


Re: latency (was: RE: cooling door)

daemon@ATHENA.MIT.EDU (Frank Coluccio)
Sun Mar 30 17:53:44 2008

From: Frank Coluccio <frank@dticonsulting.com>
To: Frank Coluccio <frank@dticonsulting.com>,
        Mikael Abrahamsson <swmike@swm.pp.se>
Reply-To: frank@dticonsulting.com
Date: Sun, 30 Mar 2008 16:47:47 -0500
Cc: nanog@merit.edu
Errors-To: owner-nanog@merit.edu


Mikael, I see your points more clearly now with respect to the number of turns
affecting latency. In analyzing this further, however, it becomes apparent that
the collapsed-backbone regimen may, in many scenarios, offer far fewer
opportunities for turns, while in other scenarios it creates more.

The former is the class of winning applications, because the collapsed design
eliminates local access/distribution/aggregation switches and, with them, an
entire lineage of hierarchical in-building routing elements.

As for the latter class, the losers: no doubt, if a collapsed-backbone design
were drop-shipped into place on a Friday evening, as is, there would surely be
some applications that required re-designing, or perhaps simply some re-tuning,
or that needed to be treated as one-offs entirely.

BTW, in case there is any confusion concerning my earlier allusion to "SMB", it
had nothing to do with the size of message blocks, protocols, or anything else
affecting a transaction profile's latency numbers. Instead, I was referring to
the "_s_mall-to-_m_edium-sized _b_usiness" class of customers that the cable
operator Bright House Networks was targeting with its passive optical network
business-grade offering, fwiw.
--

Mikael, All, I truly appreciate the comments and criticisms you've offered on
this subject up until now in connection with the upstream hypothesis that began
with a post by Michael Dillon. However, I shall not impose this topic on the
larger audience any further. I would, however, welcome a continuation _offlist_
with anyone so inclined. If anything worthwhile results, I'd be pleased to post
it here at a later date. TIA.

Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile

On Sun Mar 30 3:17, Mikael Abrahamsson sent:

>On Sat, 29 Mar 2008, Frank Coluccio wrote:
>
>> Understandably, some applications fall into a class that requires very-short
>> distances for the reasons you cite, although I'm still not comfortable with
>> the setup you've outlined. Why, for example, are you showing two Ethernet
>> switches for the fiber option (which would naturally double the
>> switch-induced latency), but only a single switch for the UTP option?
>
>Yes, I am showing a case where you have switches in each rack, so each rack
>is uplinked with a fiber to a central aggregation switch, as opposed to
>having a lot of UTP from the rack directly into the aggregation switch.
>
>> Now, I'm comfortable in ceding this point. I should have made allowances
>> for this type of exception in my introductory post, but didn't, as I also
>> omitted mention of other considerations for the sake of brevity. For what
>> it's worth, propagation over copper is faster than propagation over fiber,
>> as copper has a higher nominal velocity of propagation (NVP) rating than
>> does fiber, but not so much higher as to cause the difference you've cited.
>
>The 2/3 speed of light in fiber, as opposed to the propagation speed in
>copper, was not what I had in mind.
>
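To put rough numbers on the propagation point, here is a back-of-the-envelope
sketch in Python; the NVP values are illustrative assumptions, since actual
ratings vary by cable type:

  # Propagation-delay comparison for a 100 m run.
  C = 299_792_458  # speed of light in vacuum, m/s

  def prop_delay_ns(length_m, nvp):
      """Propagation delay in nanoseconds at a given fraction of c."""
      return length_m / (nvp * C) * 1e9

  length = 100                              # metres
  fiber_ns = prop_delay_ns(length, 0.66)    # fiber at roughly 2/3 c
  copper_ns = prop_delay_ns(length, 0.70)   # UTP, assumed NVP of 0.70

  print(f"fiber : {fiber_ns:5.1f} ns")      # ~505 ns
  print(f"copper: {copper_ns:5.1f} ns")     # ~476 ns
  print(f"delta : {fiber_ns - copper_ns:5.1f} ns")

The difference is a few tens of nanoseconds per 100 m, dwarfed by
microsecond-scale switch hops, which is consistent with the point above.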
>> As an aside, the manner in which o-e-o and e-o-e conversions take place
>> when transitioning from electronic to optical states, and back, affects
>> latency differently across the differing link assembly approaches used. In
>> cases where 10Gbps
>
>My opinion is that the major factors in the added end-to-end latency in my
>example are that the packet has to be serialised three times as opposed to
>once, and that there are three lookups instead of one. Lookups take time;
>putting the packet on the wire takes time.
>
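For concreteness, the serialisation cost alone can be sketched as follows,
assuming 1500-byte frames and 1 Gbps links (the figures are illustrative):

  # Wire (serialisation) time for one frame, and the total when it must
  # be re-serialised at each store-and-forward hop.
  FRAME_BYTES = 1500
  LINK_BPS = 1_000_000_000  # 1 Gbps

  def wire_time_us(frame_bytes, link_bps):
      """Time to clock one frame onto the wire, in microseconds."""
      return frame_bytes * 8 / link_bps * 1e6

  per_hop = wire_time_us(FRAME_BYTES, LINK_BPS)          # ~12 us
  print(f"one serialisation   : {per_hop:5.1f} us")
  print(f"three serialisations: {3 * per_hop:5.1f} us")  # ~36 us, before lookups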
>Back in the 10 megabit/s days, there were switches that did cut-through,
>i.e., if the output port was not being used the instant the packet came in,
>the switch could start to send the packet out on the outgoing port before it
>was completely taken in on the incoming port (once the header was received,
>the forwarding decision was taken and the equipment would start to send the
>packet out before it was completely received from the input port).
>
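A sketch of the difference cut-through makes, assuming the switch may begin
forwarding once the 14-byte Ethernet header has arrived (numbers illustrative):

  # How long the switch waits before it can begin transmitting on the
  # output port: cut-through vs store-and-forward.
  LINK_BPS = 1_000_000_000   # 1 Gbps, illustrative
  HEADER_BYTES = 14          # Ethernet destination, source, EtherType
  FRAME_BYTES = 1500

  def recv_time_us(nbytes):
      """Time to receive nbytes off the wire, in microseconds."""
      return nbytes * 8 / LINK_BPS * 1e6

  print(f"cut-through wait      : {recv_time_us(HEADER_BYTES):6.2f} us")  # ~0.11 us
  print(f"store-and-forward wait: {recv_time_us(FRAME_BYTES):6.2f} us")   # ~12 us

The saving repeats at every hop, so it compounds with the hop count.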
>> By chance, is the "deserialization" you cited earlier perhaps related to
>> this inverse muxing process? If so, then that would explain the disconnect,
>> and if it is so, then one shouldn't despair, because there is a direct path
>> to avoiding this.
>
>No, it's the store-and-forward architecture used in all modern equipment
>(that I know of). A packet has to be completely taken in over the wire into
>a buffer, a lookup has to be done as to where this packet should be put out,
>it needs to be sent over a bus or fabric, and then it has to be clocked out
>on the outgoing port from another buffer. This adds latency in each switch
>hop on the way.
>
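Summing the stages just listed gives a simple per-hop model; the lookup and
fabric times below are placeholder assumptions, not measured figures:

  # Latency added by each store-and-forward switch on the path: buffer
  # the whole frame in, look up the output port, cross the fabric, and
  # clock the frame out again.
  FRAME_BYTES = 1500
  LINK_BPS = 1_000_000_000
  LOOKUP_US = 2.0   # assumed lookup time
  FABRIC_US = 1.0   # assumed bus/fabric transfer time

  wire_us = FRAME_BYTES * 8 / LINK_BPS * 1e6   # ~12 us at 1 Gbps

  def added_latency_us(switches):
      """Each switch adds one full-frame re-serialisation plus its
      lookup and fabric-transfer time."""
      return switches * (wire_us + LOOKUP_US + FABRIC_US)

  print(f"1 switch  : {added_latency_us(1):5.1f} us")   # ~15 us
  print(f"3 switches: {added_latency_us(3):5.1f} us")   # ~45 us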
>As Adrian Chadd mentioned in the email sent after yours, this can of course
>be handled by modifying protocols, or creating new ones, that take this fact
>into account. It's just that with what is available today, this is a problem.
>Each directory listing or file access takes a bit longer over NFS with added
>latency, and this reduces performance in current protocols.
>
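The cumulative effect needs nothing more than arithmetic; the RTTs below are
assumptions for illustration:

  # A serial request/response protocol pays one round trip per operation,
  # so a directory walk of many small requests multiplies the RTT.
  OPERATIONS = 1000   # e.g. stat() calls in a directory listing, illustrative

  for rtt_us in (50, 500, 2000):   # in-room, campus, metro -- assumed RTTs
      total_ms = OPERATIONS * rtt_us / 1000
      print(f"RTT {rtt_us:5d} us -> {total_ms:7.1f} ms for {OPERATIONS} serial ops")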
>Programmers who do client/server applications are starting to notice this,
>and I know of companies that put latency-inducing applications in the
>development servers so that the programmer is exposed to the same conditions
>in the development environment as in the real world. For some, this means
>they have to write more advanced SQL queries to get everything done in a
>single query, instead of issuing multiple queries and changing them depending
>on what the first query's result was.
>
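A sketch of the query pattern in question; the tables and columns are made up
for illustration:

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.executescript("""
      CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE orders (id INTEGER PRIMARY KEY,
                           customer_id INTEGER REFERENCES customers(id),
                           total REAL);
      INSERT INTO customers VALUES (1, 'alice'), (2, 'bob');
      INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 3.0), (3, 2, 7.25);
  """)

  # Chatty version: one query, then one more per row -- each extra query
  # is another round trip to the database server.
  for cid, name in conn.execute("SELECT id, name FROM customers").fetchall():
      conn.execute("SELECT SUM(total) FROM orders WHERE customer_id = ?",
                   (cid,)).fetchone()

  # Single-query version: one round trip regardless of the row count.
  rows = conn.execute("""
      SELECT c.name, SUM(o.total)
      FROM customers c JOIN orders o ON o.customer_id = c.id
      GROUP BY c.id ORDER BY c.id
  """).fetchall()
  print(rows)   # [('alice', 12.5), ('bob', 7.25)]

With SQLite in memory the round trips are free; against a remote database
server, each extra query in the loop pays a full network RTT.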
>Also, protocols such as SMB and NFS that use message blocks over TCP have to
>be abandoned and replaced with real streaming protocols and large window
>sizes. Xmodem wasn't a good idea back then, and it's not a good idea now
>(even though the blocks now are larger than the 128 bytes of 20-30 years
>ago).
>
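The window-size point reduces to the bandwidth-delay product; a quick sketch,
with the link speed and RTT assumed for illustration:

  # Stop-and-wait (Xmodem-style blocks) moves at most one block per RTT,
  # no matter how fast the link is; a streaming protocol needs a window
  # of at least bandwidth x RTT to keep the pipe full.
  LINK_BPS = 1_000_000_000   # 1 Gbps, illustrative
  RTT_S = 0.005              # 5 ms round trip, illustrative

  for block in (128, 64 * 1024):       # Xmodem-era vs a modern block size
      mbps = block / RTT_S * 8 / 1e6   # throughput ceiling, Mbit/s
      print(f"{block:6d}-byte blocks -> {mbps:8.2f} Mbit/s ceiling")

  bdp = LINK_BPS / 8 * RTT_S   # bytes in flight needed to fill the link
  print(f"window needed to fill the link: {bdp / 1024:.0f} KiB")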
>--
>Mikael Abrahamsson    email: swmike@swm.pp.se


