[156937] in North American Network Operators' Group


Re: /. Terabit Ethernet is Dead, for Now


Date: Mon, 01 Oct 2012 07:36:39 +0900
From: Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp>
To: joel jaeggli <joelja@bogus.com>
In-Reply-To: <5068C07D.2060203@bogus.com>
Cc: nanog@nanog.org

joel jaeggli wrote:

>>> The problem is that physical layer of 100GE (with 10*10G) and
>>> 10*10GE are identical (if same plug and cable are used both for
>>> 100GE and 10*10GE).
>> Interesting. Well, I would say if there are no technical
>> improvements that will significantly improve performance over the best
>> possible carrier Ethernet bonding implementation, and no cost savings
>> at the physical layer over picking the higher data rate physical
>> layer standard, _after_ considering the increased hardware costs
>> due to newly manufactured components for a standard that is just
>> newer.
> There is a real-estate problem. 10 SFP+ connectors take a lot more
> space than one QSFP+. MTP/MPO connectors and the associated trunk ribbon
> cables are a lot more compact than the equivalent 10GbE footprint
> terminated as LC.

That's why I wrote:

>>> (if same plug and cable are used both for
>>> 100GE and 10*10GE).

As mentioned in the 40G thread, the 24-port 40GE interface module
of the Extreme BD X8 can be used as 96 ports of 10GE (each 40GE
port broken out as 4*10GE).
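
For reference, the arithmetic behind that claim, as a back-of-the-envelope
sketch rather than vendor documentation (the port and connector counts are
plain multiplication; nothing here is specific to the BD X8):

    # Faceplate arithmetic for the real-estate point: the same 960 Gb/s of
    # 10GE capacity as discrete SFP+ ports vs. 4*10GE breakouts from QSFP+.
    # Counts are simple arithmetic, not vendor data.
    lanes_per_qsfp = 4            # one QSFP+ carries four 10G lanes
    qsfp_ports = 24               # e.g. a 24-port 40GE module
    sfp_ports = qsfp_ports * lanes_per_qsfp

    print("10GE interfaces via breakout:", sfp_ports)              # 96
    print("front-panel connectors, discrete SFP+/LC:", sfp_ports)  # 96
    print("front-panel connectors, QSFP+/MPO:", qsfp_ports)        # 24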

> When you add CWDM, as 40Gb/s LR4 does, the fiber count
> drops by a lot.

That's also possible with 4*10GE, and 4*10GE is a lot more
flexible: it trivially allows a degraded 3*10GE failure mode
and tolerates very large skew (sketched below).
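
To make the failure-mode point concrete, here is a minimal sketch, assuming
per-flow hashing over the bundle members (the link names, flow keys and CRC32
hash are illustrative, not how any particular LAG implementation works): when
one of the four 10GE members fails, flows are simply re-hashed over the
remaining three, and because every flow stays pinned to a single physical
10GE link, the members never need to be deskewed against each other.

    # Illustrative only: per-flow hashing over a 4*10GE bundle that keeps
    # forwarding as 3*10GE after a member failure. Real LAG hashing is done
    # in hardware; the names and hash choice here are made up.
    import zlib

    def pick_link(flow_key, links):
        """Map a flow to one of the currently working member links."""
        if not links:
            raise RuntimeError("no working links left in the bundle")
        return links[zlib.crc32(flow_key.encode()) % len(links)]

    links = ["10ge-1", "10ge-2", "10ge-3", "10ge-4"]     # 4*10GE bundle
    flows = ["hostA->hostB", "hostC->hostD", "hostE->hostF"]

    print("all four members up:")
    for f in flows:
        print(" ", f, "->", pick_link(f, links))

    links.remove("10ge-3")                               # one member fails
    print("degraded 3*10GE mode:")
    for f in flows:
        print(" ", f, "->", pick_link(f, links))

A 100GE PCS that stripes blocks across ten lanes, by contrast, has to bound
and compensate inter-lane skew, which is where the flexibility difference
comes from.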

							Masataka Ohta


