[156805] in North American Network Operators' Group


Re: /. Terabit Ethernet is Dead, for Now

daemon@ATHENA.MIT.EDU (Dan Shechter)
Thu Sep 27 09:29:27 2012

In-Reply-To: <20120927125157.GT9750@leitl.org>
From: Dan Shechter <danshtr@gmail.com>
Date: Thu, 27 Sep 2012 15:26:34 +0200
To: Eugen Leitl <eugen@leitl.org>
Cc: Beowulf@beowulf.org, NANOG list <nanog@nanog.org>
Errors-To: nanog-bounces+nanog.discuss=bloom-picayune.mit.edu@nanog.org

If they had rolled out 1000G networks now, I guess we would have to plug in
17 MTP interfaces ;)



HTH,
Dan #13685 (RS/Sec/SP)
 The CCIE troubleshooting blog: http://dans-net.com
 Bring order to your Private VLAN network: http://marathon-networks.com





On Thu, Sep 27, 2012 at 2:51 PM, Eugen Leitl <eugen@leitl.org> wrote:

>
> http://slashdot.org/topic/datacenter/terabit-ethernet-is-dead-for-now/
>
> Terabit Ethernet is Dead, for Now
>
> by Mark Hachman | September 26, 2012
>
> A straw poll of the IEEE's high-speed Ethernet group finds that 400-Gbits/s
> is almost unanimously preferred.
>
> Sorry, everybody: terabit Ethernet looks like it will have to wait a while
> longer.
>
> The IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group
> met this week in Geneva, Switzerland, with attendees concluding, almost to
> a man, that 400 Gbits/s should be the next step in the evolution of
> Ethernet. A straw poll at its conclusion found that 61 of the 62 attendees
> that voted supported 400 Gbits/s as the basis for the near term "call for
> interest," or CFI.
>
> The bandwidth call to arms was sounded by a July report by the IEEE, which
> concluded that, if current trends continue, networks will need to support
> capacity requirements of 1 terabit per second in 2015 and 10 terabits per
> second by 2020. In 2015 there will be nearly 15 billion fixed and
> mobile-networked devices and machine-to-machine connections.
>
> The report goes on to predict that, from 2010 to 2015, global IP traffic
> will experience a fourfold increase from 20 exabytes per month in 2010 to
> 81 exabytes per month in 2015, a 32 percent CAGR. Storage is expected to
> grow to 7910 exabytes in 2015, with over half of it accessed via Ethernet.
> Of course, one of the first places the new, faster Ethernet links will
> appear will be in the data center.
>
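A quick sanity check of that growth arithmetic (the figures are the
article's; the Python snippet is only illustrative):

    # 20 EB/month in 2010 growing to 81 EB/month in 2015
    start_eb, end_eb, years = 20.0, 81.0, 5
    growth = end_eb / start_eb            # ~4.05x, i.e. the "fourfold" increase
    cagr = growth ** (1.0 / years) - 1    # ~0.32, i.e. the quoted 32 percent CAGR
    print("growth factor: %.2fx, CAGR: %.1f%%" % (growth, cagr * 100))
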
> With that in mind, the IEEE 802.3 group began formulating a response.
> However, virtually all attendees seemed to be in agreement before the
> meeting opened, as only one presentation focused on the feasibility of
> one-terabit Ethernet, eventually concluding that 400 Gbits/s made more
> sense in the near term.
>
> Kai Cui and Peter Stassar from Huawei Technologies suggested that the most
> cost-effective method for developing a 1-terabit Physical Medium Dependent
> (PMD) would be to leverage today's 100-Gbit technology, which isn't yet in
> high volume, and therefore not cost-optimized. "[The] cost target for 1Tb/s
> needs to be at or below 100G cost/bit*sec and required R&D investments
> should be modest," they wrote as part of their presentation.
>
> "100GbE technology based architecture would imply 40 lanes at 25G, which
> clearly would imply impractically big packages and large amount of
> interface signals," Cui and Stassar added, meaning the number of electrical
> and optical interface lanes would need to be reduced to enable a reasonable
> package size. While alternative modulation formats could be used (5λx200G
> DP-16QAM, 4 bits/symbol, 25G), "neither the multi-level nor the phase
> modulation format based technologies have been demonstrated to be
> sufficiently mature to justify usage in client PMDs towards 100Gb/s to
> 1Tb/s applications."
>
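The lane count they cite follows directly from dividing the aggregate rate
by today's 25 Gb/s lanes; a trivial illustration (mine, not from their
presentation):

    # lanes required per interface if everything is built from 25 Gb/s lanes
    lane_gbps = 25
    for total_gbps in (100, 400, 1000):
        print("%4d Gb/s -> %2d lanes" % (total_gbps, total_gbps // lane_gbps))
    # 100 Gb/s -> 4 lanes, 400 Gb/s -> 16 lanes, 1000 Gb/s -> 40 lanes
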
> They concluded: "1Tb/s does seem a 'bridge too far' at least for the
> coming 3 to 4 years."
>
> Chris Cole of optical components maker Finisar presented the case for a
> 400-Gbit CFI, with backing from Brocade, Cisco, HP, IBM, Intel, Juniper,
> and Verizon, among others.
>
> Like Huawei's Cui and Stassar, Cole indicated that 400-Gbit Ethernet can
> reuse 100 GbE building blocks, and fits within the existing dense 100 GbE
> roadmap. Faster data rates require "exotic" implementations, with higher
> R&D investments required and a longer time to market. "Data rates beyond
> 400Gb/s require an increasingly impractical number of lanes if 100GbE
> technology is reused," he said.
>
> 400 Gbit/s also makes more sense than a 4×100 Gb/s link aggregation, Cole
> added, as having fewer items promotes management efficiency. Individual
> link congestion is also a concern: "Without faster links, [the] link count
> grows exponentially, therefore management pain grows exponentially."
>
> Cole suggested that a potential 400 Gb/s MAC/PCS ASIC could be fabricated
> in either 20- or 28-nm CMOS, using a 400-bit wide bus and a 1 GHz clock
> rate. "There is a strong desire to reuse 802.3ba, 802.3bj, and 802.3bm
> technology building blocks," he said.
>
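The bus figure is internally consistent: a 400-bit datapath clocked at
1 GHz moves 400 Gb/s (my arithmetic below, not Cole's slide):

    # internal datapath throughput = bus width (bits) x clock rate (Hz)
    bus_bits, clock_hz = 400, 1e9
    print("%d Gb/s" % (bus_bits * clock_hz / 1e9))   # prints: 400 Gb/s
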
> That's not to say that terabit Ethernet won't be needed, Cole concluded,
> or 1.6 terabit Ethernet, at that. The timeframes for those follow-on CFIs
> could be between 3 and 6 years, he said.
>
> The CFI hasn't formally occurred; until it does, nothing has been decided.
> So far, the most likely dates for formalizing the CFI are either next month
> or November. But at this point, it looks like terabit Ethernet is a dead
> duck, at least for the near future.
>
>
