

Re: FCoE Deployment


In-Reply-To: <A4428DD467A70E48B98693F1E20B198C14A89B9D@V-ENT02.eu.netappliant.com>
From: David <david@davidswafford.com>
Date: Wed, 22 Feb 2012 18:10:32 -0500
To: Pierce Lynch <p.lynch@netappliant.com>
Cc: "nanog@nanog.org" <nanog@nanog.org>

Our reason, BTW, was to cut down on cabling/switch costs; it starts to
add up when you consider how many blades get eaten by 1 Gb copper. We're
going to DL580s and a few HP chassis. A chassis used to eat nearly 64
copper 1 Gb and 32 Fibre Channel connections; with FCoE/CNAs, we're
literally talking 4 x 10 Gb cables (16 blades).
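
For anyone counting along, here's a quick back-of-the-envelope sketch in
Python using just the figures above; the variable names are mine, not
anything official:

    # per-chassis cable math from the numbers cited in this post
    legacy_copper_1gb = 64   # 1 Gb copper connections a chassis used to eat
    legacy_fc = 32           # Fibre Channel connections on top of that
    fcoe_cables_10gb = 4     # 10 Gb FCoE/CNA cables for the same 16 blades

    legacy_total = legacy_copper_1gb + legacy_fc   # 96 cable runs
    reduction = legacy_total / fcoe_cables_10gb    # 24.0

    print(f"legacy: {legacy_total} cables, FCoE/CNA: {fcoe_cables_10gb} cables")
    print(f"that's {reduction:.0f}x fewer cable runs per chassis")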

David

Sent from an email server.

On Feb 22, 2012, at 4:10 PM, Pierce Lynch <p.lynch@netappliant.com> wrote:

>> FCoE was until very recently the only way to do centralized block storage
>> to the Cisco UCS server blades, so I'd imagine it's quite widely adopted.
>> That said, we don't run FCoE outside of the UCS <black box> - its uplinks
>> to the SAN are just regular FC.
>
> Agreed; the only FCoE installations I have come across are for Cisco UCS
> chassis. Personally, it's not something I have seen regularly adopted as
> of yet outside proprietary hardware configurations such as UCS deployments.
>
> Certainly also keen to hear about any other use cases and deployments
> others have implemented using full-blown FCoE.
>
> Kind regards,
>
> Pierce Lynch

