[130039] in North American Network Operators' Group
Re: Routers in Data Centers
daemon@ATHENA.MIT.EDU (Joel Jaeggli)
Sun Sep 26 14:08:40 2010
From: Joel Jaeggli <joelja@bogus.com>
To: Chris Adams <cmadams@hiwaay.net>
In-Reply-To: <20100926174755.GB18648@hiwaay.net>
Date: Sun, 26 Sep 2010 11:09:24 -0700
Cc: "nanog@nanog.org" <nanog@nanog.org>
Errors-To: nanog-bounces+nanog.discuss=bloom-picayune.mit.edu@nanog.org
Joel's widget number 2
On Sep 26, 2010, at 10:47, Chris Adams <cmadams@hiwaay.net> wrote:
> Once upon a time, Joel Jaeggli <joelja@bogus.com> said:
>> On Sep 26, 2010, at 8:26, Chris Adams <cmadams@hiwaay.net> wrote:
>>> There are servers and storage arrays that have a front that is nothing
>>> but hot-swap hard drive bays (plugged into backplanes), and they've been
>>> doing front-to-back cooling since day one. Maybe the router vendors
>>> need to buy a Dell, open the case, and take a look.
>>
>> The backplane for a SATA disk array is 8 wires per drive plus a common power bus.
>
> Server vendors managed cooling just fine for years with 80 pin SCA
> connectors. Hard drives are also harder to cool, as they are a solid
> block, filling the space, unlike a card of chips.
It's the same 80 wires on every single drive in the string.
There are fewer conductors embedded in a 12-drive SCA backplane than in a
12-drive SATA backplane; in both cases they are generally two-layer PCBs.
Compare that to the 10+ layer PCBs, approaching 1/4" thick, in a router.
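To make that concrete, here is a rough tally. This is only a sketch: it reuses the thread's own figures (the shared ~80-wire SCSI bus, 8 wires per SATA drive), and the assumption that each SATA port is routed point-to-point back to the controller is mine, not stated above.

```python
# Rough conductor tally for a 12-drive backplane (illustration only).
drives = 12

# SCA / parallel SCSI: the same ~80-conductor bus is shared by every
# drive position, so the backplane routes roughly one bus width of traces.
sca_signal_traces = 80

# SATA: point-to-point, so each drive gets its own ~8-wire link back to
# the controller, plus a common power bus shared by all bays.
sata_signal_traces = drives * 8

print(f"SCA backplane:  ~{sca_signal_traces} shared signal conductors")
print(f"SATA backplane: ~{sata_signal_traces} dedicated signal conductors")
```

Either way it is a handful of traces on a cheap board, nothing like a router's fabric backplane.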
Hard drives are 6-12 W each; a processor complex that's north of 200 W per
card is a rather different cooling exercise.
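For a sense of scale, a back-of-envelope airflow estimate. The wattages reuse the figures in this thread; the 15 K allowable air temperature rise and the standard Q = rho * V * cp * dT relation are assumptions for illustration only.

```python
# Back-of-envelope: airflow needed to carry heat away, Q = rho * V * cp * dT.
# Wattages from the thread; air properties and temperature rise assumed.
RHO_AIR = 1.2       # kg/m^3, air density near sea level
CP_AIR = 1005.0     # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88

def required_cfm(watts, delta_t_k=15.0):
    """Volumetric airflow (CFM) to remove `watts` with a `delta_t_k` rise."""
    m3_per_s = watts / (RHO_AIR * CP_AIR * delta_t_k)
    return m3_per_s * M3S_TO_CFM

print(f"12-drive shelf at 12 W/drive: {required_cfm(12 * 12):.0f} CFM")
print(f"one >200 W line card:         {required_cfm(250):.0f} CFM")
```

Multiply the per-card figure by a chassis full of slots and the total heat is in the kilowatts, all of which has to move past that thick backplane.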
> I'm not saying the problems are the same, but I am saying that a
> backplane making cooling "hard" is not a good excuse, especially when
> the small empty chassis costs $10K+.
> --
> Chris Adams <cmadams@hiwaay.net>
> Systems and Network Administrator - HiWAAY Internet Services
> I don't speak for anybody but myself - that's enough trouble.
>