[130038] in North American Network Operators' Group

Re: Routers in Data Centers

daemon@ATHENA.MIT.EDU (Chris Adams)
Sun Sep 26 13:48:15 2010

Date: Sun, 26 Sep 2010 12:47:55 -0500
From: Chris Adams <cmadams@hiwaay.net>
To: Joel Jaeggli <joelja@bogus.com>
Mail-Followup-To: Chris Adams <cmadams@hiwaay.net>,
	Joel Jaeggli <joelja@bogus.com>,
	"nanog@nanog.org" <nanog@nanog.org>
In-Reply-To: <7392622F-F7CF-446B-A7AF-E1CD5516039C@bogus.com>
Cc: "nanog@nanog.org" <nanog@nanog.org>
Errors-To: nanog-bounces+nanog.discuss=bloom-picayune.mit.edu@nanog.org

Once upon a time, Joel Jaeggli <joelja@bogus.com> said:
> On Sep 26, 2010, at 8:26, Chris Adams <cmadams@hiwaay.net> wrote:
> > There are servers and storage arrays that have a front that is nothing
> > but hot-swap hard drive bays (plugged into backplanes), and they've been
> > doing front-to-back cooling since day one.  Maybe the router vendors
> > need to buy a Dell, open the case, and take a look.
> 
> The backplane for a sata disk array is 8 wires per drive plus a common power bus.

Server vendors managed cooling just fine for years with 80-pin SCA
connectors.  Hard drives are also harder to cool, since each one is a
solid block filling its bay, unlike a card of chips.

I'm not saying the problems are the same, but I am saying that a
backplane making cooling "hard" is not a good excuse, especially when
the small empty chassis costs $10K+.
-- 
Chris Adams <cmadams@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
