

Re: rack power question


In-Reply-To: <47E6CA67.7010808@bogus.com>
Cc: Ben Butler <ben.butler@c2internet.net>, nanog@merit.edu
From: Marshall Eubanks <tme@multicasttech.com>
Date: Mon, 24 Mar 2008 01:12:40 -0400
To: Joel Jaeggli <joelja@bogus.com>
Errors-To: owner-nanog@merit.edu


The interesting thing is how, in a way, we seem to have come full
circle. I am sure lots of people can remember large rooms full of
racks of vacuum tube equipment, which required serious power and
cooling. On one NASA project I worked on, when the vacuum tube
equipment was replaced by solid state in the late 1980s, there was
lots of empty floor space and we marveled at how much power we were
saving. In fact, after the switch there was almost two orders of
magnitude too much cooling for the new equipment (200 tons to 5,
IIRC), and we had to spend good money to replace the old cooling
system with a smaller one. Now we seem to have expanded to more than
fill the previous tube-based power and space requirements, and I
suspect some people wish they could get their old cooling plants back.

Regards
Marshall


On Mar 23, 2008, at 5:23 PM, Joel Jaeggli wrote:
>
> Ben Butler wrote:
>> There comes a point where you can't physically transfer the energy
>> using air any more - not unless you wanna break the laws o' physics,
>> Cap'n (couldn't resist, sorry) - to your DX system: gas, then water,
>> then in-rack (expensive) cooling, water and CO2. Sooner or later we
>> will sink the whole room in oil, much like they used to do with
>> Crays.
>
> The problem there is actually the thermal gradient involved. The
> fact of the matter is you're using ~15C air to keep equipment cooled
> to ~30C. Your car is probably in the low 20% range as far as thermal
> efficiency goes, is generating on the order of 200kW, and has an
> engine compartment enclosing a volume of roughly half a rack... All
> that waste heat is removed by air, the difference being that it runs
> at around 250C with some hot spots approaching 900C.
>
> Increase the width of the thermal gradient and you can pull much
> more heat out of the rack without moving more air.
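
To put rough numbers on that point: a quick Python sketch of the
standard sensible-heat relation (P = mdot * c_p * dT), assuming dry air
at roughly 1.2 kg/m^3, c_p of about 1005 J/(kg*K), and a hypothetical
10 kW rack:

# Airflow needed to carry a given heat load at a given temperature rise,
# using P = mdot * c_p * dT for dry air (assumed properties below).
AIR_DENSITY = 1.2      # kg/m^3, assumed near sea level at ~20C
AIR_CP = 1005.0        # J/(kg*K), assumed for dry air

def airflow_m3_per_s(heat_w, delta_t_k):
    """Volumetric airflow (m^3/s) to remove heat_w watts at a delta_t_k rise."""
    mass_flow = heat_w / (AIR_CP * delta_t_k)   # kg/s
    return mass_flow / AIR_DENSITY              # m^3/s

for dt in (10, 15, 30):
    flow = airflow_m3_per_s(10_000, dt)         # hypothetical 10 kW rack
    print(f"dT = {dt:2d} K -> {flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")

Doubling the allowable temperature rise halves the airflow needed for
the same heat load, which is exactly why widening the gradient matters.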
>
> 15 years ago I would have told you that gallium arsenide would be a
> lot more common in general-purpose semiconductors for precisely this
> reason. But silicon has proved superior along a number of other
> dimensions.
>
>> Alternatively we might need to fit the engineers with crampons,
>> climbing ropes and ice axes to stop them being blown over by the
>> 70 mph winds in your datacenter as we try to shift the volumes of
>> air necessary to transfer the energy back to the HVAC for heat-pump
>> exchange to remote chillers on the roof.
>> In my humble experience, the problems are 1) heat, 2) backup UPS,
>> 3) backup generators, 4) LV/HV supply to the building. While you
>> will be very constrained by 4 in terms of upgrades unless you spend
>> a lot of money, the practicalities of 1, 2 and 3 mean that you will
>> have spent a significant amount of money getting to the point where
>> you need to worry about 4.
>> Given you are not worried about 1, I wonder about the scale of the
>> application or your comprehension of the problem.
>> The bigger trick is planning for upgrades of a live site where you
>> need to increase air con, UPS and generators.
>> Economically, that 10kW of electricity has to be paid for in
>> addition to any charge for the rack space - plus margined,
>> credit-risked and cash-flowed. The relative charge for the
>> electricity consumption has less to do with our ability to deliver
>> and cool it in a single rack than with the cost of having four racks
>> in a 2.5kW-per-rack datacenter and paying for the same amount of
>> electricity. Is the racking charge really the significant expense
>> any more?
>> For the sake of argument: 4 racks at £2,500 pa in a 2.5kW-per-rack
>> datacenter, or 1 rack at £10,000 pa in a 10kW-per-rack datacenter -
>> which would you rather have? Is the cost of delivering (and cooling)
>> 10kW to a rack more or less than 400% of the cost of delivering
>> 2.5kW per rack? I submit that it is more than 400%. What about the
>> hardware - per MIPS / CPU horsepower, am I paying more or less in a
>> conventional 1U pizza-box format or a high-density blade format? I
>> submit the blades cost more in capex and there is no opex saving.
>> What is the point of having a high-density server solution if I can
>> only half fill the rack?
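
Putting rough numbers on the 400% question, and taking the figures
above as 2.5 kW and 10 kW per rack: a small Python sketch of the
customer-side arithmetic, using the hypothetical prices quoted above:

# Customer-side cost of the two hypothetical offerings quoted above.
RACK_A_KW, RACK_A_PRICE_GBP = 2.5, 2_500     # four of these racks
RACK_B_KW, RACK_B_PRICE_GBP = 10.0, 10_000   # one of these racks

total_a = 4 * RACK_A_PRICE_GBP               # GBP 10,000 pa
total_b = 1 * RACK_B_PRICE_GBP               # GBP 10,000 pa

per_kw_a = total_a / (4 * RACK_A_KW)         # GBP 1,000 per kW pa
per_kw_b = total_b / RACK_B_KW               # GBP 1,000 per kW pa
print(total_a, total_b, per_kw_a, per_kw_b)

The customer pays the same 1,000 GBP per kW per annum either way; the
difference sits entirely in the provider's cost of concentrating 10 kW
of power and cooling into a single rack, which Ben submits is more than
four times the 2.5 kW case.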
>> I think the problem is that people (customers) on the whole don't
>> understand the problem: they can grasp the concept of paying for
>> physical space, but can't wrap their heads around the more abstract
>> concept of paying for the electricity consumed by what you put in
>> that space, and using that to come up with a TCO for comparisons. So
>> they simply see the entire hosting bill and conclude they have to
>> stuff as many processors as possible into the rack space, and if
>> that is a problem, it is one for the colo facility to deliver at the
>> same price.
>> I do find myself increasingly feeling that the current market
>> direction is simply stupid and has had far too much input from sales
>> and marketing people.
>> Let alone the question of whether the customer's business is
>> efficient in terms of the amount of CPU compute power required for
>> their business to generate $1 of customer sales/revenue.
>> Just because some colo customers have cr*ppy business models
>> (delivering marginal benefit for very high compute overheads, with
>> an inability to pay for things in a manner that reflects their worth
>> because they are incapable of extracting the value from them), do we
>> really have to drag the entire industry down to the lowest common
>> denominator of f*ckwit?
>> Surely we should be asking exactly what is driving the demand for
>> high-density computing, in which market sectors, and whether this is
>> actually the best technical solution to the problem. I don't care if
>> IBM, HP etc. want to keep selling new shiny boxes each year because
>> they are telling us we need them - do we really?
>> Kind Regards
>> Ben
>> -----Original Message-----
>> From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On =20
>> Behalf Of
>> Valdis.Kletnieks@vt.edu
>> Sent: 23 March 2008 02:34
>> To: Patrick Giagnocavo
>> Cc: nanog@nanog.org
>> Subject: Re: rack power question
>

