rack power question

Marshall Eubanks tme at multicasttech.com
Mon Mar 24 05:12:40 UTC 2008


The interesting thing is how, in a way, we seem to have come full
circle. I am sure lots of people can remember large rooms full of
racks of vacuum tube equipment, which required serious power and
cooling. On one NASA project I worked on, when the vacuum tube gear
was replaced by solid state in the late 1980s, there was lots of
empty floor space and we marveled at how much power we were saving.
In fact, after the switch we had roughly forty times more cooling
than the new equipment needed (200 tons versus 5, if I recall
correctly), and we had to spend good money to replace the old
cooling plant with a smaller one. Now we seem to have expanded to
more than fill the old tube-era power and space requirements, and I
suspect some people wish they could get their old cooling plants back.
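
(For scale: a ton of refrigeration is 12,000 BTU/hr, about 3.5 kW of
heat removal. The quick conversion below just restates the figures
recalled above; the tonnages themselves are only from memory.)

    # Scale check on the cooling figures above (recalled, not measured).
    # 1 ton of refrigeration = 12,000 BTU/hr, about 3.517 kW of heat removal.
    TON_TO_KW = 3.517
    old_tons, new_tons = 200, 5
    print(f"old plant: {old_tons * TON_TO_KW:.0f} kW of cooling")
    print(f"new load:  {new_tons * TON_TO_KW:.0f} kW of cooling")
    print(f"oversize factor: {old_tons / new_tons:.0f}x")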

Regards
Marshall


On Mar 23, 2008, at 5:23 PM, Joel Jaeggli wrote:
>
> Ben Butler wrote:
>> There comes a point where you can't physically transfer the energy
>> using air any more - not unless you want to break the laws of
>> physics, Captain (couldn't resist, sorry) - so you move to your DX
>> system, then gas, then water, then in-rack (expensive) cooling,
>> water and CO2.  Sooner or later we will sink the whole room in oil,
>> much like they used to do with Crays.
>
> The problem there is actually the thermal gradient involved. The fact
> of the matter is you're using ~15 C air to keep equipment cooled to
> ~30 C. Your car is probably in the low-20% range as far as thermal
> efficiency goes, generates on the order of 200 kW, and has an engine
> compartment enclosing a volume of roughly half a rack... All that
> waste heat is removed by air, the difference being that it runs at
> around 250 C with some hot spots approaching 900 C.
>
> Increase the width of the thermal gradient and you can pull much  
> more heat out of the rack without moving more air.
>
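(A back-of-envelope way to see this: the heat an airstream can carry
is Q = m_dot * cp * dT, so for a fixed airflow, doubling the
inlet-to-outlet temperature rise doubles the heat you can remove.
The 0.5 m^3/s rack airflow in the sketch below is an illustrative
assumption, not a figure from this thread.)

    # Heat carried by an airstream: Q = m_dot * cp * dT.
    AIR_DENSITY = 1.2   # kg/m^3, air at roughly room temperature
    AIR_CP = 1005.0     # J/(kg*K), specific heat of air

    def heat_removed_kw(airflow_m3_per_s, delta_t_c):
        """kW of heat a given airflow carries away for a temperature rise dT."""
        mass_flow = airflow_m3_per_s * AIR_DENSITY        # kg/s
        return mass_flow * AIR_CP * delta_t_c / 1000.0    # kW

    airflow = 0.5   # m^3/s through one rack (roughly 1,000 CFM) - assumed
    for delta_t in (15, 30):
        print(f"dT = {delta_t:2d} C -> {heat_removed_kw(airflow, delta_t):.1f} kW")
    # Same airflow, twice the gradient, twice the heat removed.
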
> 15 years ago I would have told you that gallium arsenide would be a
> lot more common in general-purpose semiconductors for precisely this
> reason, but silicon has proved superior along a number of other
> dimensions.
>
>> Alternatively we might need to fit the engineers with crampons,
>> climbing ropes and ice axes to stop them being blown over by the
>> 70 mph winds in your datacenter as we try to shift the volumes of
>> air necessary to carry the energy back to the HVAC for heat-pump
>> exchange to remote chillers on the roof.
>> In my humble experience, the problems are 1> Heat, 2> Backup UPS,
>> 3> Backup Generators, 4> LV/HV Supply to the building.
>> While 4 will constrain you heavily in terms of upgrades unless you
>> spend a lot of money on it, the practicalities of 1, 2 & 3 mean you
>> will already have spent a significant amount of money by the time
>> you need to worry about 4.
>> Given that you are not worried about 1, I wonder about either the
>> scale of the application or your comprehension of the problem.
>> The bigger trick is planning for upgrades of a live site where you
>> need to increase air-con, UPS and generator capacity.
>> Economically, that 10 kW of electricity has to be paid for in
>> addition to any charge for the rack space - plus margined,
>> credit-risked and cash-flowed.  The relative charge for the
>> electricity consumed has less to do with our ability to deliver and
>> cool it in a single rack than with the cost of having four racks in
>> a 2.5 kW-per-rack datacenter and paying for the same amount of
>> electricity.  Is the racking charge really the significant expense
>> any more?
>> For the sake of argument: 4 racks at £2,500 pa in a 2.5 kW-per-rack
>> datacenter, or 1 rack at £10,000 pa in a 10 kW-per-rack datacenter -
>> which would you rather have?  Is the cost of delivering (and
>> cooling) 10 kW to a single rack more or less than 400% of the cost
>> of delivering 2.5 kW per rack?  I submit that it is more than 400%.
>> What about the hardware - per MIPS of CPU horsepower, am I paying
>> more or less in a conventional 1U pizza-box format or in a
>> high-density blade format?  I submit that the blades cost more in
>> capex and there is no opex saving.  What is the point of having a
>> high-density server solution if I can only half fill the rack?
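
(A small illustrative calculation of that trade-off. The electricity
price and the 25% delivery/cooling premium on the dense rack are
made-up assumptions, chosen only to show the shape of the argument;
they are not figures from this thread.)

    # Rough yearly cost comparison for the two options described above.
    POWER_PRICE_PER_KWH = 0.10   # currency units per kWh - assumed
    HOURS_PER_YEAR = 24 * 365

    def yearly_cost(racks, rack_fee, kw_per_rack, delivery_premium=1.0):
        """Rack fees plus electricity, with an optional premium reflecting
        how much harder dense power is to deliver and cool."""
        energy_cost = racks * kw_per_rack * HOURS_PER_YEAR * POWER_PRICE_PER_KWH
        return racks * rack_fee + energy_cost * delivery_premium

    low_density = yearly_cost(racks=4, rack_fee=2500, kw_per_rack=2.5)
    high_density = yearly_cost(racks=1, rack_fee=10000, kw_per_rack=10,
                               delivery_premium=1.25)
    print(f"4 x 2.5 kW racks: {low_density:,.0f} per year")
    print(f"1 x 10 kW rack:   {high_density:,.0f} per year")
    # Same total power and the same rack spend, but once delivering and
    # cooling 10 kW in one footprint carries any premium, the dense rack
    # costs more.
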
>> I think the problem is that people (customers) on the whole don't
>> understand the problem: they can grasp the concept of paying for
>> physical space, but can't wrap their heads around the more abstract
>> concept of paying for the electricity consumed by what they put in
>> that space, so they never come up with a TCO to compare.  They
>> simply see the entire hosting bill and conclude they have to stuff
>> as many processors as possible into the rack space, and if that is
>> a problem it is one for the colo facility to deliver at the same
>> price.
>> I do find myself increasingly feeling that the current market
>> direction is simply stupid and has had far too much input from
>> sales and marketing people - let alone the question of whether the
>> customer's business is efficient in terms of the amount of CPU
>> compute power it needs to generate $1 of sales/revenue.
>> Just because some colo customers have cr*ppy business models -
>> delivering marginal benefit for very high compute overheads, and
>> unable to pay for things in a manner that reflects their worth
>> because they are incapable of extracting the value from them - do
>> we really have to drag the entire industry down to the lowest
>> common denominator of f*ckwit?
>> Surely we should be asking exactly what is driving the demand for
>> high-density computing, in which market sectors, and whether this
>> is actually the best technical solution to the problem.  I don't
>> care if IBM, HP etc. want to keep selling us new shiny boxes each
>> year because they are telling us we need them - do we really?
>> Kind Regards
>> Ben
>> -----Original Message-----
>> From: owner-nanog at merit.edu [mailto:owner-nanog at merit.edu] On  
>> Behalf Of
>> Valdis.Kletnieks at vt.edu
>> Sent: 23 March 2008 02:34
>> To: Patrick Giagnocavo
>> Cc: nanog at nanog.org
>> Subject: Re: rack power question
>



