Colocation in the US.

Paul Vixie paul at
Thu Jan 25 05:47:46 UTC 2007

> If you have water for the racks:

we've all gotta have water for the chillers. (compressors pull too much power,
gotta use cooling towers outside.)


i love knuerr's stuff.  and with mainframes or blade servers or any other
specialized equipment that has to come all the way down when it's maintained,
it's a fine solution.  but if you need a tech to work on the rack for an
hour, because the rack is full of general-purpose 1U's, and you can't do it
because you can't leave the door open that long, then internal heat exchangers
are the wrong solution.

knuerr also makes what they call a "CPU cooler" which adds a top-to-bottom
liquid manifold system for cold and return water, and offers connections to
multiple devices in the rack.  by collecting the heat directly through paste
and aluminum and liquid, and not depending on moving air, huge efficiency
gains are possible.  and you can dispatch a tech for hours on end without
having to power off anything in the rack except whatever's being serviced.
note that by "CPU" they mean "rackmount server" in nanog terminology.  CPU's
are not the only source of heat, by a long shot.  knuerr's stuff is expensive,
and since there's no standard for it, so far you need knuerr-compatible servers.

i envision a stage in the development of 19-inch rack mount stuff, where in
addition to console (serial for me, KVM for everybody else), power, ethernet,
and IPMI or ILO or whatever, there are two new standard connectors on the
back of every server, and we've all got boxes of standard pigtails to connect
them to the rack.  one will be cold water, the other will be return water.
note that when i rang this bell at MFN in 2001, there was no standard nor any
hope of a standard.  today there's still no standard but there IS hope for one.

> (there are other vendors too, of course)

somehow we've got standards for power, ethernet, serial, and KVM.  we need
a standard for cold and return water.  then server vendors can use conduction
and direct transfer rather than forced air and convection.  between all the
fans in the boxes and all the motors in the chillers and condensers and
compressors, we probably cause 60% of datacenter-related carbon for cooling.
with just cooling towers and pumps it ought to be more like 15%.  maybe
google will decide that a 50% savings on their power bill (or 50% more
computes per hydroelectric dam) is worth sinking some leverage into this.
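the arithmetic behind that "50% savings" guess can be sketched in a few lines.
this is back-of-envelope only: it takes the post's 60% and 15% figures at face
value, uses an arbitrary 1000 kW of IT load, and counts server fans as part of
the cooling overhead rather than the IT draw.

```python
def total_power(it_kw, cooling_fraction):
    """Total facility draw when cooling is the given fraction of total.

    If cooling is fraction f of the total, the remaining (1 - f) of
    the total is the IT load itself, so total = it / (1 - f).
    """
    return it_kw / (1.0 - cooling_fraction)

it_load = 1000.0  # kW of useful compute (arbitrary, for illustration)

# forced air: fans + chillers + condensers + compressors ~ 60% of total
forced_air = total_power(it_load, 0.60)   # 2500 kW
# conduction/liquid: cooling towers + pumps only ~ 15% of total
water_only = total_power(it_load, 0.15)   # ~1176 kW

savings = 1.0 - water_only / forced_air
print(f"forced air: {forced_air:.0f} kW, water only: {water_only:.0f} kW")
print(f"power-bill savings: {savings:.0%}")  # roughly half
```

with those inputs the same compute comes in at roughly 1176 kW instead of
2500 kW, about a 53% cut, which is where the "50% savings on their power
bill" figure lands.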


that's just creepy.  safe, i'm sure, but i must be old, because it's creepy.
