Fire, Power loss at Fisher Plaza in Seattle
darren at bolding.org
Fri Jul 3 15:57:03 CDT 2009
Power to some of the affected sections of the building has been restored via
existing onsite generators. The central power risers cannot be connected to
current generators in a timely manner due to excessive damage to the
electrical switching equipment (and those generators may still be in
standing water). These provide power to a number of colocated systems.
Temporary generators are on order to be connected to the central risers,
and the site expects that to be complete sometime late this evening. As
best I can tell, there is still no utility power connected to any of the
building.

The AC systems (chiller and CRAC units) are currently not working. It is not
clear to me whether these will be brought back on line when the temporary
generators are available, but I am assuming so.
It was pleasant to see the general positive attitude, sharing of information
and offers of assistance made by representatives of the various
tenants, customers, and carriers who were on the scene. The usual suspects
(companies and individuals) stepped up and took care of things, as they
always seem to.
On Fri, Jul 3, 2009 at 1:39 PM, Leo Bicknell <bicknell at ufp.org> wrote:
> In a message written on Fri, Jul 03, 2009 at 03:22:14PM -0400, Sean Donelan wrote:
> > Are you better off with a single "tier 4" data center, multiple
> > "tier 1" data centers, or something in between?
> It depends entirely on your dependency on connectivity.
> One extreme is something like a Central Office. Lots of cables
> from end-sites terminate in the building. Having a duplicate of
> the head end termination equipment on the opposite coast is darn
> near useless. If the building goes down, the users going through
> it go down. "Tier 4" is probably a good idea.
> The other extreme is a pure content play (YouTube, Google Search).
> Users don't care which data center they hit (within reason), and
> indeed often don't know. You're better off having data centers
> spread out all over, both so that you're likely to lose only one
> at a time and so that the loss of one is relatively unimportant.
> Once you're already in this architecture, Tier 1 is generally sufficient.
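[That spread-out content architecture can be sketched as a simple health-checked failover: send each user to the first datacenter whose health endpoint answers, so losing any single site just shifts traffic to the next one. The endpoint URLs and the `pick_datacenter` helper below are hypothetical illustrations, not anything described in the thread.]

```python
# Minimal sketch of health-checked failover across datacenters.
# All hostnames and helper names here are hypothetical examples.
import urllib.request

DATACENTERS = [
    "http://dc-west.example.com/health",
    "http://dc-east.example.com/health",
    "http://dc-eu.example.com/health",
]

def is_healthy(url, timeout=2):
    """Return True if the datacenter's health endpoint answers 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_datacenter(endpoints, probe=is_healthy):
    """Return the first healthy endpoint, or None if all are down.

    Because users don't care which site serves them, the loss of any
    one datacenter only moves traffic down the list.
    """
    for url in endpoints:
        if probe(url):
            return url
    return None
```

In practice this selection usually lives in DNS (GSLB) or anycast routing rather than in application code, but the logic is the same.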
> There are two problems though. First, most folks don't fit neatly
> in one of these buckets. They have some ties to local infrastructure,
> and some items which are not tied. Latency as a performance penalty
> is very subjective. A backup 1000 miles away is fine for many
> things, and very bad for some things.
> Second, most folks don't have choices. It would be nice if most
> cities had three each of Tier 1, 2, 3, and 4 data centers available so
> there was choice and competition but that's rare.
> Very few companies consider these choices rationally; often because
> choices are made by different groups. I am amazed how many times
> inside of an ISP the folks deploying the DNS and mail servers are
> firewalled from the folks deploying the network, to the point where
> you have to get to the President to reach common management. This
> leads to them making choices in opposite directions that end up
> costing the company extra money, and often resulting in much lower
> uptime than expected. Having the network group deploy a single point
> of failure to the "Tier 4" data center the server guys required is,
> well, silly.
> However, more important than all of this is testing your infrastructure.
> Would you feel comfortable walking into your data center and ripping
> the power cable out of some bit of equipment at random _right now_?
> If not, you have no faith your equipment will work in an outage.
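[Bicknell's "rip the power cable out at random" test is essentially a failure drill, and it can be scheduled rather than left to a brave engineer. A minimal dry-run sketch follows; the inventory hostnames and the `ipmitool` invocation are illustrative assumptions, so review any real version before pointing it at production gear.]

```python
# Sketch of a random power-failure drill, run as a dry run.
# Hostnames and the power-off command are illustrative only.
import random

INVENTORY = [
    "rtr1.example.net",
    "sw2.example.net",
    "srv3.example.net",
]

def choose_victim(inventory, rng=random):
    """Pick one device at random to power-fail in the drill."""
    return rng.choice(inventory)

def power_off_command(host):
    """Return the command a real drill would execute (here, only printed)."""
    return ["ipmitool", "-H", host, "chassis", "power", "off"]

if __name__ == "__main__":
    victim = choose_victim(INVENTORY)
    # Print instead of execute: the point is rehearsing the process.
    print("DRY RUN:", " ".join(power_off_command(victim)))
```

If the dry run already makes people nervous, that is the signal Bicknell is describing: you do not yet trust the equipment to survive an outage.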
> Leo Bicknell - bicknell at ufp.org - CCIE 3440
> PGP keys at http://www.ufp.org/~bicknell/
-- Darren Bolding --
-- darren at bolding.org --