Data Center Wiring Standards

William Yardley nanog at veggiechinese.net
Sat Sep 9 01:18:42 UTC 2006


[ Disclaimer - my experience is as someone who has set up lots of racks
and dealt with a number of colocation facilities and cabling
contractors. However, I haven't ever run a colo. ]

On Fri, Sep 08, 2006 at 05:36:09PM -0700, Rick Kunkel wrote:

> Can anyone tell me the standard way to deal with patch panels, racks,
> and switches in a data center used for colocation?
 
> Right now, we have a rack filled with nothing but patch panels.  We
> have some switches in another rack, and colocation customers scattered
> around other racks.  When a new customer comes in, we run a long wire
> from their computer(s) and/or other device(s) to the patch panel.
> Then, from the appropriate block connectors on the back of the panel,
> we run another wire that terminates in a RJ-45 to plug into the
> switch.

This way of doing things *can* be done neatly in some cases - it really
depends on how you have things set up, your size, and what your
customers' needs are.

For large carrier-neutral places like Equinix, Switch and Data, etc.,
where each customer usually has a small number of links coming into
their cage, and things are pretty non-standard (i.e., customers have
stuff other than a few Ethernet cables going to their equipment), that's
pretty much what they do - run a long cable through overhead cable
trough or fiber tray, and terminate it in a patch panel in the
customer's rack.

> My thoughts go like this:  We put a patch panel in each rack.  Each of
> these patch panels is permanently (more or less) wired to a patch
> panel in our main patch cabinet.  So, essentially what you've got is a
> main patch cabinet with a patch panel that corresponds to a patch
> panel in each other cabinet.  Making connection is cinchy and only
> requires 3-6 foot off-the-shelf cables.

This is a better way to do it IF your customers have pretty standard
needs. One facility I've worked at has six cables bundled together (not
25-pair cable, but similar - six Cat5 or Cat6 cables bundled within a
single jacket), going into a patch panel. 25-pair or bundled cabling
will make things neater, but usually costs more.

Obviously, be SUPER anal retentive about labelling, testing, running
cables, etc., or it's not worth doing at all. Come up with a scheme for
labelling (in our office, it's "a.b.c", where a is the rack number, b is
the position within the rack, and c is the port number) and stick to it.
Get a labeller designed for cables if you don't already have one (a
Brady, industrial P-Touch, Panduit, or something similar). Make sure
there is a standard way to do everything, and document / enforce the
standard. Someone has to be the cable n**i (does that count as a
Godwin?) or things will get messy fast.
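
For what it's worth, here's a rough Python sketch of that kind of
scheme (the rack / position / port values are made up for
illustration); the point is just that labels get generated and parsed
by one rule instead of typed freehand:

    # Toy sketch of the "a.b.c" (rack.position.port) labelling scheme
    # described above; field meanings are my assumption, not a standard.

    def make_label(rack, position, port):
        """Return a cable label like '12.03.24'."""
        return "%d.%02d.%02d" % (rack, position, port)

    def parse_label(label):
        """Split a label back into its fields; raises ValueError if malformed."""
        rack, position, port = [int(x) for x in label.split(".")]
        return {"rack": rack, "position": position, "port": port}

    print(make_label(12, 3, 24))      # 12.03.24
    print(parse_label("12.03.24"))    # {'rack': 12, 'position': 3, 'port': 24}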

If you're doing a standard setup to each rack, hire someone to do it for
you if you can afford it. It will be expensive, but probably worth it
unless you're really good (and fast) at terminating cable.

Either way, use a modular patch panel in the customer's rack, so you can
put a different kind of connector in each slot. That gives you more
flexibility later.

In terms of whether patch panels and switches should be mixed in the
same rack, opinions differ. It is, of course, difficult to terminate
patch panels when there are also big fat switches in the same rack.

I've usually done a mix anyway, but for your application, it might be
better to alternate panel and switch racks and run the connections
sideways between them.

Invest in lots of cable management; the bigger, the better. I assume you
already have cable management on these racks?

I like the Panduit horizontal ones, and either the Panduit vertical ones
or the CPI "MCS" ones. If you're doing a new buildout, or can start a
new set of racks, put extra space between them and use 10" wide (or
bigger) cable management sections.

I can give you some suggestions in terms of vendors and cabling outfits,
though most of the people I know of are in the Southern California area.

> I talked to someone else in the office here, and they believe that
> they've seen it done with a switch in each cabinet, although they
> couldn't remember if there was a patch panel as well.

Ok, so if most of your customers have a full rack or half rack, I would
suggest not putting a switch in each rack. In that case, you should
charge them a port fee for each uplink, which should encourage them to
use their own networking equipment.

Now, if most of your customers are using less than half a rack, aren't
setting up their own network equipment, and you're managing everything
for them, then you might want to put one 48-port switch (or two 24-port
switches) in each individual rack, with two uplinks from some central
aggregation switches to each.

I really don't think you want more than 4-6 cables going to any one
rack.
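
As a rough back-of-the-envelope sketch of that tradeoff (all the
numbers below are made-up assumptions, just to show the arithmetic):

    # Compare home-running every customer port to the central patch
    # cabinet vs. putting a switch in the rack with two uplinks.
    customer_ports_per_rack = 20   # assumed demand from one rack's customers
    switch_size = 48               # one 48-port switch in the rack
    uplinks = 2                    # uplinks back to central aggregation

    home_run_cables = customer_ports_per_rack
    in_rack_switch_cables = uplinks
    spare_ports = switch_size - customer_ports_per_rack - uplinks

    print("cables leaving the rack, home-run:       %d" % home_run_cables)
    print("cables leaving the rack, in-rack switch: %d" % in_rack_switch_cables)
    print("spare switch ports left in the rack:     %d" % spare_ports)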

Maybe you can clarify your typical customer setup?

> Any standards?  Best practices?  Suggestions?  Resources, in the
> form of books, web pages, RFCs, or white papers?

I think the best thing is just to look around as much as possible, and
then see what works (and doesn't work) for you. Some of the cable and
cable-management manufacturers publish design guides as well, and the
TIA-942 data center infrastructure standard is probably worth a look.

w