Data Center Wiring Standards
blakjak at blakjak.net
Sat Sep 9 01:23:59 UTC 2006
> My thoughts go like this: We put a patch panel in each rack. Each of
> these patch panels is permanently (more or less) wired to a patch panel in
> our main patch cabinet. So, essentially what you've got is a main patch
> cabinet with a patch panel that corresponds to a patch panel in each other
> cabinet. Making connections is a cinch and only requires 3-6 foot
> off-the-shelf cables.
> Does that sound more correct?
> I talked to someone else in the office here, and they believe that they've
> seen it done with a switch in each cabinet, although they couldn't
> remember if there was a patch panel as well. If you're running 802.1q
> trunks between a bunch of switches (no patch-panels needed), I can see
> that working too, I suppose.
> Any standards? Best practices? Suggestions? Resources, in the form of
> books, web pages, RFCs, or white papers?
There's a series of ISO standards for data cabling, but nothing is yet set
in stone around datacentres. I think the issue of standards in datacentres
was touched on here some time back?
Ok, a quick google later:
TIA-942, Telecommunications Infrastructure Standard for Data Centers,
covers off a lot of the details. It's pretty new and I don't know if it's
fully ratified yet?
Based on existing cabling standards, TIA-942 covers cabling distances,
pathways and labeling requirements, but also touches upon site selection,
demarcation points, building security and electrical considerations. As
the first standard to specifically address data centres, TIA-942 is a
valuable tool for the proper design, installation and management of data
centres.
The standard provides specifications for pathways, spaces and cabling
media, recognizing copper cabling, multi-mode and single-mode fiber, and
75-ohm coaxial cable. However, much of TIA-942 deals with facility
specifications. For each space within a data centre, the standard defines
equipment planning and placement based on a hierarchical star topology for
backbone and horizontal cabling. The standard also includes specifications
for arranging equipment and racks in an alternating pattern to create
"hot" and "cold" aisles, which helps airflow and cooling efficiency.
To assist in the design of a new data centre and to evaluate the
reliability of an existing data centre, TIA-942 incorporates a tier
classification, with each tier outlining guidelines for equipment, power,
cooling and redundant components. These guidelines are then tied to
expectations for the data centre to maintain service without interruption.
The source url for the above was
You may like to see if you can track down a copy of the referenced
standard.
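The hierarchical star topology the excerpt describes can be sketched in a
few lines. A minimal Python sketch, using TIA-942's distribution-area names
(MDA, HDA, EDA); the counts and labels here are invented for illustration:

```python
# Sketch of TIA-942's hierarchical star: backbone cabling runs from the
# Main Distribution Area (MDA) out to Horizontal Distribution Areas
# (HDAs), and horizontal cabling from each HDA to Equipment Distribution
# Areas (EDAs). The specific areas below are made up for this example.
topology = {
    "MDA": ["HDA-1", "HDA-2"],      # backbone cabling
    "HDA-1": ["EDA-1", "EDA-2"],    # horizontal cabling
    "HDA-2": ["EDA-3", "EDA-4"],
}

def path_to_mda(area, topo):
    """Walk upward from an equipment area back to the MDA."""
    parents = {child: parent for parent, kids in topo.items() for child in kids}
    hops = [area]
    while hops[-1] in parents:
        hops.append(parents[hops[-1]])
    return hops

print(path_to_mda("EDA-3", topology))   # ['EDA-3', 'HDA-2', 'MDA']
```

The point of the star is that every equipment area reaches the core through
exactly one upward path, which keeps cabling runs and labeling predictable.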
From my personal POV -
You have a couple of options, depending on your switching infrastructure,
required cabling density and bandwidth requirements. One way would
be to have a decent switch at the top of each cabinet along with a fibre
tie to your core patch/switching cabinet. All devices in that rack feed
into the local switch, which could be VLAN'd as required to cater for iLO
or any other IP management requirements. The uplink would be a trunk of
1000SX, 1000LX, MultiLink Trunk combinations of same, or perhaps even
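That per-rack-switch approach boils down to a trunk config along these
lines. This is only an illustrative Cisco-IOS-style sketch; the VLAN IDs,
names and interface numbers are assumptions, not from anyone's real setup:

```
! Illustrative sketch only -- VLAN IDs and interface names are invented.
vlan 10
 name SERVERS
vlan 20
 name MGMT-ILO
!
! Access ports for devices in the rack
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
!
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 20
!
! Fibre uplink to the core, carrying both VLANs as an 802.1q trunk
interface GigabitEthernet0/24
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20
```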
The other option would be to preconfigure each rack with a couple of
rack units of fixed copper or fibre ties to a core cabinet and just patch
things around as you need to. Useful if you are in a situation where
bringing as much as possible direct into your core switch is appropriate,
and cheaper from a network hardware POV - if not from a structured cabling
one.
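The fixed-tie scheme is easy to reason about because the mapping is
one-to-one: rack panel port N lands on the matching port of that rack's
dedicated panel in the core cabinet. A minimal Python sketch of that
mapping (the rack names, panel labels and port counts are hypothetical):

```python
# Model the "patch panel per rack" scheme: each port on a rack's panel is
# permanently tied to the same-numbered port on a dedicated panel in the
# core cabinet, so a cross-connect is just a short lead at the core.

def build_tie_map(racks, ports_per_panel):
    """Map (rack, port) -> (core panel, port), one-to-one."""
    tie_map = {}
    for rack in racks:
        for port in range(1, ports_per_panel + 1):
            # Labels line up: rack r3 port 7 <-> panel core-r3 port 7.
            tie_map[(rack, port)] = (f"core-{rack}", port)
    return tie_map

tie_map = build_tie_map(["r1", "r2", "r3"], 24)

# Every tie is one-to-one: no two rack ports land on the same core port.
assert len(set(tie_map.values())) == len(tie_map)

# Connecting a device in r2 to one in r3 means patching the two
# corresponding core-panel ports together with a 3-6 foot lead.
print(tie_map[("r2", 5)])   # ('core-r2', 5)
```

Keeping that mapping written down (or generated) is most of the battle when
it comes to not losing track of cross-connects later.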
Good luck. I know what a prick it is to inherit someone else's shoddy
cable work - I find myself accumulating lots of after-hours overtime,
involving essentially ripping out everything and putting it all back
_tidily_ - and hoping that I don't overlook some undocumented cable.