Comparing various network cable types for your data center

Choosing the right network cable type is a big part of an effective data center cabling infrastructure, and quantity and quality are the key considerations in that choice.

This is the second of a two-part series on data center cabling. Read the first tip on implementing a data center cabling infrastructure.

It’s a given that data center cabling carries the computing network. But there are also many out-of-band connections to consider that, although connected to the network, are not intrinsic to the network that carries computational data. These ancillary connections may be required for monitoring power, temperature, humidity, air conditioner and uninterruptible power supply (UPS) performance, surveillance cameras, server auto-shutdown, water leak detection and many other functions that aren’t always IP-based.

Unlike servers, storage and network switches, these ancillary devices are generally low bandwidth and relatively non-critical. Does that mean different network cable types should be used for each device? That depends partly on cable topology, which dictates where copper and fiber are best employed. But for copper cable, it’s almost never cost-effective to use different Telecommunications Industry Association (TIA) category cables for different systems. A homogeneous cable plant installs more quickly, uses the same termination hardware throughout and makes every cable usable for any purpose. And because this type of plant uses the same patch cord types throughout, there’s no risk of using a low-grade patch cord for a high-performance connection.

Network cable types: The ins and outs of fiber optic cable
Fiber optic cable is a different animal, and there’s a big distinction between single-mode and multi-mode fiber. With the advent of the practical vertical cavity surface-emitting laser (VCSEL) in 1988, the development of laser-optimized multimode fiber (LOMMF) and the more recent adoption of the OM4 fiber standard, high-speed fiber connectivity is economically realistic without resorting to single-mode, particularly over the short distances typical of data centers. For now, there’s no indication that bandwidth demand inside the data center will exceed what LOMMF and VCSELs can support. But although high-grade multi-mode can handle virtually anything in the data center, there’s always concern that bandwidth demand might exceed its capabilities, particularly in the network backbone.
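To put numbers on "short distances," the sketch below checks a planned run length against commonly cited reach figures for laser-optimized multimode fiber at 10, 40 and 100 Gigabit Ethernet. The figures and the `multimode_ok` helper are illustrative assumptions for this tip, not a design tool; verify reach against current standards and your transceiver data sheets.

```python
# Rough reach check for laser-optimized multimode fiber (OM3/OM4) at common
# Ethernet speeds. Reach figures are commonly cited values in meters;
# verify against current standards and transceiver data sheets.

MM_REACH_M = {
    ("10GBASE-SR", "OM3"): 300,
    ("10GBASE-SR", "OM4"): 400,
    ("40GBASE-SR4", "OM3"): 100,
    ("40GBASE-SR4", "OM4"): 150,
    ("100GBASE-SR10", "OM3"): 100,
    ("100GBASE-SR10", "OM4"): 150,
}

def multimode_ok(standard: str, fiber: str, run_length_m: float) -> bool:
    """Return True if a planned run fits within the cited multimode reach."""
    reach = MM_REACH_M.get((standard, fiber))
    if reach is None:
        raise ValueError(f"No reach figure on file for {standard} over {fiber}")
    return run_length_m <= reach

# Example: a 90 m in-room backbone run stays comfortably within multimode reach.
for std in ("10GBASE-SR", "40GBASE-SR4", "100GBASE-SR10"):
    status = "OK" if multimode_ok(std, "OM4", 90) else "needs single-mode"
    print(f"{std} over OM4 at 90 m: {status}")
```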

Therefore, there’s a common tendency to install at least some single-mode fiber to be safe. After all, the actual fiber is fairly inexpensive; it’s the interface electronics that aren’t cheap. Many people reason that even if you never use it, single-mode is cheap insurance, and the big investment in electronics won’t be made unless it’s needed. It’s still money, however, and the more we move toward a mostly fiber infrastructure, the more unused fiber you could end up installing if you follow a rule that a certain percentage of it must be single-mode.

Single-mode fiber is primarily useful for maintaining bandwidth over long distances. Over the relatively short distances typical of data centers, it may even be necessary to use in-line optical attenuators or a lot of coiled-up cable to keep receivers from being swamped by the high-power laser transmitters. The lasers used with single-mode also consume more power, which adds unnecessary inefficiency in a large data center. Having single-mode strands in the primary trunking paths, or for network connectivity outside the data center, may help down the road. But unless prices for interface electronics decline substantially, single-mode will remain costly. Use single-mode between data centers in different buildings or in distant areas of the same building. Use high-performance multi-mode fiber for your high-bandwidth server and main backbone connections.
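For illustration, here is a minimal sketch of the power-budget arithmetic behind that caution, assuming placeholder launch-power, overload and loss figures rather than values from any real transceiver; the actual numbers always come from the manufacturer’s data sheet.

```python
# Illustrative check for receiver overload on a short single-mode link.
# All dBm/dB figures are placeholders; real values come from the
# transceiver data sheet.

def required_attenuation_db(launch_dbm: float,
                            rx_overload_dbm: float,
                            fiber_loss_db_per_km: float,
                            length_km: float,
                            connector_loss_db: float = 0.5) -> float:
    """In-line attenuation (dB) needed to keep received power at or below
    the receiver's overload point. Returns 0.0 if none is needed."""
    received_dbm = launch_dbm - fiber_loss_db_per_km * length_km - connector_loss_db
    return max(0.0, received_dbm - rx_overload_dbm)

# Example: a 60 m (0.06 km) in-room run driven by a long-reach laser transmitter.
pad_db = required_attenuation_db(launch_dbm=0.0,        # placeholder launch power
                                 rx_overload_dbm=-7.0,  # placeholder overload point
                                 fiber_loss_db_per_km=0.4,
                                 length_km=0.06)
print(f"Add roughly {pad_db:.1f} dB of in-line attenuation")
```

Over such a short run almost none of the launch power is lost, so the entire excess has to be absorbed by an attenuator or extra coiled cable, overhead that a multimode link simply avoids.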

The importance of quality
Since we still need copper for many individual server connections, it only makes sense to use the best copper available to extend the installation’s life. This doesn’t just apply to cable -- patch panels, connectors, terminations and patch cords should all be selected to the same standard. The installation must be properly tested, and the results should be reviewed against the specified performance standards. There’s no point in paying for high-performance cable only to have it marred by a sloppy installation. If you’re paying for the best network cable types, demand the best results, and that means using only compatible patch cords so you don’t degrade your investment. Amazingly, using incompatible patch cords is a common operational mistake: old, low-performance patch cords lie around because they still work, yet they prevent expensive new hardware from performing as expected.

Another growing issue is the fabrication of high-performance copper and fiber cables. Tolerances have become too tight, small errors too degrading and fiber connector densities too high to field-assemble these cables in any length. To maintain cable infrastructure quality and meet today’s performance requirements, no one should fabricate their own patch cords. Instead, an increasing number of full cable assemblies are now prefabricated to length at the factory, in both copper and fiber network cable types. This ensures quality, makes it easier to add cable in the future and relieves the concern of underestimating cable quantities -- it’s now easy to add more when needed.

Whether you use copper or fiber, the data center should have the best and fastest cabling available. It shouldn’t matter what’s used in the rest of the building; the data center is special and expensive. And even if maximum performance isn’t needed on Day 1, it most likely will be over the life of the facility. That’s why we see a higher ratio of fiber to copper in new data center designs, and why installing less than the best to save money, be it copper or fiber, is a poor economic decision. Replacing a cable plant is expensive, potentially disruptive and should be avoided for as long as possible. The following table illustrates how quickly cable performance has changed:
 

 

| YEAR | STANDARD | I.D. | CABLE | SPEED |
|------|----------|------|-------|-------|
| 1990 | IEEE 802.3i | 10Base-T | Cat. 3 UTP | 10 Mbit/sec |
| 1991 | ANSI/EIA/TIA 568 | | Cat. 3 UTP | |
| 1992 | TSB 36 | | Cat. 4 & 5 UTP | |
| 1993 | IEEE 802.3j | 10Base-F | MM fiber | 10 Mbit/sec |
| 1995 | IEEE 802.3u | 100Base-TX | 2-pair Cat. 5 | 100 Mbit/sec |
| 1995 | IEEE 802.3u | 100Base-T4 | 4-pair Cat. 5 | 100 Mbit/sec |
| 1995 | IEEE 802.3u | 100Base-FX | MM fiber | 100 Mbit/sec |
| 1998 | IEEE 802.3ab | 1000Base-T | Cat. 5 UTP | 1 Gbit/sec |
| 2001 | ANSI/EIA/TIA 568-B.2 | | Cat. 5e | 1 Gbit/sec |
| 2002 | ANSI/EIA/TIA 568-B.2-1 | | Cat. 6 | 10 Gbit/sec |
| 2002 | ISO/IEC 11801 | OM1 | MM fiber | |
| 2003 | IEEE 802.3ae | 10GBase-SR, -LR, -ER, -SW, -LW, -EW | LOMM fiber | 10 Gbit/sec |
| 2008 | ANSI/EIA/TIA 568-B.2-10 | | Cat. 6A | |
| 2009/2010 | IEEE 802.3ba; TIA-492-AAAD | OM4 | LOMM or SM fiber | 40 Gbit/sec, 100 Gbit/sec |
| 2010 | IEEE 802.3ba | | 4-pair Cat. 6A UTP | 40 Gbit/sec |
| 2010 | IEEE 802.3ba | | 10-pair Cat. 6A UTP | 100 Gbit/sec |

Table 1: Chronology of major cable technology developments
 

Physical design and quantities
There are four major contributors to data center cabling challenges today:

  1. Multiple network connections from each server -- some copper, some fiber
  2. Network switches with higher port count densities
  3. Differing storage topologies depending on manufacturer and protocol
  4. Changing cable standards to meet demands for ever higher speed

End-of-row consolidation addresses most of these needs, but it comes with two drawbacks: deciding how much cable to install in each cabinet, and the size and cost of the server access consolidation switches.

A standard cabinet can hold 42 1U servers, and each server can have three or more connections. There can also be power and temperature monitoring in the cabinet and cipher locks for security. Should there be six 24-port patch panels in every cabinet to support the highest possible number of connections? Not likely, but there’s no way to accurately predict the number of connections ultimately needed in every cabinet, and it’s restrictive to designate cabinets for specific purposes and cable them differently.
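As a back-of-the-envelope check on those numbers, the sketch below uses the figures from the text (42 1U servers at three connections each) plus a small allowance for monitoring and security devices; the allowance of four ancillary ports is purely illustrative.

```python
# Worst-case copper port count for a fully loaded cabinet, using the figures
# from the text plus a small assumed allowance for ancillary devices.

import math

servers_per_cabinet = 42
connections_per_server = 3
ancillary_connections = 4      # power/temperature monitoring, locks, etc. (assumed)
patch_panel_ports = 24

ports_needed = servers_per_cabinet * connections_per_server + ancillary_connections
panels_needed = math.ceil(ports_needed / patch_panel_ports)

print(f"Connections needed: {ports_needed}")                   # 130
print(f"24-port panels for the worst case: {panels_needed}")   # 6
```

That worst case is where the six-panels-per-cabinet figure comes from, and it illustrates why cabling every cabinet for it is rarely justified.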

It’s popular to pick a realistic “middle number,” which usually entails installing more cable than required. That can be expensive and hard to justify, but it’s still cheaper than the cost of redundant, chassis-type access switches in each cabinet row with enough ports to match the cable count. Virtualization and consolidation can even exacerbate the situation by creating higher server and cable densities.

Moving to top-of-cabinet consolidation is more flexible than end-of-row because it’s relatively economical to install empty fiber light boxes and fill them only as necessary. Whether you have LC fiber connections, local switches or pre-terminated cable with Multi-fiber Push-On (MPO) connectors, pre-terminated fiber can be added quickly and easily without the mess that accompanies field installation and termination of individual fibers. With 12 strands in a single connector, you can add a lot of capacity very quickly. And once the manufacturer determines lengths from a scale drawing of the data center, ordering additional runs is quick and easy.
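As a quick illustration of that capacity, the sketch below counts duplex links per 12-strand MPO trunk and the number of trunks needed for a given cabinet requirement. It assumes two strands per duplex link and ignores parallel-optics lane assignments, which consume strands differently.

```python
# Capacity math for pre-terminated 12-strand MPO trunks, assuming ordinary
# duplex links (two strands each). Parallel optics such as 40/100 Gigabit
# lanes use strands differently and are not modeled here.

import math

STRANDS_PER_MPO = 12
STRANDS_PER_DUPLEX_LINK = 2

def trunks_needed(duplex_links: int) -> int:
    """Number of MPO trunks required to serve the requested duplex links."""
    links_per_trunk = STRANDS_PER_MPO // STRANDS_PER_DUPLEX_LINK  # 6 per trunk
    return math.ceil(duplex_links / links_per_trunk)

print(trunks_needed(10))  # 2 trunks cover 10 duplex links, with room to spare
```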

In the end, deciding on the data center cabling approach and density is always challenging. Too little cabling fails to support requirements, leading to ad hoc cabling that grows and never goes away. But excessive cabling can become very expensive and difficult to justify. Modern approaches can simplify the problem, but it still takes thought and planning. Flexibility is one of the most important considerations in data center design, and cabling is central to achieving it.

Douglas Smith, principal and manager of IT consulting, and Edward Ruggiero, senior associate at Shen Milsom & Wilke, contributed to this tip.

ABOUT THE AUTHOR: Robert McFarlane is a principal in charge of data center design for the international consulting firm Shen Milsom Wilke. McFarlane has spent more than 30 years in communications consulting, has experience in every segment of the data center industry and was a pioneer in developing the field of building cable design. McFarlane also teaches the data center facilities course in the Marist College Institute for Data Center Professionals program, is a data center power and cooling expert, is widely published and speaks at many industry seminars.

Ed Ruggiero is a senior consultant with Shen Milsom Wilke and holds BICSI's professional designation of Registered Communications Distribution Designer (RCDD).

Douglas Smith is a principal of Shen Milsom Wilke and manager of the IT practice. Smith is the senior network designer as well as a technical resource to the system integration teams.

This was first published in February 2011
