Today's data centers face a very real and pressing problem: the likelihood of running out of power. The culprits are servers and switches that, while much smaller than their predecessors, draw far more power. Packing more of this high-density computing gear into a smaller space can push power density and heat to critical levels, often making it impossible to cool existing equipment effectively, causing serious system malfunctions and leaving no headroom to plug in anything else.
The data center power problem has intensified in recent years and will likely continue to worsen. In 1998, heat loads for dense rack-mount servers hovered around 5,000 W per rack; by 2006, that figure had climbed to 32,000 W per rack. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has predicted power densities of 42,000 W per rack by 2014. Because of these heat issues, more and more data centers sit physically underpopulated yet still cannot deliver enough power to meet equipment requirements.
What can organizations do to prevent a data center power crisis? In this tip, I'll offer suggestions for making sure your data center has enough power to operate at optimal capacity while keeping heat levels down.
Consider server consolidation. Studies have found that most servers use only about 21% of their total capacity. Server consolidation through virtualization can significantly reduce energy demand and power consumption, while also delivering substantial savings in hardware, maintenance, support services and data center floor space. Virtualization also allows applications to be deployed more quickly and efficiently, without outages or service interruptions.
Another option for data center consolidation is using storage area network (SAN) technology to replace traditional servers with internal and attached disks. Like virtualization, this approach translates into significant savings due to the ability to dynamically allocate additional storage capacity without service interruption.
Assess power and cooling capabilities. Conduct an inventory of your data center's power distribution system and cooling capabilities, establishing where each component is in its useful lifecycle. Data center managers often find that up to 50% of the energy they pay for is dissipated as heat by inefficient equipment. In addition, many data centers are still running inefficient, outdated equipment.
For example, most data centers' uninterruptible power supply (UPS) systems are older, less efficient models that have likely been in service for at least 10 years. Newer UPS systems are far more efficient, include management and administrative tools that let data center personnel react to issues affecting the UPS and its internal components, and manage battery life better than their predecessors.
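A rough annual-loss calculation shows what UPS efficiency is worth. The efficiency figures below (85% for an older unit, 95% for a modern one) are illustrative assumptions, not vendor data, and the calculation assumes a constant IT load:

```python
def annual_ups_loss_kwh(it_load_kw, efficiency):
    """Energy lost inside the UPS per year for a constant IT load."""
    input_kw = it_load_kw / efficiency       # power drawn from the utility
    return (input_kw - it_load_kw) * 8760    # 8,760 hours in a year

# A constant 100 kW IT load:
legacy = annual_ups_loss_kwh(100, 0.85)   # older unit: ~154,600 kWh/year wasted
modern = annual_ups_loss_kwh(100, 0.95)   # newer unit: ~46,100 kWh/year wasted
print(round(legacy - modern))             # roughly 108,000 kWh/year saved
```

At typical commercial electricity rates, a difference on that order can pay back a UPS refresh well within the equipment's service life.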
Optimize your data center design. The traditional data center environment has a raised floor that is statically cooled. This format worked well for the lower-density servers of the past, but today's high-density blade servers often create severe hot spots in these traditional, outdated environments. A majority of data centers simply don't have enough power or cooling to accommodate newer, high-density technologies. Data center operations managers are challenged with finding additional capacity to meet their needs, and must either bring more power into their data centers or decommission existing systems to make capacity available for newer, more efficient systems.
Use data center space wisely. Examine physical space planning methodologies and update your data center's physical footprint to match best practices for rack placement, equipment zoning and centralized distribution of media. Using a zoning methodology is a more efficient use of space and allows for the ideal amount of power and cooling to be delivered to the appropriate area. This approach involves segmenting your data center into zones based on the types of equipment that will be deployed.
Turn down the heat. Installing products such as chilled-water cooling units or air conditioners, and implementing hot/cold air stream separation or an air containment system, will help keep temperatures at sensible levels. These products and systems also prevent recirculated exhaust air from damaging sensitive IT equipment. Cooling a data center and improving its airflow extends the facility's useful life, increases IT equipment reliability and reduces downtime. Many data center cooling systems and products available today are low cost and energy efficient.
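To size cooling against the rack densities discussed earlier, the standard conversion is 1 ton of refrigeration = 12,000 BTU/h ≈ 3.517 kW. The sketch below assumes, conservatively, that every watt of IT power ends up as heat; the rack count is an illustrative assumption:

```python
KW_PER_TON = 3.517  # 1 ton of refrigeration = 12,000 BTU/h

def cooling_tons_required(rack_kw, racks):
    """Minimum cooling capacity, assuming all IT power becomes heat."""
    return rack_kw * racks / KW_PER_TON

# Ten racks at the 2006-era 32 kW-per-rack density cited above:
print(round(cooling_tons_required(32, 10), 1))  # ~91.0 tons
```

Numbers like this make the capacity problem concrete: a single row of high-density racks can demand more cooling than an older facility was designed to deliver in total.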
Taking these five simple suggestions into consideration will improve your data center's agility, make IT more responsive to the needs of the business and minimize the risk of experiencing a power crisis.
ABOUT THE AUTHOR: Vincent Minerva is a senior data center services consultant at Dimension Data Americas. He has prior experience working in data center implementation, vendor management, infrastructure support and recovery management.
What did you think of this feature? Write to SearchDataCenter.com's Matt Stansberry about your data center concerns at email@example.com.