BACKGROUND IMAGE: iSTOCK/GETTY IMAGES
An aging data center may no longer be able to meet the power, cooling and structural demands of advancing technologies, but few businesses have the time or the capital to build new facilities.
Fortunately, organizations can extend the working life of their data center by renovating the facility with targeted changes, some of which cost little or nothing. Data center upgrades allow a business to adopt new standards and improve existing infrastructure, introducing new technologies with better performance and greater efficiency.
There are five fixes and data center design changes that can extend the life of an aging facility.
Elevate your data center operating temperature
The data center's working temperature has long been a subject of myth and legend, but research and initiatives from industry organizations such as the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) have found that data centers don't need to be cooled like meat lockers. Modern servers and other computing equipment can operate reliably at elevated temperatures.
A 2008 ASHRAE document recommended a temperature range from 65 to 80 degrees Fahrenheit for Class 1 data center equipment. Recommendations in 2011 broadened the allowable temperature range from 59 to 90 degrees Fahrenheit for enterprise-class servers and 41 to 113 degrees Fahrenheit for appropriately designed servers and other equipment.
It costs nothing to raise the data center's temperature to the higher operating points that ASHRAE standards allow, and doing so yields considerable wear and energy cost savings for existing mechanical cooling systems. In addition, the extended temperature range also makes it possible to adopt alternative or supplemental cooling schemes (at least during certain parts of the day), such as free air or air/water economizers -- cooling technologies that might not have even been considered when your data center was first built.
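A back-of-envelope estimate can make the case for raising the setpoint. The sketch below uses a commonly cited rule of thumb of roughly 4% cooling-energy savings per degree Fahrenheit raised; the rule of thumb and the input figures are illustrative assumptions, not ASHRAE figures or measurements from any particular facility.

```python
# Rough estimate of annual cooling-energy savings from raising the
# cooling setpoint. The ~4% savings per degree F is a commonly cited
# rule of thumb; actual results vary widely by facility.

def cooling_savings_kwh(annual_cooling_kwh, old_setpoint_f, new_setpoint_f,
                        savings_per_degree=0.04):
    """Estimate kWh saved per year by raising the cooling setpoint."""
    degrees_raised = new_setpoint_f - old_setpoint_f
    fraction_saved = min(degrees_raised * savings_per_degree, 1.0)
    return annual_cooling_kwh * fraction_saved

# Hypothetical example: a 500,000 kWh/year cooling load,
# setpoint raised from 68 F to 75 F (7 degrees * 4% = 28%)
saved = cooling_savings_kwh(500_000, 68, 75)
print(f"Estimated savings: {saved:,.0f} kWh/year")
```

Even a crude model like this is enough to compare a free setpoint change against renovations that require capital.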
Upgrade servers and systems for better consolidation and efficiency
Servers consume the majority of energy in a data center -- primarily in the processors and memory components. Organizations can gain significant energy efficiency by upgrading servers to more efficient models during normal technology refresh cycles where capital is already budgeted, or even by adopting integrated infrastructures like Cisco's Unified Computing System (UCS).
"Vendors have made major strides in power consumption and operating parameters for blades, UCS and other infrastructure systems," said Chris Steffen, principal technical architect at Kroll Factual Data. "And nearly all [new servers] are significantly easier to manage than standalone pizza boxes of the early days of rackmount systems."
For example, an older server using Intel Corp.'s dual-core Xeon 7130M processor dissipates 150 watts, while a server using the six-core Xeon L7455 dissipates up to only 65 watts. By upgrading the server, a data center gains four processor cores for more computing power, yet cuts processor power use by more than half. Newer servers also provide improved cooling components, such as variable-speed fans, and incorporate superior power-saving modes that reduce energy use and heating even further.
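The arithmetic behind that comparison is worth making explicit. This sketch uses the TDP and core counts quoted above; the per-core figures are illustrative only, since real power draw varies with workload.

```python
# Back-of-envelope comparison of the two processors discussed above.
# TDP figures are from the article; per-core math is illustrative.

old_tdp_w, old_cores = 150, 2   # Xeon 7130M (dual-core)
new_tdp_w, new_cores = 65, 6    # Xeon L7455 (six-core)

power_reduction = 1 - new_tdp_w / old_tdp_w
print(f"Processor power cut by {power_reduction:.0%}")
print(f"Watts per core: {old_tdp_w / old_cores:.0f} -> "
      f"{new_tdp_w / new_cores:.1f}")
```

The drop from 75 W per core to roughly 11 W per core is the real efficiency story: more work per watt, not just fewer watts.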
And there are other considerations. The newer server may also provide greater amounts of memory, allowing a virtualized server to provide much higher levels of consolidation than earlier servers. This means the same amount of computing work can be done with far fewer servers, saving equipment capital and generating only a fraction of the heat for a data center's cooling system to contend with.
Change the system layout and rack layout for power and cooling efficiency
Take a close look at the layout of your data center equipment and look for ways to improve power and cooling efficiency.
Suppose you had a traditional data center where a large computer room air-conditioning unit (CRAC) cooled the room. Now imagine that a server refresh and consolidation project slashed the number of servers by 75%. With just a quarter of the original server count in this example, it may be possible to rearrange the remaining servers in far fewer racks and use containment to enclose the remaining servers. This limits the air volume that must be cooled, significantly reducing the amount of mechanical cooling needed and allowing for alternative cooling technologies.
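The consolidation scenario above can be quantified with a quick sketch. All of the inputs here (server count, average draw, servers per rack) are hypothetical round numbers chosen to illustrate the 75% reduction, not measurements from the article.

```python
# Illustrative consolidation math for a 75% server reduction and the
# resulting rack-count and heat-load shrinkage. All inputs are
# hypothetical assumptions for the sake of the example.

servers_before = 400
reduction = 0.75
servers_after = int(servers_before * (1 - reduction))

watts_per_server = 350                 # assumed average draw per server
heat_before_kw = servers_before * watts_per_server / 1000
heat_after_kw = servers_after * watts_per_server / 1000

racks_before = servers_before // 20    # assume ~20 1U servers per rack
racks_after = -(-servers_after // 20)  # ceiling division

print(f"Servers:      {servers_before} -> {servers_after}")
print(f"Racks:        {racks_before} -> {racks_after}")
print(f"IT heat load: {heat_before_kw:.0f} kW -> {heat_after_kw:.0f} kW")
```

Shrinking the footprint from 20 racks to five is what makes containment practical: a small, enclosed hot aisle is far cheaper to cool than an entire open room.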
"There is more use of in-row solutions," said Robert McFarlane, principal at Shen Milsom and Wilke LLC, a consulting and technology design firm based in New York. "The cooling can be located where the problem actually is, [which] helps ensure that the renovation will end up doing what was intended."
In other cases, reworking the electrical cabling, network cabling and water lines that cross below the floor can make under-floor cooling more effective.
A poorly designed and haphazard layout can obstruct cooling air distribution, making more work for the mechanical cooling unit. In addition, any water distribution increases the potential for damage to electrical and network wiring, so many organizations opt to route electrical and network wiring overhead -- leaving water lines under-floor -- and may even upgrade network cabling to allow for future bandwidth improvements.
Don't overlook the rack space itself. For example, fully populating racks can concentrate more equipment in less space, making any containment -- and associated cooling -- more effective. And some racks may not be deep enough to accommodate new generations of computing equipment. This can lead to wiring congestion and airflow problems.
"Many of the messes in data centers are because deeper equipment with far more cable connections just doesn't fit in the cabinets anymore," McFarlane said. "So the doors are open, cables are hanging out and airflow is blocked."
Consider supplemental or alternative cooling schemes
Mechanical heating, ventilation and air conditioning (HVAC) systems are a staple of the modern data center, but they are also costly, energy-hungry and a potential single point of failure for data center availability. If the cooling system fails, a data center can overheat in a matter of minutes.
Data center renovations often focus on ways to supplement or replace traditional mechanical cooling with alternative equipment or methods that are enabled by higher operating temperatures, better containment and less equipment.
Popular alternative cooling approaches include chilled water heat exchangers (water economizers), evaporative cooling and even free air cooling (air economizers).
PTS Data Center Solutions Inc., for example, is renovating its own cooling system.
"We're upgrading the cooling system to a very efficient chilled-water, in-row approach," said Peter Sacco, president of PTS.
These methods, however, require affordable environmental resources that are suited to the task and available for much of the day. For example, using cold lake water to drive a water economizer requires a nearby lake. In many cases, these alternative methods are added to supplement traditional HVAC, lowering run times and power needs.
Organizations that must continue using HVAC are taking a fresh look at the cooling system's capacity and efficiency. The potential problem is that a large, aging HVAC system runs even less efficiently when lightly loaded; simply easing the cooling load on a legacy HVAC system might actually cost more and be harder on the mechanical equipment. For this reason, raising operating temperatures and reducing the amount of computing equipment may justify downsizing to a smaller cooling system.
"That usually means upgrading CRAC or [computer room air handler] to units with variable speed fans, and probably with electronically commutated motors and plug fans instead of centrifugal fans," McFarlane said. "Those will all make big improvements in energy efficiency and probably in cooling effectiveness as well."
Consider availability and reliability issues in power distribution
Data centers run the business and they must always be available, so consider backup power systems and the capacity of backup power equipment.
Upgrading the uninterruptible power supply (UPS) systems to a newer model can improve UPS energy efficiency and provide more intelligent power monitoring/measurement capabilities that complement a data center infrastructure management scheme.
"When a UPS is replaced, it is hopefully with a higher efficiency system, and may also become a redundant [N+1] configuration and possibly even a modular or incremental capacity solution," McFarlane said. He cautioned that power equipment upgrades may spawn broader wiring and distribution upgrades in older buildings.
It is also a common practice to upgrade in-rack power distribution units (PDUs) to add intelligent power management, along with rack temperature and humidity monitoring. With UPS and PDU upgrades together, an organization can gather energy use data and make more informed decisions about power costs in the data center.
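The kind of decision-making that intelligent UPS and PDU data enables can be sketched simply: turning per-rack energy readings into a cost breakdown. The rack names, readings and utility rate below are all hypothetical.

```python
# Sketch of using per-rack kWh readings (as reported by intelligent
# PDUs) to produce a monthly cost breakdown. All rack names, readings
# and the tariff are hypothetical example values.

rack_kwh_per_month = {"rack-01": 2_100, "rack-02": 1_750, "rack-03": 900}
cost_per_kwh = 0.12  # assumed utility rate, USD

total_kwh = sum(rack_kwh_per_month.values())
for rack, kwh in sorted(rack_kwh_per_month.items(),
                        key=lambda kv: kv[1], reverse=True):
    share = kwh / total_kwh
    print(f"{rack}: {kwh:,} kWh ({share:.0%}) -> ${kwh * cost_per_kwh:,.2f}")
print(f"Total: {total_kwh:,} kWh -> ${total_kwh * cost_per_kwh:,.2f}")
```

With this data in hand, an organization can attribute power costs to specific workloads and spot racks worth consolidating first.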
Finally, consider the availability of data center power.
Organizations with aging, unreliable or overtaxed power grids may consider local cogeneration options to ensure uninterrupted power. Traditional diesel generators are quickly giving way to more efficient and environmentally friendly alternatives, including solid oxide fuel cells such as Bloom Energy Servers or solar arrays to produce some amount of local electricity. If it's not possible to install local cogeneration on-site, it may be possible to contract with regional cogeneration providers for supplemental electricity.
These are just a few of the most popular tactics available to renovate your aging data center, but there are countless other considerations, depending on your particular circumstances and business goals. Perhaps the most important (and overlooked) bit of advice for any data center renovation is to consider the services of a consultant with proven experience in extending the life of current IT facilities. Such experts can help you identify initiatives that will provide the greatest benefit for your business -- and navigate the even-more-complicated issues of permitting, code compliance and inspections during the renovation process.