Foxboro, Mass. -- Operating temperatures, capacity concerns and PUE are hot topics. Even when data center best practices exist in these areas, they aren't always obvious.
Cooling, power, systems management and other best practices dominated the talk at the 2015 AFCOM Symposium this month. Here are five tips offered up by presenters and attendees on improving energy efficiency and data center design, and on integrating monitoring and management systems into existing facilities.
"Look for a DCIM system to talk to your existing control system."
David Plamondon, data center operations architect, University of Massachusetts Medical School
Fitting a data center infrastructure management (DCIM) system to an existing building management system (BMS) can be challenging. But when full integration isn't an option, it's important to choose systems that will work together.
The UMass Medical School was unable to integrate DCIM into the existing BMS in an older facility, Plamondon said. When integration is not an option, he suggests finding a DCIM system that shares all of the same data with your BMS. This way, data is available without the need for workarounds.
"Beware of partial or lowest PUE declarations."
Dennis Julian, principal director of engineering, Integrated Design Group, Inc.
When discussing how to design your data center for the future, Dennis Julian highlighted power and efficiency.
Julian discouraged equating power usage effectiveness (PUE) with sizing electrical systems. Depending on whether you have custom or commercial equipment, you can see a nine-point improvement in a remote terminal unit (RTU) system's PUE, he said. An RTU is a data center device that uses a form of power supply to collect data and transmit it to a central station. PUE is the ratio of total facility power to IT equipment power; the ideal is 1.00, which indicates that all the power consumed by the data center goes to IT equipment, and the figure climbs as more power goes to cooling and other overhead.
While the business may be concerned with lowering the PUE, a lower number does not represent total data center efficiency.
"PUE doesn't define maximum cooling capacity required," Julian said.
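The ratio behind these numbers is simple to compute. The sketch below uses illustrative figures, not values from the article, to show how PUE is derived from total facility power and IT load:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power.

    A value of 1.00 means every watt the facility draws reaches IT gear;
    anything above that is cooling, lighting and other overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,500 kW in total to power a 1,000 kW IT load:
print(round(pue(1500, 1000), 2))  # 1.5
```

Note that nothing in this ratio says how much cooling capacity the facility must provision, which is Julian's point about partial PUE declarations.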
"Equipment can handle higher temperatures, we're just too afraid to test it."
Dennis Julian, Integrated Design Group, Inc.
No one wants to operate IT equipment at too hot a temperature; it's inefficient and may cause premature failure. So it has become common practice for data center staff to run their equipment at a lower temperature than necessary.
Hot ambient temperatures in the exhaust aisle, as high as 104 degrees Fahrenheit (40 degrees Celsius), affect a lot of data center equipment beyond servers and switches, such as smoke detectors and sprinkler heads, lighting systems, refrigerant-based cooling and cabling systems. Hot ambient temps also affect OSHA time limits for working in a facility, Julian said.
For cold aisles, Julian recommends a server inlet temperature in the mid-70s Fahrenheit; above 75 degrees Fahrenheit, fan speeds increase, he said. At that setpoint, you operate with a workable ambient temperature that is efficient but not too hot.
"Some servers go absolutely crazy when you increase the operating temperature, with fans spinning wildly and using more power."
Victor Avelar, senior research analyst, Data Center Science Center, APC by Schneider Electric
Warmer operating temperatures are no blanket data center best practice, warned Avelar, citing his company's research on temperature and power use in three locations: Chicago, Seattle and Miami.
When you reduce chiller use, and fans ramp up, servers consume more power. You might lower facility PUE, because more of the energy going into the data center is used by the servers and not the chillers, but you aren't being more energy efficient, Avelar said.
Schneider Electric found two of the three scenarios cost more to operate at higher temperatures, with different approaches to running the fans. And hotter didn't mean better: The lowest total cost of ownership (TCO) scenario in Chicago was with inlet temps at 76 degrees Fahrenheit (24 degrees Celsius), and TCO increased along with the thermostat. The study also concluded that there isn't a good predictive model for failure rates on servers that operate in higher inlet temperatures for only some portion of the year, not year round.
"We used to think about data centers from the inside out, and now we reverse that."
Joseph Higgins, vice president of engineering, Fidelity Investments
The old way of building data centers was to construct a 100,000-square-foot facility and slowly fill it, never reaching full capacity, Higgins said. That approach wastes money, time and resources.
In his keynote speech, Higgins addressed being responsive with capacity and not wasting energy. Fidelity Investments investigated a way to add capacity and avoid stranded assets, adopting on-demand, responsive capacity via prefabricated data center space that is assembled on site. Currently, Fidelity operates a 12-megawatt data center footprint, and it is shrinking power use even as it grows capacity.
Sharon Zaharoff is the assistant site editor for SearchDataCenter.com. You can reach her on Twitter at @DataCenterTT.
Meredith Courtemanche is the senior site editor for SearchDataCenter. Follow @DataCenterTT for news and tips on data center IT and facilities.