For the past three years, green computing has been on the IT agenda, and many companies have implemented best practices to become more energy efficient. Data centers now use hot-aisle/cold-aisle layouts and install blanking panels to avoid mixing hot and cold air. IT pros have consolidated workloads with server virtualization, boosting server utilization from single-digit percentages.
Nonetheless, as CPU and storage demand outstrips these gains, IT energy consumption grows unabated. Incremental improvements in cooling and server efficiency won't be enough to avoid the coming power and carbon crisis. This special report, with links to resources, explains why data center energy use is about to hit the wall and illustrates the next steps companies can take to go beyond the basics. Additional resources on data center power consumption can be found here.
TABLE OF CONTENTS
I. Data center energy hits the wall
II. Next steps in data center design
III. What can IT do about the problem?
I. Data center energy hits the wall
Here are some statistics from a recent McKinsey & Co./Uptime Institute report on data center energy use:
- Data center energy use doubled between 2000 and 2006. And by 2012, it's expected to double again.
- For many industries, data centers are one of the largest sources of greenhouse gas emissions.
- Between now and 2010, U.S. demand for data center energy will require the equivalent of 10 new power plants.
These numbers might not mean much on paper, but together these factors will coalesce into a spike in utility prices that drastically changes our patterns of energy use. For starters, during the next presidential administration, climate change legislation and carbon emission regulation will become a reality in the U.S. Each candidate has pledged to enact global-warming legislation, and the bipartisan America's Climate Security Act of 2007, currently in Senate committee, could be voted on as soon as June 2008.
This legislation would impose a cap-and-trade system on utility companies, raising the price of power an estimated 20% across the board. Data center operators already constrained by huge power bills are in for even more sticker shock at the meter in the future. Click here for more on cap and trade's impact on the data center.
While proposed legislation would require utility companies only to track and reduce carbon emissions, some data center managers like Michael Manos at Microsoft expect the Environmental Protection Agency (EPA) to start monitoring the carbon emissions of all U.S. businesses.
"There are massive efforts afoot in government and regulatory agencies," Manos said in an interview at AFCOM. "The CEO of the company will have to start reporting carbon emissions and energy usage, and that [responsibility] is going to be shifted to the IT department that maintains the data centers. Most data center professionals haven't thought about this today. …It's not a question of if, but when, it is coming and what metrics will be required to report on this. It'd be far better for the people who run and operate data centers to come up with metrics that mean something."
And as mentioned, if the McKinsey/Uptime estimates are correct, the U.S. will need to build 10 new power plants over the next two years. The permitting process for these projects won't even be complete by 2010, let alone the power production. According to Andrew Fanara, who heads the Energy Star product team at the EPA, the data center industry could face a power shortage, which would increase energy costs further. Data centers are tethered to the grid -- and as energy issues become more challenging, there is risk for any business that depends on the grid. For more on the topic, listen to this podcast on energy security with Fanara.
II. Next steps in data center design
Faced with these challenges, an incremental approach to data center energy efficiency isn't going to cut it -- which is why forward-thinking companies now architect radically different data centers from those of just a few years ago.
Beyond blanking panels: Ultra-efficient data center cooling. For years, hot-aisle/cold-aisle, raised-floor cooling has been the standard method of cooling servers. But forced-air cooling is one of the least efficient systems in data center infrastructure, and wasting cold air is the most common mistake in data center management. You can set up hot aisle/cold aisle, install blanking panels, and seal gaps in the floor, but you probably still waste cold air in places you wouldn't expect.
Some experts and data center pros have taken the same thermodynamic principles behind hot-aisle/cold-aisle to the extreme. One way to address wasted cold air is better containment either at the hot-air return or the cold aisle, using plenum systems to contain hot or cold air in a closed-loop system.
Jeff Lowenberg, the vice president of facilities at the Houston-based hosting company the Planet, has taken steps to create a more isolated hot-aisle/cold-aisle design. He extended the return-air plenums on computer room air conditioning (CRAC) units. "Hot air naturally rises, so the CRAC units are sucking air in from the hottest part of the data center," Lowenberg explained. "By extending the plenums higher, it ensures that the CRAC units are not sucking in any cold air from the cold aisles, as it allows for the hottest air to be sucked into the units. In this scenario, the top of the plenums must be at least 2 feet from the ceiling."
While this system is not a closed loop, for the Planet's next data center buildout, Lowenberg plans to extend the plenums through the ceiling, using the area above the ceiling tiles as a return air path. Click here for photos and a full report on the Planet's data center operations.
Economizers gaining traction in data center industry. There are two methods for using outside-air temperatures to remove heat from your data center: air-side economizers, which actually bring cold outside air into the facility, and water-side economizers, which use outside air temperatures to replace chiller systems.
Unfortunately, most data centers don't take advantage of free cooling, either because they aren't located in a region that stays cold long enough for the system to pay for itself or because they lack the automation to manage going on and off the plate-and-frame heat exchanger. Luckily, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Technical Committee 9.9 has considered expanding the recommended temperature and humidity range for servers, allowing users to benefit from economizers during more days of the year.
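One way to gauge whether an economizer would pay for itself is to count how many hours per year outside air falls below your maximum allowable supply temperature. The sketch below is a simplified illustration of that idea; the temperature list and the 72/77-degree limits are made-up stand-ins for a real 8,760-hour weather file and your site's actual set points:

```python
def free_cooling_hours(hourly_temps_f, max_supply_temp_f):
    """Count the hours when outside air is cool enough to use directly.

    A higher allowable supply temperature -- as an expanded ASHRAE
    recommended range would permit -- yields more economizer hours.
    """
    return sum(1 for t in hourly_temps_f if t <= max_supply_temp_f)

# Toy data: four sample hours instead of a full year of weather readings.
sample = [55, 68, 75, 80]
print(free_cooling_hours(sample, 72))  # 2 hours usable at a 72 F limit
print(free_cooling_hours(sample, 77))  # 3 hours if the limit is raised
```

Running the same weather data against a higher temperature limit shows directly how a relaxed server inlet range translates into more free-cooling hours.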
Also, more companies are getting on board with economizers and proving they work. Foster City, Calif.-based hosting company Equinix Inc. touted economizers at the 2008 AFCOM Data Center World conference. And Joe Parrino, the data center manager of United Parcel Services of America Inc.'s Windward data center in Alpharetta, Ga., has also used water-side economizers with huge success.
Lastly, computational fluid dynamics (CFD) tools are becoming more prevalent in data centers and can have a huge impact on cooling efficiency. News writer Mark Fontecchio recently published a special report on CFD tools and current market offerings. SearchDataCenter.com also published a new study from IDC Architects outlining how to use CFD analysis on the outside of a building to evaluate whether economizers are a good fit for your data center.
III. What can IT do about the problem?
The IT administrator can have a huge impact on energy consumption, but there have to be incentives for IT to address the problem. So far, the incentives have been weak. In 2006, VMware Inc. partnered with California utility companies to offer rebates for server virtualization projects, but SearchServerVirtualization.com reports that the Northern California utility Pacific Gas & Electric Co. (PG&E) has given out only four rebates so far.
Implementing server virtualization can still result in significant savings. VMware and PG&E estimate that direct energy savings for each server removed via server virtualization run between $300 and $600 per year.
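Using the $300-to-$600 range above, a back-of-the-envelope savings estimate is simple arithmetic. The 50-server consolidation in this sketch is a hypothetical example, not a figure from the VMware/PG&E estimates:

```python
def virtualization_savings(servers_removed, low_per_server=300.0,
                           high_per_server=600.0):
    """Annual direct energy savings range for physical servers retired
    via virtualization, using the $300-$600 per-server-per-year estimate."""
    return (servers_removed * low_per_server,
            servers_removed * high_per_server)

# Hypothetical project: consolidating 50 physical servers onto VMs.
low, high = virtualization_savings(50)
print(f"${low:,.0f} to ${high:,.0f} per year")  # $15,000 to $30,000 per year
```

Note that this counts only direct energy savings; avoided cooling load and deferred capital spending would add to the total.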
Looking for efficiency at the operating system level. IT pros also look for energy savings at the operating system level by shutting down servers when they're idle or running them more efficiently. Intel has partnered with the Linux community to launch the LessWatts.org site, which offers downloadable power diagnostic tools, white papers, frequently asked questions, and user-contributed tips and ideas. SearchEnterpriseLinux.com recently reported on the initiatives in the open source community on LessWatts.org.
Microsoft recently built new power management features into Windows Server 2008 to allow servers to throttle down when they're not fully utilized.
With all these individual operating system features becoming available, it's only logical that OS providers would begin competing on efficiency, but as news writer Pam Derringer discovered, pinning down energy-efficiency ratings on operating systems is trickier than it seems.
Specifying efficient servers. For the past several years, IT executives have operated on the idea that it's OK to continue buying cheap hardware. Meanwhile, facilities executives face double-digit growth rates of energy-guzzling, poorly utilized servers showing up on raised floors. But now that capacity planners have hit a wall in data centers across the U.S., the paradigm is shifting.
When it comes to buying servers, performance is still king, followed by price. But energy efficiency is gaining ground in the conversation, thanks to new specifications that allow users to quantitatively measure performance per watt.
Just as the EPA has created Energy Star labels for desktop computers and ceiling fans, it is now finalizing a label for servers. Server energy consumption and performance would be measured and tested by third parties, and the top 25% of energy-efficient models would garner the Energy Star label. The determination would enable data center managers to weigh energy efficiency alongside other factors in their purchasing decisions. The EPA recently released its draft of the server specifications for its Energy Star program for servers.
Measurement: The holy grail of data center energy efficiency. What kind of metrics do you need to manage data center energy consumption? It doesn't have to be a 28-point formula. Experts advise data center managers to pick a simple measurement, create a ratio and improve on it.
"Pick something, any metric, and get started," said Pitt Turner, principal, senior project leader and president of Computer Site Engineering Inc. "Even if you pick the wrong metric -- as long as you improve it -- you can drive tremendous changes!"
There are a lot of potential variables to measure, which is where many data center managers get bogged down. But you don't need to swallow the ocean in your first attempt to gauge your data center's efficiency. You can take a simple approach to measure how much of your data center's power actually goes to IT equipment (i.e., servers, storage and network gear) and how much is sucked up by air conditioners and lost in AC/DC (alternating current/direct current) conversions at power equipment.
This measurement is expressed as a ratio most commonly known as power usage effectiveness (PUE), which has been accepted by the Green Grid vendor consortium, the EPA and ASHRAE as a useful figure for measuring data center efficiency. PUE is determined by dividing the power "in" to the data center (measured at the utility electric meter) by the power "out" used to run the IT equipment for computing. The equation looks like this:
PUE = total facility power ÷ IT equipment power
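As a minimal sketch, the ratio can be computed directly from two meter readings. The 1,000 kW and 500 kW figures below are hypothetical readings, not measurements from any facility in this report:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power divided by IT load."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical readings: 1,000 kW at the utility meter, 500 kW at the IT load.
# A PUE of 2.0 means half the facility's power never reaches the servers --
# it goes to cooling, AC/DC conversion losses and other overhead.
print(pue(1000.0, 500.0))  # 2.0
```

A PUE of 1.0 would mean every watt entering the building reaches IT equipment; tracking the ratio over time shows whether cooling and power-distribution changes are actually paying off.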
For a complete rundown on how to start measuring your data center energy efficiency, check out Turner's webcast "Implementing energy-efficiency metrics in data centers."
Be a green hero. Once you've become a pro at managing IT energy use, you can sign up to help the EPA get a handle on data center energy consumption nationally. Check out the EPA's National Data Center Energy Efficiency Information Program.
Read more about green data center initiatives in the UK.
- Section I appendix
- What is hot aisle/cold aisle?
- McKinsey & Co./Uptime Institute report
- America's Climate Security Act of 2007
- Climate change legislation and the data center
- U.S. Environmental Protection Agency
- Microsoft exec discusses company's data center strategy
- The EPA's Energy Star program
- Data centers and energy security podcast
- Section II appendix
- Interview with VP of facilities, the Planet
- The Planet saves money with extreme hot aisle/cold aisle practices
- ASHRAE Technical Committee 9.9
- ASHRAE to expand temperature and humidity recommendations
- Colocation company saves money using cooling economizers
- UPS' data center goes green
- The benefits of computational fluid dynamics in the data center
- IDC Architects
- Outside air economizers in the data center
- Section III appendix
- VMware Inc.
- Companies yet to cash in on rebates for virtualization
- LessWatts.org: Saving power with Linux
- LessWatts forum offers energy-saving tips
- Microsoft: Windows Server 2008 will reduce power consumption
- Linux vs. Windows Server 2008: A power consumption test
- Energy Star specifications for servers released
- Computer Site Engineering Inc.
- The Green Grid
- Implementing energy-efficiency metrics in data centers
- EPA's National Data Center Energy Efficiency Information Program