What's happening in data center energy management

Smart PDUs and DCIM tools jump-start data center energy management, but those are short-term measures. A strong strategy keeps a data center energy-efficient for the long haul.

New equipment, tools and business strategies will rein in runaway IT energy costs, the biggest operating expense in the data center.

Data center energy management requires a multi-pronged approach: power-sipping hardware, tools that generate a clear depiction of energy use and expose areas where changes will pay off, and business strategies that bring in the right energy control products with the best return on investment (ROI).

It's impossible to determine energy efficiency without a usage baseline. Estimates based on power use measured at the building's meter are adequate for facility-wide assessments, such as power usage effectiveness (PUE), but not for gauging the efficiency of specific systems. For example, a PUE of 1.2 suggests excellent energy efficiency because almost all power entering the building reaches the IT equipment. However, if 50% of the servers are idle but powered on, that PUE conceals a large amount of wasted energy.
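To see how a healthy PUE can coexist with substantial waste, consider a back-of-the-envelope calculation; the figures below are hypothetical illustrations, not measurements:

```python
# Back-of-the-envelope illustration: a "good" PUE can still hide idle-server waste.
total_facility_kw = 1200.0   # hypothetical power measured at the building meter
it_load_kw = 1000.0          # hypothetical power delivered to IT equipment

pue = total_facility_kw / it_load_kw
print(f"PUE: {pue:.2f}")     # 1.20 -- looks excellent

# Suppose half the servers are idle but powered on, each drawing roughly
# 60% of peak wattage while doing no useful work.
idle_fraction = 0.50
idle_power_ratio = 0.60
wasted_kw = it_load_kw * idle_fraction * idle_power_ratio
print(f"Power feeding idle servers: {wasted_kw:.0f} kW")   # 300 kW of waste
```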


Self-aware data center equipment

Today's enterprise-class low-power servers tout processor performance states (P-states) and idle states (C-states) that nearly shut down idle processor cores. IBM's System x, Dell's 12th-generation PowerEdge and other servers use thermal controls such as variable-speed cooling fans monitored by tachometers, multiple temperature measurement points within the system and even continuous power monitoring that calculates and reports usage to compatible management tools.
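On servers that expose power telemetry through a baseboard management controller, that continuous power monitoring can be polled directly. Here's a minimal sketch, assuming a host with ipmitool installed and a BMC that supports DCMI power readings; output formats vary by vendor firmware:

```python
# Minimal sketch: poll a server's built-in power reading over IPMI/DCMI.
# Assumes ipmitool is installed and the BMC supports "dcmi power reading";
# the output parsing below is illustrative and may vary by vendor.
import re
import subprocess

def read_power_watts():
    out = subprocess.run(
        ["ipmitool", "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    watts = read_power_watts()
    print(f"Current draw: {watts} W" if watts else "No power reading available")
```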

Power distribution units (PDUs) use intelligence to help organizations map energy draw. Networked PDUs like APC's Switched Rack AP8000 series and CyberPower's Monitored units offer real-time power monitoring and temperature/humidity sensing. They also control power at the outlet level for granular cycling of devices, such as individual rack servers. Smart PDUs pay off in data centers with numerous racks of equipment that demand granular monitoring and control; management tools process PDU data to analyze and report on power use and environmental conditions in the racks.
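As one illustration of how management tools pull readings from networked PDUs, the sketch below polls an outlet-level power value over SNMP with the pysnmp library. The OID shown is a placeholder, not a real vendor OID; actual object identifiers come from the PDU vendor's MIB:

```python
# Sketch: read one outlet's power draw from a networked PDU over SNMP v2c.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Placeholder OID for an outlet's active power -- real OIDs are vendor-specific
# and come from the PDU's MIB (for example, APC's PowerNet-MIB).
OUTLET_POWER_OID = "1.3.6.1.4.1.99999.1.2.3"   # hypothetical, not a real OID

def poll_outlet_power(pdu_host, community="public"):
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),      # SNMP v2c
        UdpTransportTarget((pdu_host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(OUTLET_POWER_OID)),
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status.prettyPrint()))
    return int(var_binds[0][1])                   # watts reported by the PDU

print(poll_outlet_power("pdu-rack12.example.com"))   # hypothetical hostname
```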

Uninterruptible power supply (UPS) systems that keep data center equipment running during utility power failures also extend energy efficiency and intelligence. For larger enterprises with extensive data centers, scalable UPS systems such as Emerson Power's Liebert NX On-Line 225-600 kVA and GE's Redundant Parallel Architecture products represent two emerging approaches in which battery capacity ramps up incrementally as load increases; UPS units can also be arranged in parallel to support larger loads. Matching battery capacity to the load reduces the energy wasted on charging extra batteries.
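A rough sketch of the modular-capacity idea, with hypothetical module ratings:

```python
# Hypothetical illustration of modular UPS sizing: bring battery modules online
# only as the IT load grows, instead of charging a full complement from day one.
import math

module_kva = 25.0        # capacity of one battery/power module (hypothetical)
redundancy_modules = 1   # keep one spare module for N+1 redundancy

def modules_needed(load_kva):
    return math.ceil(load_kva / module_kva) + redundancy_modules

for load in (60, 120, 240):
    print(f"{load} kVA load -> {modules_needed(load)} modules online")
```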

Enterprise data centers are also revisiting standby or eco-mode operation for energy management. Rather than running a less-efficient AC-DC-AC double conversion continuously, the UPS runs equipment directly from utility power and switches to battery within milliseconds of a utility failure. Modern UPS systems also report readiness, battery status, load and other operating conditions to monitoring and management software.

Critically thinking DCIM

Data center energy management products have evolved to a category called data center infrastructure management (DCIM) -- software that reports granularly from the data center facility to the server and device level.

DCIM does more than monitor energy. As IT equipment communicates status and performance information, DCIM tools assist IT professionals with capacity planning, system inventory and lifecycle management, workload balancing, server consolidation -- including powering down idle servers -- monitoring and improving system resilience, and other insight-driven initiatives.

DCIM software can idle and power down equipment not needed for the data center's current workloads. It also identifies power-hungry systems ripe for a technology refresh, to be replaced with a more energy-efficient model or decommissioned through workload redistribution or consolidation -- even migration to hosted cloud. Tools can also correlate data center temperature and humidity levels with system activity and energy use over time to inform capacity planning decisions.
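The kind of analysis behind those recommendations can be sketched simply: flag servers whose utilization is low but whose power draw is high. The inventory data below is invented for illustration:

```python
# Sketch of the kind of analysis a DCIM tool performs: flag servers with low
# average utilization and high power draw as candidates for consolidation,
# refresh or decommissioning. The inventory records are hypothetical.

servers = [
    {"name": "db-01",  "avg_cpu_pct": 72, "avg_watts": 410},
    {"name": "web-07", "avg_cpu_pct": 4,  "avg_watts": 290},
    {"name": "app-12", "avg_cpu_pct": 9,  "avg_watts": 350},
]

def refresh_candidates(inventory, max_cpu_pct=10, min_watts=250):
    return [s for s in inventory
            if s["avg_cpu_pct"] <= max_cpu_pct and s["avg_watts"] >= min_watts]

for s in refresh_candidates(servers):
    print(f"{s['name']}: {s['avg_cpu_pct']}% CPU at {s['avg_watts']} W "
          "-- consolidate, refresh or retire")
```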

To cope with unruly energy usage, look for features such as power capacity forecasts, capacity modeling when data center conditions change, and energy chargeback billing for the internal departments that consume computing resources. These features let the business estimate the power, cooling, space and network resources new systems will need, or calculate the carbon footprint of each energy subsystem.
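A chargeback report can be as simple as multiplying each department's attributed consumption by the utility rate; the readings and rate below are hypothetical:

```python
# Simple chargeback sketch: bill internal departments for the energy their
# systems consumed. The meter readings and rate are hypothetical.

rate_per_kwh = 0.11   # dollars per kWh; varies widely by region and contract

dept_kwh = {          # monthly kWh attributed to each department's racks
    "finance":     5200,
    "engineering": 14800,
    "marketing":   1900,
}

for dept, kwh in dept_kwh.items():
    print(f"{dept}: {kwh} kWh -> ${kwh * rate_per_kwh:,.2f}")
```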

DCIM tool choice is not a matter of data center size, but of the software's compatibility with current and future data center equipment and a feature set that meets your business goals. While any enterprise can adopt DCIM, the overhead involved means full-featured suites generally suit organizations running several hundred servers or more; smaller organizations can benefit from narrower point-solution tools.

Homogeneous data centers have server-specific management tool options. For example, IBM System x works with IBM Systems Director for electrical and thermal monitoring and control -- among many other tasks. Dell's PowerEdge portfolio uses the OpenManage systems management platform with hardware-integrated Dell Remote Access Controllers. Hewlett-Packard ProLiant Gen8 servers, storage and networking products interface with HP Systems Insight Manager software.

Dozens of third-party energy and infrastructure management tools meet data center needs, including Cisco's EnergyWise Suite, Raritan's Power IQ -- a component of the vendor's DCIM suite -- Schneider Electric's APC StruxureWare for Data Centers, and Nlyte 7.5 from Nlyte Software.

DCIM adoption will feed technology refreshes, which in turn enhance DCIM features by adding more DCIM-compliant equipment and sensors.

Long-term energy strategies

Most organizations can rely on new systems and software over the short term for data center energy management, but maximizing efficiency requires a long-term strategy and educated goals.

Many organizations have already consolidated hardware via server virtualization, and most can push virtualization ratios higher. By enabling one physical server to host multiple workloads, a business slashes its server count and reduces energy demands for systems and cooling. However, virtualization requires an investment in hypervisors such as VMware vSphere and Microsoft Hyper-V, plus virtualization-specific IT expertise. Virtualization is key to energy efficiency and workload mobility, but don't expect results overnight.
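The arithmetic behind the savings is straightforward; the server counts, wattages and PUE in this sketch are hypothetical:

```python
# Rough illustration of the energy payoff from server consolidation.
# Counts, wattages and PUE below are hypothetical.

hours_per_year = 8760
pue = 1.6                     # facility overhead multiplier
watts_per_server = 350

before = 200                  # physical servers before virtualization
after = 25                    # hosts remaining after consolidating at ~8:1

def annual_kwh(server_count):
    return server_count * watts_per_server * pue * hours_per_year / 1000

saved = annual_kwh(before) - annual_kwh(after)
print(f"Estimated annual savings: {saved:,.0f} kWh")
```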

An emerging energy-efficiency strategy is relocating workloads to remote public cloud or hosted colocation providers. Suitable migrations reduce server counts and ease energy and cooling demands, but this isn't an option for every workload -- mission-critical, geographically regulated, high-security or hard-to-rearchitect workloads stay on in-house servers. With outsourcing, each technology refresh cycle means fewer new server purchases, and large enterprise data centers see measurable energy savings as well.

In large data center facilities, several infrastructure improvements boost energy efficiency by minimizing the number of voltage and current conversions between the utility and the IT systems. For example, some organizations opt for higher operating voltages, such as 208 volts AC rather than the traditional U.S. 120 volts AC, or the higher 415/240 volts AC common in Europe; higher operating voltages require fewer step-down conversions. Another approach delivers a single uniform DC power source to specially designed equipment based on open standards such as Facebook's Open Compute Project, which allows a single AC-to-DC conversion and eliminates servers' many individual power supplies. UPS systems can deliver DC from batteries directly to the DC power distribution system, eliminating another conversion. However, the shift from AC- to DC-powered equipment requires a complete refit of servers and systems.
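The payoff from trimming conversion stages compounds multiplicatively; the stage efficiencies below are hypothetical but illustrate the effect:

```python
# Why fewer conversions matter: each AC/DC or voltage step-down stage loses a
# few percent. Stage efficiencies below are hypothetical illustrations.

def chain_efficiency(stages):
    eff = 1.0
    for stage_eff in stages:
        eff *= stage_eff
    return eff

traditional = [0.96, 0.95, 0.94, 0.93]   # UPS, transformer, PDU, server PSU
dc_distribution = [0.96, 0.95]           # single AC-to-DC conversion, DC bus

print(f"Four-stage chain: {chain_efficiency(traditional):.1%} delivered")
print(f"Two-stage chain:  {chain_efficiency(dc_distribution):.1%} delivered")
```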

Energy costs vary widely, anywhere from 3 cents to 30 cents per kWh depending on location, season, source and demand. Look into purchasing energy from diverse providers or regional power farms -- such as a local wind or solar farm. In colocation and hosting scenarios, service providers can aggregate power costs, or colocation customers can move workloads between locations for the best deal on power.
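At an 8 kW rack running around the clock, that rate spread translates into dramatically different annual bills; the figures below are illustrative:

```python
# The same rack costs very different amounts to run depending on the rate.
# Load and rates are hypothetical illustrations of the 3-to-30-cent spread.

rack_kw = 8.0
hours_per_year = 8760

for label, rate in [("low-cost region", 0.03),
                    ("typical rate", 0.11),
                    ("peak/high-cost region", 0.30)]:
    annual_cost = rack_kw * hours_per_year * rate
    print(f"{label}: ${annual_cost:,.0f} per year")
```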

No power strategy is complete without considering backup generators. The expense, regular testing, switchover, and maintenance -- not to mention the pollution -- of diesel generators make solid oxide fuel cell generators such as Bloom Energy Servers attractive for future backup power. Fuel cells use natural gas or a variety of renewable biofuels to produce electricity. For large operations, consider moving to fuel cell generators as the full-time primary power source and leaving utility power as the backup.

Business goals will help shape which of these long-term strategies makes sense for your data center. Calculate the ROI on any IT energy management initiative, particularly the more disruptive changes. For example, don't build a new state-of-the-art data center facility if the company plans to shed its IT workloads to private cloud providers. Don't deploy local Bloom Energy Servers with an 8.6-year ROI if you expect to break ground on a new remote facility in just a few years. And a long-term energy purchase contract won't save money if it doesn't account for the IT team's massive server virtualization and consolidation project.
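A simple-payback comparison is often enough to rank candidate initiatives; the project names, costs and savings below are hypothetical, and a real analysis would also weigh maintenance, financing and the project's useful life:

```python
# Simple-payback sketch for comparing energy initiatives.
# All figures are hypothetical examples.

def payback_years(capital_cost, annual_savings):
    return capital_cost / annual_savings

projects = {
    "DCIM deployment":       (250_000, 90_000),
    "On-site fuel cells":    (1_800_000, 210_000),
    "UPS eco-mode retrofit": (60_000, 40_000),
}

for name, (capex, savings) in projects.items():
    print(f"{name}: {payback_years(capex, savings):.1f}-year simple payback")
```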

This was first published in June 2014
