Power-saving technologies in the data center

With data centers exceeding their planned watts-per-square-foot capacity, many are looking to new technologies to keep energy demand from spiraling out of control. IT pros are exploring virtualization, multi-core chips and DC-powered equipment to fight the problem.

This is the final article in the series on the price of power in the data center.

Tom Roberts, director of data center services for Novi, Mich.-based Trinity Information Services, has a problem with power.

His facility is only three and a half years old, but its watts per square foot are climbing so rapidly that he's afraid he'll have to start knocking down walls to make room for the Intel servers, which he said are arriving at a rate of 10 a week.

"We'd planned for 50 to 70 watts per square foot and we're blowing past those numbers," Roberts said. "We'd planned for 20% growth per year [in electricity demand], but we're at 45% growth per year."

As a result, Roberts was forced to look at new ways to bring down his demand.

Roberts said virtualization, software that lets you run multiple operating system images on a single machine, has really helped reduce electricity consumption. His organization recently deployed software from EMC subsidiary VMware to get more out of underutilized servers -- and Roberts said it is working.

At his data center, Roberts has a mix of 750 Intel-based servers, mostly from Hewlett-Packard Co. A study by his department found that 80% of those servers were running at 5% to 15% utilization. With VMware, Roberts is now collapsing 10 to 18 applications onto a single server, clustering the hosts for failover protection and running fewer machines overall.
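A rough back-of-the-envelope calculation shows why that kind of consolidation pays off. The sketch below uses the figures Roberts cites; the exact consolidation ratio and the per-server wattage are illustrative assumptions, not Trinity's actual planning numbers.

```python
import math

# Back-of-the-envelope consolidation estimate. The server count, utilization
# share and consolidation range come from the article; the ratio used here
# and the per-server wattage are illustrative assumptions.

TOTAL_SERVERS = 750         # Roberts' Intel server count
UNDERUTILIZED_SHARE = 0.80  # share running at only 5% to 15% utilization
CONSOLIDATION_RATIO = 12    # assumed workloads per VMware host (article cites 10-18)
WATTS_PER_SERVER = 400      # assumed average draw of one x86 server, in watts

candidates = int(TOTAL_SERVERS * UNDERUTILIZED_SHARE)       # 600 lightly loaded servers
hosts_needed = math.ceil(candidates / CONSOLIDATION_RATIO)  # about 50 virtualized hosts

retired = candidates - hosts_needed
kw_saved = retired * WATTS_PER_SERVER / 1000

print(f"{candidates} candidates consolidate onto ~{hosts_needed} hosts")
print(f"Roughly {retired} machines and {kw_saved:.0f} kW of IT load removed")
```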

But Roberts knows virtualization isn't a cure-all and he's looking at other technologies as well to shave some kilowatts.

More in this series on the price of power:

IT energy crisis reaching critical mass
Cooling quick fixes drain data centers
Raised floors and efficiency: Controlling cooling matters
IT pros hot under the collar over the price of power

Looking at the chips

Processor manufacturers have started addressing the problem of energy efficiency. New technologies, such as multi-core processors and built-in virtualization capabilities in the x86 space, are going to affect processing power and, potentially, energy efficiency.

The ability to put multiple cores on a single piece of silicon can give hardware a boost, allowing servers to deliver more bang for the buck in a smaller form factor. The technology has been around for a few years on Sun's and IBM's high-end offerings, but it has trickled down to the commodity server space over the last year.

Redirecting the heat

According to Robert Rosen, president of the IT users group SHARE, a lot of the onus for energy efficiency innovation is on the vendor, but there are steps IT pros can take to mitigate the problem.

Rosen said there is an interesting idea generating buzz in the IT community: Ducting hot air from the data center to heat other parts of the building.  

He said it's not going to cut down on the data center's power draw, but might save on the bottom line for the building overall.  

"It's the other side of the coin, recapturing the heat and using it somewhere else," Rosen said. "This is something you would want to do in the initial design, obviously."  

Will redirecting catch on? It could, but if you're ducting air out, you're going to have to bring air back in.  

You want data centers sealed off to prevent shifts in temperature and humidity, so if you start redirecting air to other parts of the building you're going to need some elaborate ducting, monitoring and controls, warned Tom Roberts, director of data center services at Trinity Information Services, a Novi, Mich.-based healthcare IT operation.

"It doesn't sound cheap; I don't know how much savings you're going to get," Roberts said.  

Will redirecting data center heat catch on? E-mail me and let me know what you think.

With multi-core technology, a single processor can do more work. If a new chip does twice the work of the old processor, then even if it draws a little more electricity than the previous version, it still uses less power than the two processors it replaces.
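As a hypothetical illustration of that arithmetic (the wattages below are made-up round numbers, not any vendor's specifications):

```python
# Hypothetical wattages, chosen only to illustrate the trade-off.
single_core_watts = 90   # assumed draw of one older single-core processor
dual_core_watts = 110    # assumed draw of a dual-core part doing the work of two

two_single_core = 2 * single_core_watts   # 180 W for the same throughput
one_dual_core = dual_core_watts           # 110 W

print(f"Two single-core chips: {two_single_core} W")
print(f"One dual-core chip:    {one_dual_core} W "
      f"({one_dual_core / two_single_core:.0%} of the power for the same work)")
```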

David and Goliath chip manufacturers Advanced Micro Devices and Intel have been battling on a new front: performance per watt. The companies are looking to offer a way to measure efficiency, and not just processing speed.

But the numbers can be tricky in these situations. Roger L. Kay, founder and president of Massachusetts-based Endpoint Technologies Associates Inc., said power consumption is all over the map.

"The whole industry is working on it, but a lot of the information is anecdotal," Kay said.

And the math can be confusing, according to Kay. He has seen studies that say AMD's Opteron chips use less power than Intel's offerings, but companies would need more Opterons to do the same amount of work. They would end up with more chips to do the same job, spending more and using more power overall.
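Kay's point is easier to see when the comparison is normalized to a fixed amount of work. In the sketch below, the wattage and throughput figures are invented for illustration and are not AMD or Intel specifications; the chip that draws less power per socket can still lose on total power for the data center.

```python
# Hypothetical figures only -- not actual AMD or Intel specifications.
chips = {
    "Chip A": {"watts": 85,  "work_units": 100},  # lower draw per socket
    "Chip B": {"watts": 110, "work_units": 160},  # higher draw, more throughput
}

WORKLOAD = 1600  # total work units the data center has to deliver

for name, spec in chips.items():
    sockets = -(-WORKLOAD // spec["work_units"])   # ceiling division
    total_watts = sockets * spec["watts"]
    perf_per_watt = spec["work_units"] / spec["watts"]
    print(f"{name}: {sockets} sockets, {total_watts} W total, "
          f"{perf_per_watt:.2f} work units per watt")
```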

According to Charles King, principal analyst with Hayward, Calif.-based Pund-IT Research, the problem with measuring apples-to-apples on chip efficiency is that different applications demand different power draws.

"It's OK to speak about efficiency with a broad brush, but you can't talk about it in terms of executed demands," King said.

King pointed out that dual-core chips can increase the capacity of the processor, taking less time to complete transactions, and there is the potential for savings there. But he said virtualization is going to have a larger impact on utilization, and AMD and Intel are building virtualization functions into their chips.

But King said there's no reason to wait around for AMD and Intel to build virtualization into chips when there are companies like VMware virtualizing x86 servers now.

AC/DC debate out front

With power issues on users' minds, the idea of direct current-powered equipment is getting play in some larger data centers.

Electricity comes from the utility as alternating current (AC), the form in which power is most easily distributed. The AC is converted to DC at the uninterruptible power supply, converted back to AC to push out to the servers, and converted one more time to DC inside each individual server's power supply.

In a DC power distribution system, there is only one conversion, from the main building AC to DC at a common power supply serving multiple pieces of equipment. The DC power is fed directly to the servers and switches, and also keeps the batteries charged. This has been the approach in the telco industry for decades on high-end PBX equipment.

DC-powered servers don't have power supplies built in; the power supply lives in the infrastructure. Taking the power supplies out of the equipment itself and putting them in the rack saves space and eliminates the need for an extra conversion.
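One way to see where the savings come from is to multiply the efficiency of each conversion stage. The per-stage efficiencies below are assumptions chosen for illustration; real UPS and power-supply losses vary widely by vendor and load.

```python
# Cumulative efficiency of each power-conversion chain, with assumed
# per-stage efficiencies (illustrative only).

# Traditional AC distribution: AC -> DC (UPS rectifier), DC -> AC (UPS inverter),
# then AC -> DC again in each server's power supply.
ac_chain = [0.96, 0.95, 0.80]

# DC distribution: one bulk AC -> DC conversion shared by the rack.
dc_chain = [0.92]

def chain_efficiency(stages):
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

print(f"AC chain delivers {chain_efficiency(ac_chain):.0%} of input power to the load")
print(f"DC chain delivers {chain_efficiency(dc_chain):.0%} of input power to the load")
```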

Milpitas, Calif.-based Rackable Systems is one of the few vendors specializing in this area. According to Rackable, customers are saving anywhere from 20% under load to 50% at idle.

There are drawbacks to this technology. First, the investment in a DC power distribution plant only makes sense for large data centers. Second, it's a technology many are unfamiliar with.

Bob Doherty, data center manager at Beth Israel Deaconess Medical Center in Boston and a board member on AFCOM's Data Center Institute, has been investigating DC power systems, but is leery of taking one on in his facility.

"Research facilities, scientific environments -- places with more engineering expertise than the average data center -- are having success," Doherty said. "It's been said that DC power will save 25% to 50% off my bill, and that's really intriguing, especially since my electric company just announced a 27% increase in price. But I'm fearful [of DC systems] from the standpoint of outages. Are you going to hire a DC power engineer to help me with this system? Now I have a totally new environment -- I have no one to turn to, and that would scare the hell out of me."

But according to Tuck Wilson, facility director at the State of Washington's Department of Information Services in Olympia, if you're looking at a major data center, you're going to have to hire an engineer anyway, so that shouldn't hold people back from DC-powered equipment.

Wilson said he had been reading about pending increases in the price of power, and that saving a large percentage of the bill would amount to a significant sum. "If it saves 25% off the bill, you could certainly pay for the engineering and the new equipment," Wilson said.

He noted that phone systems had been using DC power for years, and the major drawback he saw for the existing data center was not having any plug-and-play options between existing servers and new ones.

Roberts has a full-time engineer at his facility investigating the technology at this time, and he agreed that there would be hurdles to jump. "But from the power standpoint alone, it's a huge gain, avoiding the power loss in transmission," he said. "We'll look into anything that can save kilowatts."

Let us know what you think about the story; e-mail: Matt Stansberry, News Editor
