
RMI: Reduce data center power consumption through better engineering

Amory Lovins, chief executive officer of the Snowmass, Colo.-based Rocky Mountain Institute (RMI), says companies need to reduce data center power consumption through better engineering, and he has several recommendations for doing it. RMI published an exhaustive report on data center efficiency back in 2003, before data center power consumption was the huge problem it is today. SearchDataCenter.com interviewed Lovins after his keynote presentation at The Uptime Institute Symposium earlier this month.

Was data center efficiency as big an issue four years ago when RMI wrote the report?

The far-sighted ones knew that the industry would have to evolve toward efficiency and reliability, because they saw earlier than most where those curves were heading, and you don't want to go there. I think Ken [Brill] is right to say that when we did the exercise in '03, the economic consequences of the inefficiency of devices, software, servers and power supplies -- and therefore data centers -- hadn't become as unbearably obvious as they are now. Now that there are big dollar signs attached, everyone realizes we need a solution fast. Fortunately, it exists.

Can't you just stop the demand for CPUs?

Nobody seems to think that's an option. Some of [the demand] is coming from what you can do with more bandwidth. As we go to advanced mobile services, with fiber and fast networks becoming ubiquitous, a lot more people are streaming video, for example. It isn't just about computations anymore. There is more e-commerce. There are more demanding calculations of all kinds. If we go to a model where software is not shrink-wrapped but leased over the Web, there is still further demand for service.

Some of the extra demand for processor cycles comes from new applications. Some substantial amount, we don't know how much, comes from bloatware. If software were written to a higher standard, if more people really did hate bad code and stopped buying it, we'd need a lot less computation to do what we want to do.
You mentioned that Microsoft Windows Vista actually consumes more energy than Windows XP. Wouldn't you assume Microsoft would work to reduce the number of cycles it takes to do work?

I'm not in a position to comment on Microsoft's practices or business model. I know their code is very complicated. I use a Mac. They are probably their own biggest customer. It would be interesting to know what their experience is.

I heard someone who knows the field better than I do comment that basic reforms in the terseness of code would be the biggest way on earth to save processor cycles. I don't know if that's correct, but it's a hypothesis that's worth exploring for them and for every other producer of code. We've tended to value good code for performance, in terms of getting a job done faster or with less hardware. But we now also need to realize that it translates into watts, which have costs in both dollars and climate.
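As a rough illustration of how leaner code "translates into watts," here is a back-of-envelope sketch. The fleet size, the linear utilization-to-power model and the 20% cycle reduction are illustrative assumptions, not figures from Lovins or RMI; only the six-cents-per-kilowatt-hour rate comes from later in the interview.

```python
# Rough sketch (not from the interview): how trimming CPU work translates into
# watts and dollars. All numbers below are illustrative assumptions.

SERVERS = 1_000                 # assumed fleet size
IDLE_W, PEAK_W = 200.0, 400.0   # assumed per-server power at 0% and 100% CPU
UTILIZATION = 0.60              # assumed average CPU utilization
CYCLES_SAVED = 0.20             # assume leaner code cuts CPU work by 20%
PRICE_PER_KWH = 0.06            # $/kWh, the rate quoted later in the interview
HOURS_PER_YEAR = 8_766

def server_watts(util):
    """Simple linear power model between idle and peak draw (an assumption)."""
    return IDLE_W + (PEAK_W - IDLE_W) * util

before = server_watts(UTILIZATION)
after = server_watts(UTILIZATION * (1 - CYCLES_SAVED))
saved_w = (before - after) * SERVERS

annual_kwh = saved_w * HOURS_PER_YEAR / 1000
print(f"Power saved at the servers: {saved_w/1000:.1f} kW")
print(f"Annual energy saved: {annual_kwh:,.0f} kWh")
print(f"Annual cost saved (IT load only): ${annual_kwh * PRICE_PER_KWH:,.0f}")
```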
You mentioned using an air-side economizer in your presentation. Should everyone be looking at that method for data center cooling?

Sure. If you're in Singapore, where it's 84% relative humidity and ranges from hot to broiling, it's not so exciting. But in most U.S. climates, you can get half, and in many cases three quarters, of your annual ton-hours from an air-side or water-side economizer or a combination of them. An air-side economizer is very cheap in capital cost and uses essentially no energy, just a tiny bit for controls. A water-side economizer (evaporative cooling with a cooling tower and heat exchangers in your chilled water loop) [costs] $100 per ton. If you design it very well, it gives you a coefficient of performance of 100 or even 125.
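To put those coefficient-of-performance numbers in perspective, here is a quick comparison of the electricity needed per ton-hour of cooling. The 100-to-125 economizer range is Lovins's figure; the chiller-plant baseline COP is an assumption for illustration.

```python
# Electricity needed per ton-hour of cooling at different coefficients of
# performance (COP). The economizer COP range is from the interview; the
# chiller-plant baseline is my own rough assumption.

TON_KW = 3.517  # 1 ton of refrigeration = 3.517 kW of cooling (12,000 BTU/h)

def kwh_per_ton_hour(cop):
    """Electric input (kWh) to deliver one ton-hour of cooling at a given COP."""
    return TON_KW / cop

systems = {
    "typical chiller plant (assumed COP ~ 3)": 3,
    "water-side economizer, well designed (COP ~ 100)": 100,
    "economizer, very well designed (COP ~ 125)": 125,
}

for name, cop in systems.items():
    print(f"{name}: {kwh_per_ton_hour(cop):.3f} kWh per ton-hour")
```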
You also mentioned using a slush pile to provide chilled water. Can you explain that?

You don't need an engineering background to understand that if it's at least a few degrees below freezing, you can make snow, or actually slush. Not fluffy snow; you want something dense, like sherbet. You can make a big mountain of the stuff and put it on the ground or in a hole in the ground. You get about 100 units or more of 32-degree melt-water, harvested off the bottom in a liner and pumped through your data center, for each unit of electricity that it takes to pump it and blow the snow. The capital cost is a few hundred dollars a ton.
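A short sketch of that slush-pile arithmetic, roughly 100 units of cooling per unit of electricity, compared against a conventional chiller. The cooling load and the chiller COP below are illustrative assumptions, not figures from the interview.

```python
# Slush-pile arithmetic: about 100 units of cooling per unit of electricity
# for snowmaking and pumping (from the interview). The cooling load and the
# chiller baseline are my own illustrative assumptions.

COOLING_LOAD_KW = 500.0      # assumed average data center cooling load
HOURS = 8_766                # one year
SLUSH_COP = 100.0            # cooling out per electricity in (from the interview)
CHILLER_COP = 3.0            # assumed conventional chiller baseline
PRICE_PER_KWH = 0.06         # $/kWh, the rate quoted later in the interview

cooling_kwh = COOLING_LOAD_KW * HOURS          # thermal kWh of cooling needed
slush_elec = cooling_kwh / SLUSH_COP           # electricity for pumps and snowmaking
chiller_elec = cooling_kwh / CHILLER_COP       # electricity for a chiller plant

print(f"Cooling delivered:      {cooling_kwh:,.0f} kWh (thermal)")
print(f"Slush-pile electricity: {slush_elec:,.0f} kWh")
print(f"Chiller electricity:    {chiller_elec:,.0f} kWh")
print(f"Annual savings at $0.06/kWh: ${(chiller_elec - slush_elec) * PRICE_PER_KWH:,.0f}")
```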
Another method of efficient cooling you mentioned, called the Pennington cycle, uses a desiccant. Can you explain that process?

If you don't have the land or the winter cold to make ice, then you can cool with a desiccant, which works best in very hot climates. A desiccant is a substance that takes water out of the air, making the air hotter and drier. You can use solar heat or the heat of your exhaust air to regenerate your desiccant. You can then add a little bit of water back into the hot, very dry air and make it much cooler and moderately moist, which is a good condition for a data center. You can mix that cool, somewhat moistened air with outside air directly, or with outside air that is cool but not moistened, through an air-to-air heat exchanger. You get the mixture of coolness and moisture that you want in the air going into your data center. Again, if you do this very well, you can get about 100 units of cooling per unit of electricity. It's traditionally done with gas heat, but it's better to do it with free heat from your equipment. Trane has a desiccant that regenerates at 87 F.
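The evaporative step Lovins describes (adding water back into hot, very dry air) can be approximated with a simple latent-heat calculation. This is a rough adiabatic estimate, not a full psychrometric model of the Pennington cycle, and the moisture figure is an assumption.

```python
# Simplified estimate of the evaporative step in desiccant cooling: adding
# water to hot, very dry air cools it by the latent heat of evaporation.
# Rough adiabatic approximation only; the moisture added is an assumed value.

CP_AIR = 1.006       # kJ/(kg*K), specific heat of dry air
H_FG = 2450.0        # kJ/kg, latent heat of vaporization of water (approx.)

def evaporative_temp_drop(grams_water_per_kg_air):
    """Temperature drop (K) from evaporating the given moisture into 1 kg of air."""
    kg_water = grams_water_per_kg_air / 1000.0
    return kg_water * H_FG / CP_AIR

# Example: hot, very dry air leaving the desiccant picks up 6 g of water per
# kg of air on its way to the data center (assumed value).
drop = evaporative_temp_drop(6.0)
print(f"Adding 6 g/kg of moisture cools the air by about {drop:.1f} C")
```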
There are also some innovations coming in the servers themselves, for example fans with Fibonacci-spiral blade designs that move air much more efficiently. When do you expect those to have an impact on power consumption?

They're just coming out on the market. It's been in R&D for several years, and I believe this is the year that the first fans and pumps come to market. I believe the computer muffin fans will be first. There are about a billion of those made a year, and the new type of blade can actually be retrofitted into existing muffin fans and give you up to 30% more flow per watt, or 10 dBA less noise. The company that develops and licenses that technology is PaxScientific.com.
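For a sense of what "up to 30% more flow per watt" could mean in practice, here is the simple arithmetic under two readings of that figure; the baseline fan wattage is an assumption.

```python
# Quick arithmetic on the "up to 30% more flow per watt" claim, read two ways:
#   (a) same power, 30% more airflow;
#   (b) same airflow, less power (simple inverse of the flow-per-watt ratio).
# The 10 W baseline fan is an illustrative assumption.

BASELINE_FAN_W = 10.0   # assumed power draw of a conventional server muffin fan
IMPROVEMENT = 1.30      # 30% more flow per watt (from the interview)

same_power_flow_gain = (IMPROVEMENT - 1) * 100
same_flow_power = BASELINE_FAN_W / IMPROVEMENT

print(f"(a) At {BASELINE_FAN_W:.0f} W, airflow rises by about {same_power_flow_gain:.0f}%")
print(f"(b) For the original airflow, power drops to about {same_flow_power:.1f} W "
      f"({(1 - 1/IMPROVEMENT)*100:.0f}% savings)")
```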
What do you think about the idea of hardening server equipment to withstand a greater range of temperatures, allowing data centers to use less cooling?

We looked at this for mainframe computers twenty years ago, when people were setting extremely stringent temperature and humidity requirements. I called IBM and asked what was on the spec sheet for environmental conditions. They gave me very wide ranges. So I took them back to the first guy and said, I'm not sure where you got your assumptions, but this is what the manufacturer says this machine needs, and this is how much money you would save if you just followed that spec.

But given what I've said about more efficient HVAC, I don't see why you should have to go to hardened equipment. You may want to do that in places with deficient infrastructure or major vulnerabilities in the electric grid.
What is it going to take for server manufacturers to start standardizing equipment with 80 Plus power supplies?

Customer demand, and the desire to leapfrog competitors. You can drive that switch from the vendor side or the customer side. When customers realize that heat equals downtime (running ten degrees Celsius cooler doubles your mean time between hardware failures) and that saving a watt in the data center is worth around 20 to 30 bucks at six cents a kilowatt-hour, they will tend to favor manufacturers with efficient equipment. And the power supply is in many ways the most important efficiency opportunity in the server, because of the compounding heat and energy losses from it.
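One plausible way to arrive at a value of roughly $20 to $30 per watt saved is sketched below. The cascade multiplier, the planning horizon and the avoided capital cost are assumptions chosen only to show how the losses and costs compound; the six-cents-per-kilowatt-hour rate is from the interview.

```python
# A sketch of how "$20-30 per watt saved" can be reached. The cascade
# multiplier, lifetime and capital cost per watt are my own assumptions;
# only the 6 cents/kWh rate is from the interview.

PRICE_PER_KWH = 0.06     # $/kWh (from the interview)
HOURS_PER_YEAR = 8_766
CASCADE = 2.5            # assumed: 1 W saved at the server avoids ~2.5 W at the meter
                         # (power-supply, UPS, distribution and cooling losses compound)
YEARS = 10               # assumed facility planning horizon (no discounting)
CAPEX_PER_W = 6.00       # assumed avoided capital for power/cooling infrastructure, $/W

energy_cost = CASCADE * HOURS_PER_YEAR / 1000 * PRICE_PER_KWH * YEARS
capex_avoided = CASCADE * CAPEX_PER_W
total = energy_cost + capex_avoided

print(f"Energy over {YEARS} years:       ${energy_cost:5.2f}")
print(f"Avoided infrastructure capex:  ${capex_avoided:5.2f}")
print(f"Total value of 1 W saved:      ${total:5.2f}")
```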
Recent studies have shown that DC power in the data center can reduce energy consumption by 20%. You mentioned that Japanese data centers have been running DC power successfully for years. Why are they so far ahead in that regard?

The biggest builder of data centers [in Japan] is NTT Facilities, and NTT is the national telecommunications company. The telecom tribe, which is culturally distinct from the data processing tribe, has always run on DC, typically 48 volts. When they started building data centers, it was natural for them to transfer their DC bus expertise. We've been told, of course, by vendors of AC UPS systems that you can't really do DC: the bus bars have to be too big and heavy, it's unsafe, and you have to keep tightening the connections. But this doesn't seem to bother our Japanese friends, or other industries that have run extremely large DC currents through their electrometallurgical equipment for about a century. That engineering is well worked out; it just isn't well known to the AC tribe. This is going to be an empirical question settled in the marketplace. I was just calling attention to an existing parallel universe that does not run on AC and has about an order of magnitude better uptime and much higher efficiency.
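To see how conversion losses compound in the two "tribes," here is an illustrative comparison of an AC distribution chain and a 48-volt DC chain. The per-stage efficiencies are rough assumptions for hardware of that era, not measurements from NTT Facilities or the studies mentioned above.

```python
# Illustrative compounding of conversion losses in an AC vs. a 48 V DC
# distribution chain. Per-stage efficiencies are rough assumptions only.

def chain_efficiency(stages):
    """Overall efficiency of a series of conversion stages (losses compound)."""
    eff = 1.0
    for _, stage_eff in stages:
        eff *= stage_eff
    return eff

ac_chain = [
    ("double-conversion UPS", 0.90),
    ("PDU / transformer",     0.97),
    ("server AC-DC supply",   0.75),
    ("on-board DC-DC (VRM)",  0.85),
]

dc_chain = [
    ("rectifier to 48 V bus", 0.95),
    ("server DC-DC supply",   0.92),
    ("on-board DC-DC (VRM)",  0.85),
]

ac = chain_efficiency(ac_chain)
dc = chain_efficiency(dc_chain)
print(f"AC chain: {ac:.1%} of utility power reaches the chips")
print(f"DC chain: {dc:.1%} of utility power reaches the chips")
print(f"Energy saved for the same IT work: {(1/ac - 1/dc) / (1/ac):.1%}")
```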
What stops people from taking these recommendations to reduce data center power consumption?

Different people have different comfortable rates of change. If you're in a stove-piped organization where you see making a change as career risk with no reward, you're less likely to innovate than if you're in a learning organization where people talk to each other across departmental boundaries and all get around the same table to solve their common problem. Otherwise it's easy to let your problem go on being somebody else's problem. If you're paid for maximizing uptime, but you don't pay the electric bill and you don't pay the capital cost, you have a skewed incentive. As long as we keep stove-piping this business, we'll go on rewarding the wrong things and getting bad results. Conversely, the companies that learn how to create that vision across boundaries will win in the market. They will have much better computing with less energy, less capital cost and higher uptime.

I think the degree of complexity, cost, size, weight and unreliability that now afflicts [data center] power and cooling systems has become unsupportable, especially combined with inefficient servers in the first place and inefficient software running on them. It's time to take a fresh look at the whole thing, prune away the layers of complexity and get back to something simple that works.

Let us know what you think about this story: email Matt Stansberry, Site Editor.
Check out the SearchDataCenter.com data center blog.
