
Ken Brill: Tune your data center engine

Ken Brill, founder and executive director of The Uptime Institute, talks about why the group's next symposium, scheduled for March in Orlando, Fla., will focus on data center efficiency rather than uptime, as well as how more cooling can lead to more hot spots and how a new server efficiency standard might be published.

Your symposium next year will focus on data center efficiency rather than uptime. Why the change?
We call it the invisible crisis in the data center. It's an economic crisis. It's resulting in people seeing hot spots, and they're running out of capacity. It's fundamentally an economic crisis.

What do you mean by economic crisis?
Historically, every time you bought a new computer, you got a 12-times increase in performance for the same amount of money. But now, when you include the costs of power and cooling for a computer, instead of a 12-times performance increase you're getting a much smaller one. Call it a decline in economic productivity. Our focus at the symposium is how to restore that productivity.

What is the 'site infrastructure energy efficiency ratio'?
It's the ratio of energy that comes into the data center to what is delivered to the computer equipment. In most places, for every 3W that comes into the building, only 1W is going to the computer equipment.

What happens to the other 2W?
That's the cost of cooling and inefficient operations.

So how can data centers improve their site infrastructure energy efficiency ratio?
The right answer is a whole series of things. The first one is bypass airflow. In most data centers, 60% of the air is not going where it needs to go. So reducing bypass airflow from 60% to 10% will result in getting the air to where it needs to go, and then that requires less cooling.
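The arithmetic behind the ratio and the bypass-airflow reduction described above can be sketched in a few lines. This is an illustrative sketch only, not Uptime Institute methodology; the function names and the airflow figures are hypothetical, while the 3W/1W and 60%/10% numbers come from the interview.

```python
def site_infrastructure_ratio(power_in_w: float, power_to_it_w: float) -> float:
    """Ratio of power entering the building to power delivered to IT equipment."""
    return power_in_w / power_to_it_w

def bypass_airflow(total_cooling_cfm: float, bypass_fraction: float) -> float:
    """Cooling airflow that never reaches the equipment intakes."""
    return total_cooling_cfm * bypass_fraction

# The 3W-in / 1W-delivered example from the interview:
ratio = site_infrastructure_ratio(power_in_w=3.0, power_to_it_w=1.0)
print(ratio)  # 3.0 -- 2 of every 3 watts lost to cooling and inefficiency

# Cutting bypass airflow from 60% to 10% of total cooling airflow
# (the 100,000 CFM total is a made-up figure for illustration):
wasted_before = bypass_airflow(100_000, 0.60)
wasted_after = bypass_airflow(100_000, 0.10)
print(wasted_before - wasted_after)  # 50000.0 CFM of cooling capacity recovered
```

Under these assumed figures, half the room's cooling airflow is recovered simply by redirecting air, which is why less mechanical cooling is then needed.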

How do you reduce bypass airflow?
Seal the cable openings at the back of racks to prevent air from escaping, and move perforated tiles from the hot aisle to the cold aisle. In a number of controlled studies of typical data centers, those two actions brought rooms with average temperatures above 75 degrees and hot spots of 87 to 90 degrees down to a 70-degree average, with the hottest spot at 74 degrees. In one case, the customer canceled orders for more cooling units, which would have actually compounded the problem.

What many people do is, if they have hot spots, they go buy more cooling, which in many cases can make things worse. It's kind of like buying a new car instead of tuning up the engine you've already got.

Which is more important, data center efficiency or uptime?
I don't think they're necessarily in opposition. Places that have the worst energy efficiency ratios also typically have the worst hot spots. It's a matter of skill, it's a matter of engineering. There's a hidden amount of skill to running a facility, and we oftentimes don't recognize or appreciate what goes into achieving availability and efficiency.

Can you give me an example of how data center efficiency and uptime are connected?
Sure. There was a data center, approximately 25 years old, that always had problems with hot spots, which are predictors of poor reliability. By doing some rather obvious things, we were actually able to turn off cooling, and the number of hot spots went down. Energy efficiency improved.

You ask yourself, how could that be? You have hot spots, you reduce the amount of cooling, and the number of hot spots goes down, the stability of the room increases and the energy efficiency of the room improves? The answer is that cooling is more art than science today. The incentives are to run more cooling than is needed, and although it's counterintuitive, more cooling does not necessarily mean fewer hot spots.

The EPA [Environmental Protection Agency] is supporting a new server energy efficiency standard. Where do you think the results of these measurements should be published?
We would be interested in doing that, but we haven't figured out the economic model. There are three basic models. There is the 'Consumer Reports' subscription model: they buy products at the consumer level, so you're not getting a hyped-up machine, and the consumer of the information pays for it. Because they pay for it, they have confidence in the integrity of the testing that's been done.

The second model is that manufacturers run the testing on their own and publish the results, but then it's not all in one place, and there's always the suspicion that the tests are rigged. In the middle would be tests the manufacturers paid for but which were run by a third party and available through that third party.

Which method do you think is best?
I think the best way to do that is if it's user-paid-for. As long as it's vendor-paid-for, then it's always subject to some concern.

Could Uptime do this?
We have the technical resources to do it, but is it something that's going to be financially viable? Personally, I think the 'Consumer Reports' approach is going to be the one that people are going to feel the most comfortable with, but how much will people pay for these reports?

Let us know what you think about the story; e-mail: Mark Fontecchio, News Writer

