Any machine using more cycles is going to use more power, assuming that the machine has some throttling mechanism to slow down when it's not being used, which most PCs, and even mainframes, have. There is probably more you can do with graphics in Vista that draws a little more power, but that's a nit compared to some of the other issues concerning data center power consumption.
The difference in applications is really a red herring. One of the real culprits is the people who just leave their machines sitting there all day long not doing anything. In other words, what you're really talking about is the percentage of utilization.
The problem you have in the PC server world is that the average utilization is so low that multiple machines need to be running in order to keep up with computing demand. The average utilization on a PC server is around 7%. If that kind of utilization can be justified to your power bill-paying CEO then more power to you.
However, you might consider that the difference in power usage between running at 7% and 100% is very small, because most of the power goes just to keeping the processor running, the fans spinning, the disks turning, etc. These mechanisms are using power whether utilization is up or not.
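The point above can be sketched with a simple linear power model. All of the numbers here are illustrative assumptions, not measurements from the article; the only claim carried over is that idle power dominates, so power per unit of work balloons at low utilization.

```python
# Hedged sketch: energy per unit of work under a simple linear power model.
# IDLE_WATTS and MAX_WATTS are hypothetical figures chosen to reflect the
# article's claim that idle draw dominates total draw.

IDLE_WATTS = 200.0   # power drawn just keeping CPU, fans, and disks running
MAX_WATTS = 250.0    # power drawn at full load

def watts_at(utilization: float) -> float:
    """Linear interpolation between idle and full-load power draw."""
    return IDLE_WATTS + (MAX_WATTS - IDLE_WATTS) * utilization

def watts_per_unit_work(utilization: float) -> float:
    """Power divided by useful work done (work assumed proportional to utilization)."""
    return watts_at(utilization) / utilization

for u in (0.07, 0.90, 1.00):
    print(f"{u:>4.0%} utilization: {watts_at(u):5.1f} W total, "
          f"{watts_per_unit_work(u):7.1f} W per unit of work")
```

Under these assumed numbers, a server at 7% utilization burns over ten times as many watts per unit of work as one running at 90%, which is the efficiency gap between the typical PC server and the typical mainframe described here.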
That is not to say that only PCs have such utilization issues. All servers need air-conditioning and other energy-absorbing peripherals. The mainframe is not going to be different energy-wise in that regard. The bottom line is that, especially in the low-end PC server market, a lot of power is being consumed by machines that are sitting there idling most of the time.
Scaling up: Mainframes, high-end servers deliver higher efficiency
Average utilization for mainframes is somewhere around 90%. If you look at power per unit of work being done, you're more efficient on the mainframe. Any of the larger servers that are designed to run multiple jobs, whether it's a mainframe or a Unix box or other high-end server, are going to be more effective power-wise than what you're going to find on an Intel- or AMD-based server. That's because low-end machines aren't designed for the same kind of workloads; most of the time they're just sitting there using power and not being very useful.
I was talking to some people in the Intel server market who said that when you're down in the smaller servers, all the fans are lower-quality fans. They don't use ball-bearing fans. The power supplies aren't as efficient. The lower-end server market, the mass part of the market, is so price-sensitive that manufacturers can't afford to make those machines more efficient. It's understandable. There are potential power savings with scale-up.
Scale up only goes so far
When you're talking about the big high-end servers, mainframes, whatever, they're all trying to be efficient. You're not going to find a whole lot of difference between those machines. The real problem that we're all facing is that we need to find a way to use less power.
We're going to get killed by our top management because we're the big power hogs. The cost of energy is going up so dramatically, and when the bill shows up on the CFO's and CEO's desks they're going to tell us to use 20% less power -- but still do the same amount of computing. The other thing that is going to kill us is storage. We keep adding more and more storage, and running all of those disks and motors sucks up power.
So, how do we use less energy? Some things are obvious: use the most efficient power supplies you can because the conversion to DC power costs you energy; use the most efficient chips available; etc.
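The conversion loss mentioned above is easy to quantify. This is a rough sketch; the efficiency figures and the 300 W load are assumptions for illustration, not data from the article.

```python
# Rough sketch of AC-to-DC conversion loss: watts wasted as heat for a
# given power-supply efficiency. Efficiency values here are hypothetical.

def wasted_watts(dc_load_watts: float, efficiency: float) -> float:
    """AC input power minus DC output power, lost in conversion."""
    ac_input = dc_load_watts / efficiency
    return ac_input - dc_load_watts

for eff in (0.70, 0.80, 0.90):
    print(f"{eff:.0%} efficient PSU at 300 W load wastes "
          f"{wasted_watts(300, eff):6.1f} W as heat")
```

Note the double penalty: every watt wasted in the power supply is also a watt of heat the air-conditioning has to remove, which is why efficient supplies are one of the obvious wins.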
SPEC has created a committee to develop a power rating benchmark. You'll be able to compare machines' power consumption. Based on the workloads you'll be running, you can determine how much power each system will use. Then you can weigh purchase price against operating costs. I think that people will start paying attention to things like that and possibly reverse the habit of buying cheaper servers.
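The purchase-price-versus-operating-cost comparison can be sketched in a few lines. Every figure below is hypothetical: the prices, wattages, service life, and energy rate are stand-ins for the numbers a power benchmark and your utility bill would supply.

```python
# Illustrative sketch of total cost of ownership: purchase price plus
# energy cost over a server's service life. All figures are hypothetical.

HOURS_PER_YEAR = 24 * 365
ENERGY_PRICE = 0.10  # assumed $/kWh

def lifetime_cost(purchase_price: float, avg_watts: float, years: int) -> float:
    """Purchase price plus cumulative energy cost over the service life."""
    kwh = avg_watts / 1000 * HOURS_PER_YEAR * years
    return purchase_price + kwh * ENERGY_PRICE

cheap = lifetime_cost(purchase_price=1500, avg_watts=300, years=4)
efficient = lifetime_cost(purchase_price=1800, avg_watts=180, years=4)
print(f"cheap server:     ${cheap:,.2f}")
print(f"efficient server: ${efficient:,.2f}")
```

With these assumed numbers the pricier, more efficient box is cheaper over four years, which is exactly the kind of conclusion a standardized power rating would let buyers reach before the bill lands on the CFO's desk.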
The future of data center power efficiency
You have to look at the total picture. For example, using alcohol fuel in cars is compelling because it's renewable, but it takes energy, water and fertilizer to grow the corn and convert it to a usable fuel. Some of these things are energy-negative: it takes more energy to produce the fuel than the fuel can deliver. If I do a direct hydrogen conversion using current technology, I use more energy than just using the electricity in the first place.
I think that data centers in the future are going to use combinations of things. There might be breakthroughs in solar cells that enable data centers in sunny areas to operate efficiently. Flat, windy areas might find it feasible to invest in wind energy.
The future will be multiple sources, more energy-efficient computing and energy recovery. What do you do with the heat that data centers generate? Most places today use air-conditioning to remove it, so you're just wasting that heat energy. Years ago I toured a data center where they blew all that hot air out and used it to heat the rest of the building. It's a really sensible idea, but it raises other questions that businesses need to address. For example, how do you retrofit an existing data center to efficiently recover energy? How can we make this economical?
There are companies out there whose business is helping others become more energy efficient. One I spoke with offers an interesting business model. They monitor your company's energy usage and make changes to improve efficiency (e.g., replace incandescent lamps with fluorescents), all at no up-front cost. Then you pay them a percentage of your energy savings.
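The economics of that shared-savings model are straightforward to sketch. The bill amounts and the 50/50 split below are invented for illustration; the article doesn't disclose the firm's actual terms.

```python
# Hypothetical sketch of the shared-savings model: the efficiency firm is
# paid a cut of the bill reduction, so the customer pays nothing up front.

def shared_savings(old_annual_bill: float, new_annual_bill: float,
                   firm_share: float) -> tuple[float, float]:
    """Split the annual savings between the efficiency firm and the customer."""
    savings = old_annual_bill - new_annual_bill
    return savings * firm_share, savings * (1 - firm_share)

firm_cut, customer_keeps = shared_savings(120_000, 96_000, firm_share=0.5)
print(f"firm is paid ${firm_cut:,.0f}/yr; customer keeps ${customer_keeps:,.0f}/yr")
```

The design is self-aligning: the firm only gets paid if the bill actually goes down, which is what makes the no-up-front-cost pitch credible.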
Beyond energy efficiency in the data center
Another part of what data centers are facing is physical space. People are not only running out of space for their equipment, they're also running out of places to build new data centers. Some companies are buying the land now, even though there aren't immediate plans to build new data centers, because they're afraid that they won't be able to get it when they need it. They're also buying the land where power is less expensive.
What is driving the land rush? I think that it's a genuine concern for the ability to find the land to build the big data centers that they need. It's a resource that you can't get more of. You can buy new servers because manufacturers are going to keep making them. But nobody is making new land. I think that this is something we'll be hearing more about in the future.
ABOUT THE AUTHOR: Robert Rosen is the immediate past president of SHARE Inc. Currently, he serves as the CIO at the National Institute of Arthritis and Musculoskeletal and Skin Diseases of the National Institutes of Health, US Department of Health and Human Services.