The New York Times' recent investigative series, The Cloud Factories, took aim at data centers, painting the industry with a broad brush as wasteful and hazardous to the environment. Numerous members of the industry criticized the first article, claiming the Times and reporter James Glanz ignored the last several years of advances in energy efficiency.
SearchDataCenter's facilities experts weighed in on the first Cloud Factories article, Power, Pollution and the Internet.
Robert McFarlane: There is, unfortunately, far too much truth in this article. But it is also one of those pieces that, when read by someone intimately knowledgeable in the field, makes one wonder about articles in other technical fields that, although obviously well-researched, are nevertheless still written by reporters not really schooled in the subject. There are a lot of truths -- in fact mostly truths -- but also some half-truths and distortions, and some out-of-date illustrations that are, therefore, highly misleading.
But what is undeniably true is twofold:
- The majority of data centers, both old and new, continue to waste energy at a prodigious rate. Many, including the big players unfortunately maligned in this article, have made gigantic strides for both themselves and the industry, developing energy-efficient designs and operations that would have been considered impossible just a few years ago. But they are still the exception, although not quite so rare an exception as they once were.
- People are almost universally unaware of what it really costs to transmit, store and retrieve the incalculable amounts of data we all now create, distribute and replicate on a daily basis.
The power-waste problem has three major causes. One is what the author notes: processors running 24/7/365 while doing little to nothing. However, the writer fails to mention that most servers can run in energy-saving modes, which means the manufacturing industry really has tried to take steps to reduce the problem. But IT managers are afraid to invoke those modes for the very reason he does mention -- the fear that something won't respond as quickly as expected if it takes a few extra nanoseconds to come up to full power. The other two big causes are, frankly, unforgivable:
- Most companies are still willing to consider energy conservation only if there is no additional capital cost. Long-term savings, both of energy and cost, may be given lip service, but in the end they often become much less important. This is partly due to our being held hostage to quarterly results, and partly because energy conservation is still driven in this country almost entirely by dollars rather than environmental responsibility. A high percentage of data centers still don't even know what their power consumption or costs are.
- Too many engineering firms don't really understand energy-efficient data center design, which is different from the design of other energy-efficient buildings, particularly where mission-critical operations are involved. There is no excuse today for the power and cooling infrastructures of new data centers being as energy inefficient as many of them are, even if their owners are unwilling to pay for the even greater efficiencies that could be achieved with today's equipment and techniques.
Changing the perception that data transmission is "free" is a matter of public education, which articles like this may at least start to do with a small number of people. But so long as there is still resistance even to using energy-efficient light bulbs in our homes, we are unlikely to limit our indulgence in all the "fun stuff" to which we have become addicted.
Something else that is overstated in this article is the contribution of private generators to environmental destruction. If the writer had researched the EPA report on Data Center Energy Consumption (based on research by Lawrence Berkeley Labs, which he does cite), he would have found that the EPA strongly recommended local generation as a way to reduce our dependence on large power plants. Like the underused servers he dwells on, central power plants have to spin up huge generators just to handle small power increments, and then incur transmission losses over the routes needed to get the power to the users. Energy-efficient, low-emission local generation is very possible, and citing one well-known firm that failed to get proper permits before starting its generators is hardly a typical example.
What is also highly distorted is the way the amount of power used for actual computing is stated. Solid-state electronics convert all their electrical energy to heat, unlike rotating machines, which convert power to kinetic energy we can more easily see and appreciate. Mechanical systems are actually quite inefficient, which makes electronics more efficient, not less. To imply otherwise is not honest reporting.
It would have been more realistic to explain how the industry is using PUE (the ratio of total power used by the data center to the power used by computing equipment) as a means of tracking and improving energy efficiency, and to have noted both what the industry leaders have demonstrated is achievable and what the "average" data center is still running. That would have been a better illustration of the real problem: We are capable of achieving much higher levels of energy efficiency in our data centers than we actually do. There is also no mention of the enormous contributions of ASHRAE, The Green Grid, the Silicon Valley Leadership Group, 7x24 Exchange and others to the achievement of energy efficiency.
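For readers unfamiliar with the metric, the PUE ratio described above can be sketched in a few lines (the power figures below are hypothetical, purely for illustration):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    the power consumed by the computing equipment itself.
    A PUE of 1.0 would mean every watt goes to computing."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility drawing 1,000 kW total, of which the
# servers, storage and network gear consume 500 kW:
print(pue(1000.0, 500.0))  # → 2.0
```

A PUE of 2.0 means that for every watt of computing, another watt goes to cooling, power conversion and other overhead -- which is exactly the gap between "average" facilities and the industry leaders.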
In short, this article had the potential of raising awareness of a legitimate reality in our "wired and connected" world. But it missed that opportunity by instead trying to impugn an industry that, in reality, has been working much harder than most to improve itself while simultaneously keeping up with an astronomical level of public demand.
Robert McFarlane is a principal in charge of data center design for the international consulting firm Shen Milsom and Wilke LLC. McFarlane has spent more than 35 years in communications consulting, has experience in every segment of the data center industry and was a pioneer in developing the field of building cable design. McFarlane also teaches the data center facilities course in the Marist College Institute for Data Center Professional program, is a data center power and cooling expert, is widely published, speaks at many industry seminars and is a corresponding member of ASHRAE TC9.9, which publishes a wide range of industry guidelines.
Clive Longbottom: The article is a bit of a Fear, Uncertainty and Doubt piece, to my mind.
"The industry's dirty secret?" I've been writing and talking about it for more than 10 years now, as have many other commentators. Sure, IT has been trying to keep it quiet, as they know their organizations would not be very happy to find that 90% of the energy going in to a data center is just wasted. Without a good plan as to how to move this more to the below 50% level, only a brave IT director would say, "Hey, guess what? I'm in charge of a system that is less than 10% efficient, and I don't have a clue what to do about it."
But, this was all pre-virtualization and cloud. Now, virtualized systems can be easily run at greater than 50% utilization rates, and cloud systems at greater than 70%. Cooling systems no longer have to keep a data center at less than 68 degrees Fahrenheit, as data centers can now run at close to 86 degrees Fahrenheit with little problem.
UPS systems are far more efficient than they used to be, and can often remove the need for generators to fire up when a power outage lasts only a short time. All of these aspects mean that not only can the IT hardware be more effective, but the amount of energy used to support the data center can also be minimized. With a complete rework, a data center's PUE can move from greater than 2.5 to less than 1.5 -- a massive shift in the efficiency of the facility.
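To see why that PUE shift matters, here is a back-of-the-envelope sketch (the 400 kW IT load is a hypothetical figure): at a fixed IT load, total facility draw scales directly with PUE.

```python
def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw implied by a given PUE at a fixed IT load,
    since PUE = total facility power / IT power."""
    return it_load_kw * pue

it_load = 400.0  # hypothetical IT load in kW, held constant

before = facility_power_kw(it_load, 2.5)  # 1000.0 kW total draw
after = facility_power_kw(it_load, 1.5)   # 600.0 kW total draw

print(f"Overhead eliminated: {before - after:.0f} kW")  # → Overhead eliminated: 400 kW
```

For the same computing output, the reworked facility draws 40% less total power -- which is why Longbottom calls the 2.5-to-1.5 move "a massive shift."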
But there is a downside to this: Carrying out such a rework means throwing out everything that is already there. Few organizations would be happy to discard several million dollars of equipment and buy in several million dollars more. Therefore, the majority of data centers are in a state of change at the moment: Old hardware is still being run as new hardware is brought in, and the old equipment is scheduled for replacement as it reaches its effective end of life. Over a period of five to 10 years, then, we will see a major shift in the efficiency of private data centers as they are reworked to adopt newer technologies. New data centers will be built to LEED and ASHRAE standards, and few will have PUEs above 1.3.
Then there is the increasing move toward colocation -- sharing the data center facility and making the most of the economies and effectiveness of scale from the UPSes and generators in a massive facility -- even where the equipment installed for a single customer is relatively small. More colocation providers now insist on sitting down with customers to ensure their racks and rows are engineered to the latest standards, so that the overall effectiveness of the facility and the equipment it houses is optimized.
Then there is cloud: A cloud provider has to try to be efficient, or its business model will not work. It is simply smart business to build a data center with a low PUE, where the equipment runs at optimum utilization rates.
Finally, there are government mandates. Schemes like the Carbon Reduction Commitment Energy Efficiency Scheme in the United Kingdom mean many organizations now face large "taxes" on their carbon emissions, so a focus on the energy efficiency of all areas -- but in particular that accepted black hole, the data center -- is being seen across the organizations caught in the CRC trap.
So, to my mind, the piece is backward-looking and negative; had it looked to the future and at what is now being done, it could have been written from a far more positive point of view. As a comparison, if the piece had been written about the car industry, the reporter would have been grumbling about cars getting only 10 miles to the gallon and coughing out lead and particulate fumes that stunt growth and harm brain development. Few would level that criticism at the auto industry now. Instead, commentators look at how hybrid engines are extending fuel economy, how diesel particulate filters are improving air quality, and so on.
Clive Longbottom is the co-founder and service director at Quocirca and has been an ICT industry analyst for more than 15 years. Trained as a chemical engineer, he worked on anti-cancer drugs, car catalysts and fuel cells before moving into IT. He has worked on many office automation projects, as well as Control of Substances Hazardous to Health, document management and knowledge management projects.