In the current media cycle around the severe California drought, the majority of focus has been on residential and agricultural users. Industrial users such as data centers have largely been ignored, until now.
In recent decades, the American West has experienced a prolonged and systemic drought. Currently, over 70% of California is in "extreme" drought, with nearly half of the state in "exceptional" drought, according to the U.S. Drought Monitor.
Population growth and climate change will create additional global water demand. The problem of water scarcity is not going away.
A recent article from The Wall Street Journal called out the data center industry, claiming that a mid-size data center consumes 130 million gallons of water annually, or the equivalent of 100 acres of almond trees, three hospital buildings, or two 18-hole golf courses.
The Journal's concept of a "mid-sized" data center (15 MW) is ludicrously out of scale. According to the Uptime Institute's survey data, an average data center deployment is about 1 MW, and would consume approximately 7 to 8 million gallons of water annually. That's still six acres of almond trees, five holes of golf, or a little less than a third of a hospital.
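The scale mismatch is easy to check: dividing the Journal's 130 million gallon figure by its 15 MW assumption gives roughly 8.7 million gallons per MW per year, which is in the same range as Uptime's 7 to 8 million gallon estimate for a 1 MW deployment. A quick sketch of that arithmetic (illustrative only, using the figures as reported):

```python
# Scale the Journal's 15 MW / 130M gallon claim down to the ~1 MW
# deployment Uptime Institute describes as typical.
journal_mw = 15
journal_gallons = 130_000_000

gallons_per_mw = journal_gallons / journal_mw  # roughly 8.7M gallons/MW/year
print(f"Implied usage at 1 MW: {gallons_per_mw / 1e6:.1f} million gallons/year")
```

In other words, the dispute is about the size of a "typical" data center, not the per-megawatt water intensity.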
Regardless of the accuracy of the Journal's claims, the spotlight is on data center water usage. Let's examine how data centers consume water, the design choices that can limit water use, and the IT industry's awareness of and appetite to address this issue.
How do data centers use water?
The primary way data centers use water is for heat rejection (i.e., cooling IT equipment).
The traditional method of cooling a data center utilizes a water-cooled chilled water system. In these systems, cool water is distributed to the computer room cooling units. A fan blows across the chilled water coil, providing cool, conditioned air to the IT equipment. That water then flows back to the chiller and is re-cooled.
Water-cooled chiller systems rely on a cooling tower to reject heat from this system. A cooling tower is a large box-like unit that cools the warm water (or condenser water) from the chiller by pulling in ambient air from the sides and blowing hot, wet air out of the top of the unit by fan. The cooled condenser water then returns back to the chiller to again accept heat to be rejected.
These cooling towers are the main culprit for water consumption in a traditional data center design.
Let's assume a 1 MW data center pumps 1,000 gallons of condenser water per minute through a cooling tower. The cooling tower will lose between 1% and 2% of that water to evaporation and drift -- water that is blown away in a fine mist by the fan or wind.
That comes out to about 6.7 million gallons of water consumed annually.
An additional 1.3 million gallons of water per year are lost in blowdown, or the replacement cycle. As the condenser water is repeatedly evaporated and exposed to the atmosphere, it picks up minerals, dust and other contaminants. That water must be treated and/or discharged at regular intervals.
In total, a 1 MW data center using traditional cooling methods uses about 8 million gallons of water per year.
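The arithmetic behind those figures can be sketched as follows. Note the 1.3% loss rate is an illustrative value inside the article's stated 1% to 2% evaporation-and-drift range, chosen to land near the reported totals:

```python
# Rough model of annual cooling-tower water consumption for a 1 MW
# data center, using the article's figures.
flow_gpm = 1_000                  # condenser water flow, gallons per minute
minutes_per_year = 60 * 24 * 365  # 525,600 minutes

circulated = flow_gpm * minutes_per_year   # ~525.6M gallons circulated per year
evaporation_drift = circulated * 0.013     # ~1.3% lost to evaporation and drift
blowdown = 1_300_000                       # article's blowdown/replacement figure

total = evaporation_drift + blowdown
print(f"Total consumption: {total / 1e6:.1f} million gallons/year")
```

Running the numbers this way lands at roughly 8 million gallons per year, matching the article's total.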
Today, many data centers adopt new cooling methods that are more energy-efficient and use less water than traditional chiller and cooling tower combinations. These methods reduce annual water consumption by integrating evaporative cooling technologies and an economizer that utilizes outdoor air. In Uptime Institute's experience certifying data centers around the globe, about one-third of new builds use some form of cooling system that does not rely on the traditional chilled water and cooling tower combination.
There are some data centers that use direct air cooling. Just open the windows and let the atmosphere wash over all that sensitive IT equipment. Christian Belady, Microsoft general manager for data center services, proved it could be done, running servers for long periods in a tent. This unusual approach is limited by climate and, more importantly, by an organization's willingness to accept the risk of IT equipment failure due to fluctuating temperatures and airborne particulate contamination. The majority of organizations that use this method do so in combination with other cooling methods.
With direct evaporative cooling, outside air is blown across a water-saturated medium or through a fine mist and cooled by evaporation. This cooled air is circulated by a blower to cool the servers. This approach, while more common than direct outside air cooling, still poses a risk to the IT equipment from outside contaminants generated by events like forest fires, dust storms, agricultural activity or construction, which can impair server reliability. These contaminants can be filtered, but many organizations will not tolerate the contamination risk.
Some data centers use what's called indirect evaporative cooling. This process uses two air streams: one closed-loop air supply for IT equipment, and an outside air stream that cools the primary air supply. This outside (scavenger) air stream is cooled by direct evaporative cooling. The cooled secondary air stream goes through a heat exchanger, where it cools the primary air stream. The cooled primary air stream is circulated by a fan to the servers.
Additionally, there are systems that require no water -- dry coolers that use pumped refrigerant instead of water evaporation to cool the air supply. There are also air-cooled chilled water systems, which do not utilize evaporative cooling towers to reject heat.
Leading by example
Some prominent examples of data centers using alternative cooling methods include:
- Vantage Data Centers' site in Quincy, Wash. uses Munters Indirect Evaporative Cooling systems.
- Rackspace's data center in London and Digital Realty's Profile Park site in Dublin use roof-mounted indirect outside air technology coupled with evaporative cooling from ExCool.
- In a first phase, Facebook's Prineville, Ore. data center used direct evaporative cooling and humidification, with small nozzles attached to water pipes that sprayed a fine mist across the air pathway, cooling the air and adding humidity. In a second phase, it used a dampened media.
- Yahoo's Chicken Coop data center design in upstate New York uses direct outside air cooling when weather conditions allow.
- Metronode, a telecommunications company in Australia, uses direct air cooling (as well as direct evaporative and DX for backup).
Facebook reports that its Prineville cooling system uses 10% of the water of a traditional chiller and cooling tower system. ExCool's indirect evaporative cooling product marketing claims a 1 MW data center consumes roughly 260,000 gallons annually, about 3.3% of traditional data center water consumption. And the data centers using pumped refrigerant systems consume even less water.
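Put side by side, the claimed savings are dramatic. The comparison below uses the article's reported figures for a 1 MW data center; these are vendor and operator claims, not independently verified measurements:

```python
# Comparing claimed annual water use for a 1 MW data center across
# cooling approaches, using the figures as reported in the article.
traditional = 8_000_000                 # chiller + cooling tower, gallons/year
direct_evap = traditional * 0.10        # Facebook's claim: ~10% of traditional
excool_claim = 260_000                  # ExCool marketing claim, gallons/year

print(f"Direct evaporative: {direct_evap / 1e6:.1f}M gallons/year")
print(f"ExCool share of traditional: {excool_claim / traditional * 100:.2f}%")
```

The 260,000 gallon claim works out to roughly 3.3% of the traditional 8 million gallon baseline, which is where the article's percentage comes from.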
Where do we get data center water?
Municipal: The majority of data centers rely on water from a municipal source. Throughout the hundreds of data center certifications Uptime Institute has conducted, the vast majority use municipal water, which typically comes from reservoirs.
Groundwater: Groundwater is precipitation that seeps down through the soil and is stored below ground. Many data center operators drill wells on their site to access this water. Worldwide, groundwater tables are falling. The United States Geological Survey has published a resource to track groundwater depletion.
Rainwater: Rainfall provides an unreliable, variable water source for data center usage. Some data centers collect rainwater and use it as a secondary or supplemental water supply.
Body of water: A handful of data centers around the world access water directly from lakes, rivers or the ocean. In these cases, a data center operator pumps the source water through a heat exchanger. A data center may also use a body of water as an emergency water source for cooling towers or evaporative cooling systems.
All of these water sources are interdependent. Water from any source could prove difficult to access during a sustained regional drought.
Why not all data centers?
As these new cooling designs can provide significant energy and water usage reductions, why wouldn't every data center use them?
These cooling systems carry a 50% to 100% cost premium over traditional cooling. For an in-depth financial analysis, read Compass Datacenters' study on the potential negative return on investment for an adiabatic (i.e., evaporative) cooling system. Fundamentally, we currently operate in an era of cheap power and water. Someday the price for our resource consumption will come due, but until that time, the economics are such that the ROI on these more expensive systems can take years to achieve, if ever.
These systems also tend to take up a significant amount of space. For many data centers, water-cooled chiller plants make more sense because an owner can pack in capacity in a relatively small footprint without modifying building exteriors.
There are also implications for data center owners who want to achieve Uptime Institute's Tier Certification. Achieving Tier III Constructed Facility Certification requires the isolation of each and every component of the cooling system without impact to design day cooling temperature. This means an owner needs to be able to tolerate the shutdown of cooling units, control systems, makeup water tanks and distribution, and heat exchangers. Tier IV Fault Tolerance requires the system to sustain any single, consequential event without impact to the critical environment. While many data centers using the new cooling designs have been certified to Uptime Tiers, these designs do add a level of complexity to the process.
Organizations also need to factor temperature considerations into their decision. If you're not prepared to run your server inlet air temperature at 22 degrees Celsius (72 degrees Fahrenheit), there is not much payback on the extra investment. Companies also need to start with good computer room management, including optimized airflow for efficient cooling and potentially containment, which can drive up costs. Additionally, in hot and humid climates, some of these cooling systems just won't work.
Also, as with any newer technology, alternative cooling systems present operations challenges. Organizations will likely need to implement new training to operate and maintain unfamiliar equipment configurations. Companies will need to conduct particularly thorough due diligence on new, proprietary vendors entering the mission critical data center space for the first time. Caveat emptor.
And lastly, there is significant apathy about water conservation across the data center industry as a whole. Uptime Institute survey data shows that less than one third of data center operators track water usage or use the Green Grid's Water Usage Effectiveness metric. And according to Uptime Institute's 2015 Data Center Industry Survey, in a question asking data center operators about the most important metrics, water usage ranked near the bottom of priorities. The only thing data center managers said they care about less than water is carbon dioxide emissions.
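For operators who do want to track water, the Green Grid's Water Usage Effectiveness (WUE) metric mentioned above is annual site water usage in liters divided by annual IT equipment energy in kilowatt-hours. A minimal sketch using the article's 1 MW / 8 million gallon example, under the simplifying assumption that the IT load runs at its full 1 MW continuously all year:

```python
# Illustrative WUE calculation (liters of water per kWh of IT energy).
# Assumes a constant 1 MW IT load year-round, which overstates the energy
# of most real deployments and therefore understates WUE.
LITERS_PER_GALLON = 3.785

annual_water_liters = 8_000_000 * LITERS_PER_GALLON  # ~30.3M liters
it_energy_kwh = 1_000 * 24 * 365                     # 1 MW * 8,760 h = 8.76M kWh

wue = annual_water_liters / it_energy_kwh
print(f"WUE: {wue:.2f} L/kWh")
```

Under those assumptions the traditional 1 MW example lands at roughly 3.5 L/kWh; the alternative cooling designs described above would score far lower on the same metric.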
But the volumes of water and power used by data centers make them an easy target for public finger wagging. While there are a lot of good reasons to choose traditional chilled water systems, especially when dealing with existing buildings, for new data center builds, owners should evaluate alternative cooling designs against overall business requirements, which might include sustainability factors.
Uptime Institute has invested decades of research toward reducing data center resource consumption. The water topic, while currently an acute issue in California, needs to be assessed within the larger context of a holistic approach to efficient IT. With this framework, data center operators can learn how to better justify and explain business requirements, and demonstrate that they can be responsible stewards of our environment and corporate resources.
Keith Klesner, Ryan Orr and Matt Stansberry work for Uptime Institute, The Global Data Center Authority. Stansberry is the director of content and publication, Klesner the vice president of strategic accounts, and Orr is a senior consultant with Uptime Institute.