Data centers continue to pack more computing power into smaller spaces to consolidate workloads and accommodate processing-intensive applications, such as AI and advanced analytics. As a result, each rack consumes more energy and generates more heat, putting greater pressure on cooling systems to ensure safe and efficient operations.
In the past, data centers could rely on air cooling to maintain safe operating temperatures. But keeping up with the greater densities presents a significant challenge for air cooling, causing many organizations to look into liquid cooling.
There are many factors to consider when debating liquid cooling vs. air cooling. This article describes these two main types of data center cooling methodologies and compares their benefits, drawbacks and costs.
What is air cooling?
Data centers have been using air cooling since their inception and continue to use it extensively. Although technologies have evolved over the years, with cooling systems growing ever more efficient, the basic concept has remained the same. Cold air is blown across or circulated around the hardware, dissipating the heat by exchanging warmer air with cooler air.
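To see why rising rack densities strain this approach, consider the standard sensible-heat approximation for air at sea level: CFM ≈ 3.16 × watts / ΔT(°F). The sketch below applies it to a few rack power levels; the 20°F intake-to-exhaust temperature rise is an assumed figure for illustration only.

```python
# Back-of-the-envelope: airflow needed to remove a rack's heat load.
# Uses the standard sensible-heat relation for air at sea level:
#   CFM ~= 3.16 * watts / delta_T_F
def required_airflow_cfm(watts, delta_t_f=20):
    """Cubic feet per minute of air needed to absorb `watts` of heat
    with a `delta_t_f` (degrees F) rise between intake and exhaust."""
    return 3.16 * watts / delta_t_f

for kw in (5, 15, 50):
    cfm = required_airflow_cfm(kw * 1000)
    print(f"{kw:>3} kW rack -> ~{cfm:,.0f} CFM")
```

The airflow requirement scales linearly with power, so a 50 kW AI rack needs ten times the air of a 5 kW rack through the same footprint, which is where fans, raised floors and containment start to hit practical limits.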
The main differences between air cooling systems lie in how they control airflow. The systems are generally categorized into three types: room-, row- and rack-based.
Room-based systems come in several varieties. The air might be circulated around the entire equipment room, or floors might be raised near the equipment and air pushed up through vented tiles. More recently, room-based systems have incorporated hot and cold aisles to better control airflow and target the equipment, and newer systems add containment to direct airflow with even greater accuracy.
With a row-based approach, each row contains dedicated cooling units that target the airflow at specific equipment. This approach improves cooling efficiency and reduces the amount of fan power required to direct airflow.
A rack-based system takes this a step further by dedicating cooling units to specific racks, achieving even greater precision and efficiency than the other approaches. However, this system requires more devices and creates more complexity.
Pros of air cooling
Over the years, air cooling has proven to be an invaluable tool for protecting data center equipment. The technologies behind it are well understood and widely deployed. Data center personnel are familiar with air cooling and what it takes to keep it running. Maintaining these systems is a straightforward process with plenty of industry experience behind it.
Cons of air cooling
Unfortunately, air cooling also presents several challenges. At the top of the list is its inability to meet modern workload demands: air cooling simply cannot keep up with increased densities and heavy processing loads, and at some point the outlay required to scale it can no longer be justified. Air cooling already represents a significant percentage of data center Opex, and rising energy costs only exacerbate the issue.
Water restrictions and costs can also present a challenge for air cooling systems that rely on evaporative cooling or cooling towers. In addition, higher computing densities translate to more cooling fans and pumps, making data centers so noisy that personnel must wear protective hearing devices.
The underlying problem is that air is not an effective heat transfer medium, despite its widespread use, and a better cooling solution is needed to meet today's data center demands.
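That claim can be quantified with a rough comparison of volumetric heat capacity, i.e., how much heat a cubic meter of each medium absorbs per degree of temperature rise. The sketch below uses approximate room-temperature, sea-level properties; exact figures vary with temperature and pressure.

```python
# Rough comparison of air vs. water as heat transfer media.
# Property values are approximate, at room temperature and sea level.
AIR_DENSITY = 1.2          # kg/m^3
AIR_SPECIFIC_HEAT = 1005   # J/(kg*K)
WATER_DENSITY = 998        # kg/m^3
WATER_SPECIFIC_HEAT = 4186 # J/(kg*K)

def volumetric_heat_capacity(density, specific_heat):
    """Heat absorbed per cubic meter per degree of temperature rise, in J/(m^3*K)."""
    return density * specific_heat

air = volumetric_heat_capacity(AIR_DENSITY, AIR_SPECIFIC_HEAT)
water = volumetric_heat_capacity(WATER_DENSITY, WATER_SPECIFIC_HEAT)
print(f"Water carries roughly {water / air:,.0f}x more heat per unit volume than air")
```

The result, on the order of 3,500 to 1, is why liquid systems can remove the same heat load with far less pumped volume than air systems need in moved air.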
What is liquid cooling?
Data centers are beginning to adopt liquid cooling for more than just mainframes and supercomputers. Water and other liquids are far more efficient at transferring heat than air and can help address some of the challenges that come with air cooling systems, especially as computing densities increase.
One liquid cooling technology that's gaining traction is direct-to-chip cooling. In this configuration, a cold plate sits directly against a component -- such as a CPU, GPU or memory module. Small tubes connected to the plate deliver cool water and carry away warm water. The warm water is then cooled and circulated back to the plate.
A similar concept can be applied at the rack or server level. Water or another type of coolant is circulated through a closed-loop system to carry out the heat exchange. Although the exact process varies from one solution to the next, they typically use a contained coolant, an exchanger to dissipate the heat and a mechanism to lower the coolant temperature as it circulates. For example, an exchanger could be mounted on the back of the rack with fans on the opposite side to circulate the air and dissipate heat. Another system might pipe the coolant underground to provide geothermal cooling.
A newer technology making headway is immersion cooling. In this approach, all internal server components are submerged in a nonconductive dielectric fluid. The components and fluid are then encased in a sealed container to prevent leakage. In this way, the heat from the components is transferred to the coolant, which is circulated and cooled to continuously dissipate the heat.
Pros of liquid cooling
Because liquid cooling can conduct heat better than air, it can handle a data center's growing densities more effectively, helping to accommodate compute-intensive applications. In addition, liquid cooling significantly reduces energy consumption, and it uses less water than many air cooling systems, which can lead to lower operating expenses. Liquid cooling also takes up less space and produces less noise.
Cons of liquid cooling
Despite these advantages, liquid cooling has its downsides. In addition to a higher Capex, it requires IT and data center administrators to learn new skills and adopt a new management framework, which can represent a significant undertaking.
It might also mean bringing in new personnel or consultants, eroding liquid cooling's Opex advantage. In addition, the liquid cooling market is still maturing, with a wide range of competing technologies, resulting in proprietary products and the risk of vendor lock-in.
Factors to consider when choosing air cooling vs. liquid cooling
Organizations setting up new data centers or updating existing ones might be evaluating whether it's a good time to implement liquid cooling or to stick with tried-and-true air cooling. If so, they need to factor in several important considerations.
Cost will undoubtedly be one of the deciding factors, but arriving at a true TCO can be a complex process. Although liquid cooling comes with a higher Capex, its greater efficiency can translate to lower Opex, especially as densities grow. In addition, liquid cooling uses less power and water, which can be especially important in areas where water is in short supply. On the other hand, the risk of vendor lock-in could impact long-term TCO.
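As a rough illustration of the Capex-vs.-Opex trade-off, a simple model can compare upfront cost plus cumulative operating cost over an evaluation period. Every figure below is a hypothetical assumption for illustration, not vendor data; a real TCO analysis would also weigh space, maintenance contracts and lock-in risk.

```python
# Hypothetical, simplified TCO sketch. All inputs are illustrative
# assumptions, not real vendor or utility figures.
def simple_tco(capex, annual_energy_kwh, price_per_kwh, annual_maintenance, years):
    """Upfront capital cost plus cumulative annual operating cost."""
    annual_opex = annual_energy_kwh * price_per_kwh + annual_maintenance
    return capex + annual_opex * years

# Air cooling: lower Capex, higher energy use.
air = simple_tco(capex=500_000, annual_energy_kwh=1_500_000,
                 price_per_kwh=0.12, annual_maintenance=40_000, years=7)
# Liquid cooling: higher Capex, lower energy use.
liquid = simple_tco(capex=900_000, annual_energy_kwh=750_000,
                    price_per_kwh=0.12, annual_maintenance=60_000, years=7)

print(f"Air over 7 years:    ${air:,.0f}")     # -> $2,040,000
print(f"Liquid over 7 years: ${liquid:,.0f}")  # -> $1,950,000
```

With these assumed numbers, liquid cooling's lower annual Opex overtakes its higher Capex within the seven-year window; with a shorter horizon or cheaper power, air cooling would come out ahead, which is exactly why the breakeven point needs to be modeled per facility.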
The computing equipment itself should also be considered. Liquid cooling makes it possible to support greater computing densities while reducing the data center footprint, leading to better space utilization and lower costs. Support for greater densities can benefit an organization that's been unable to implement processing-intensive workloads because of air cooling's limitations, and supporting those workloads could translate into additional savings.
Ease of installation and maintenance
Another important consideration is what it will take to deploy and maintain a cooling system. With air cooling, operating the equipment and swapping out components is generally straightforward. That's not to say air cooling doesn't present its own challenges, such as ongoing water treatment or mechanical maintenance, but it's a known entity with a long history to back it up.
Liquid cooling represents a new mindset and a new way of working. IT and data center teams will have a steep learning curve and, in some cases, might be dependent on a vendor for routine maintenance. For example, what if IT needs to replace the memory board in a server that uses immersion cooling? When analyzing costs, organizations must evaluate all the implications of deploying and maintaining a cooling system.
Some organizations don't support the type of advanced workloads that require high processing density, so a switch to liquid cooling might not be warranted. That said, densities are only likely to grow in the coming years as data centers scramble to better utilize floor space and IT consolidates workloads to improve efficiency. At some point, liquid cooling might become the only viable option, but that doesn't mean organizations have to rush into it.
Other considerations can also play a role when deciding on liquid cooling vs. air cooling. For example, an organization might be moving toward greener data center practices and might want to embrace technologies such as liquid cooling, which uses fewer resources and is much quieter. Location can also be a factor. A data center near the Arctic can use the plentiful cold air while one near factories or other harsh settings might have difficulty maintaining air cooling systems. A data center in a crowded urban setting might need to increase computing density to maximize floor space. Local regulations, tax advantages or similar issues can also play a role.
The role of technology maturity in cooling selection
One of the biggest challenges with liquid cooling is that it's a nascent industry when it comes to anything other than mainframes and supercomputers. It's therefore hard to gauge which technologies will emerge as leaders, how the technologies might be standardized or what to expect four or five years from now.
With air cooling, organizations know what they're getting into, but its long-term practicality might be limited. Organizations that don't need to rush into a decision might want to give liquid cooling more time to mature. Those already feeling the crunch might consider a phased approach toward liquid cooling.