What will the next big data center transformation look like?

No one can see the future, but take a look at what we might see in an upcoming data center transformation.

It’s more science and less fiction: the next big data center transformation will see adoption and refinement of many of the technologies that have already demonstrated value in the enterprise. 

Sara stopped for a moment to sip her morning coffee and watch the live feed of a stunning sunrise over the Sierra Nevada mountain range displayed across the viewing wall in her office. It was a welcome distraction from her 12-hour shift in corporate’s IT management bunker deep below New Mexico’s desert. The moment’s peace was broken by a tweeting alarm as her virtual assistant flickered to life.

“I’m sorry to interrupt you, Sara,” the hologram said.

“What is it, Ivan?” Sara snapped, dabbing spilled coffee from her blouse. “Is that router backbone in Newark saturated again?”

“No, Sara. There is a critical power alarm at the Reykjavík facility.” The hologram gestured toward the viewing wall, opening a real-time diagram of the power grid and another detailed utility map of Iceland’s capital city. “Atlantic storms have caused a catastrophic fault in local power distribution. The city’s utility, Orkuveita Reykjavikur, reports that repairs may take up to 24 hours.”

Sara looked at the diagrams and grimaced. “What’s the workload status?”

“All 78,135 workloads were automatically migrated to other regional facilities in Edinburgh and Copenhagen. No data loss at this time. However, network load patterns are high for this time of day.” The hologram paused, processing possible alternatives. “I recommend switching to local cogeneration.”

“How long will it take the Tokamak to fire up?” Sara asked, recalling her short course on fusion physics.

“It will take 30 minutes to bring the fusion unit online and restore the workloads to Reykjavík.”

“Take care of it, Ivan,” Sara said. “Let me know when Reykjavík is back up, and give us hourly status updates until main utility power is restored.”

“Thank you, Sara,” the hologram said, flickering out of sight.

Sara sat back, took another sip of coffee and called up the company’s global IT facility reports on the viewing wall. “What a way to start a Monday.”

Absolute Power

It’s difficult to predict the next data center transformation. Still, technology has moved much faster than anyone could have imagined, and advances promise to continue over the next decade. Data centers will depend on these improvements for adequate power, efficient cooling and better approaches to facility design. Let’s consider the steady, though hardly revolutionary, refinements that industry observers expect in power, cooling and facilities.

Today’s data centers demand more power than ever. Even when companies shift some computing needs to outsourcing or cloud providers, IT continues to proliferate business workloads as well as mobile and online services, and there’s no end in sight.

The good news is that data center power demands are moderating. In 2007, Stanford professor Jonathan Koomey predicted that data center power consumption would increase 100% from 2005 to 2010. In reality, the increase was only about 36%, attributable primarily to economic conditions and the broad adoption of server virtualization.
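As a quick arithmetic aside, the gap between forecast and reality is even starker when expressed as compound annual growth. This is only a sketch; the annualized figures below are derived from the percentages above, not taken from Koomey’s report.

```python
# Back-of-the-envelope comparison of the forecast (100% growth, 2005-2010)
# with the observed increase of roughly 36% over the same period.
# The annualized rates below are derived arithmetic, not figures from the report.

def annualized_growth(total_growth: float, years: int) -> float:
    """Convert total growth over a period into a compound annual rate."""
    return (1.0 + total_growth) ** (1.0 / years) - 1.0

forecast = annualized_growth(1.00, 5)   # doubling over five years
observed = annualized_growth(0.36, 5)   # ~36% over five years

print(f"Forecast: {forecast:.1%} per year")   # ~14.9% per year
print(f"Observed: {observed:.1%} per year")   # ~6.3% per year
```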

Servers lighten their load. Server designs are increasingly energy-efficient and able to operate reliably at higher data center temperatures. It’s not just a matter of higher utilization. Servers are now more energy-aware, able to mitigate power use when workloads are idle. Previous generations of servers may have used 400 watts but still consumed 60% to 70% of that power even without performing any useful work, according to John Stanley, senior analyst for data center technologies at the 451 Group. “We’ve improved energy efficiency a lot, and newer generations of 1U commodity servers are improving,” he said. “Now an idle server might only use 25% to 50% of [its] total power.” Next-generation servers will use only a fraction of their total power when idle and actively power off when unneeded.
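To put those idle-power percentages in perspective, here is a rough sketch of the annual cost difference. The $0.10/kWh rate and the assumption that both machines carry a 400-watt nameplate are illustrative, not figures from Stanley or the 451 Group.

```python
# Rough annual idle-energy comparison using the percentages quoted above.
# The $0.10/kWh rate and the shared 400 W nameplate are illustrative assumptions.

NAMEPLATE_W = 400
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10

def idle_cost(idle_fraction: float) -> float:
    """Annual cost of a server that sits idle all year at the given fraction
    of its nameplate power."""
    kwh = NAMEPLATE_W * idle_fraction * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

legacy = idle_cost(0.65)   # older server idling at ~65% of nameplate
modern = idle_cost(0.35)   # newer server idling at ~35% of nameplate

print(f"Legacy idle cost: ${legacy:,.0f}/yr")   # ~$228/yr
print(f"Modern idle cost: ${modern:,.0f}/yr")   # ~$123/yr
```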

Servers are also evolving to be purpose-built rather than general-purpose, which enables the most energy-efficient utilization, said Pete Sclafani, CIO at 6connect, a provider of network automation services in San Francisco, Calif. “SeaMicro, AMD and Intel are introducing ARM processors and tailoring servers for specific purposes rather than throwing a Xeon at every computing problem,” he said. Chassis changes will further differentiate servers for specific purposes, such as using servers with lots of DRAM slots for high levels of virtualization.

Virtualization and server designs will also have a profound influence on data center cooling needs. With fewer servers overall, each purpose-built and able to tolerate higher environmental temperatures, there is simply less heat to remove, and this reduces the amount of power needed to drive the cooling systems (whether mechanical HVAC, chiller pumps, heat exchanger fans or other technologies).

Power distribution gets more efficient. Reducing losses within a facility’s power distribution network is another potential improvement. Utility power enters a data center at high voltages that are stepped down to much lower voltages before being distributed to server racks and backup power systems. Each time utility voltage is converted, there are inevitable energy losses. “Some folks will run 480 volts AC all the way to the racks,” Stanley said, versus 240 or even 120 volts AC. “There is less conversion and less wiring.” Large data center operators such as Google have even evaluated the distribution of DC power directly to servers and other equipment, which would eliminate conversions, and this may also become more commonplace over the next decade to optimize energy use.
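A simple sketch shows why skipping a conversion stage matters. The per-stage efficiencies below are assumed values for illustration, not measurements from any particular facility.

```python
# Illustration of how power-conversion stages compound losses on the way to
# the rack. The per-stage efficiencies are assumed values for illustration.
from functools import reduce

def end_to_end_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies to get the overall delivery efficiency."""
    return reduce(lambda a, b: a * b, stage_efficiencies, 1.0)

# A conventional chain: transformer -> double-conversion UPS -> PDU transformer
conventional = end_to_end_efficiency([0.98, 0.94, 0.97])
# A simplified chain that carries higher voltage to the rack and skips a stage
simplified = end_to_end_efficiency([0.98, 0.96])

print(f"Conventional chain: {conventional:.1%}")  # ~89.4%
print(f"Simplified chain:   {simplified:.1%}")    # ~94.1%
```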

Power costs, availability and alternatives grow. Although Koomey’s report says that data centers currently demand only about 2% of total global energy, the cost and availability of power is a growing concern as the power grid continues to age and governments implement more aggressive carbon emission standards. “Power can be an issue in regional markets like Manhattan or San Francisco,” Stanley said. “Bringing an extra 10 megawatts into New York may be a problem. Businesses will ... deploy a larger number of smaller data centers in second-tier locations with inexpensive power and good connectivity.”

Experts say that cogeneration—generating electricity on-site to replace or supplement utility power—with existing technologies won’t address all data center power problems. For example, solar panels are inefficient, wind is unpredictable and inconsistent, fuel cell hydrogen takes a great deal of energy to produce in the first place, and we’re nowhere near practical fusion reactors. Rather than invest in cogeneration sources on-site, data centers of the future will likely supplement power needs (or offset major utility prices) by contracting with an alternative energy provider such as a local wind or solar farm.

Figure 2. Kyoto wheel cooling. (Source: Chatsworth Products)

“Look for more complex mixtures of power and how it’s delivered,” said Sclafani. He suggested that some data centers may adopt the Kyoto wheel (see Figure 2) for short-term power ride-through (the ability for equipment to continue operating throughout a utility outage), perhaps supported by a natural gas power plant for long-term power production and further supplemented by solar panels on the roof.

Power efficiency metrics should matter. The greatest problem for future data centers is adopting meaningful metrics to gauge energy efficiency. Current metrics like power usage effectiveness (PUE), carbon usage effectiveness (CUE) and water usage effectiveness (WUE) are well established, but none measures the efficiency of IT.

Experts agree that the ultimate metric would be an objective measurement of “useful IT work per watt.” The problem is defining what constitutes useful IT work. The “useful IT work” performed by a scientific computing community differs from that performed by Web providers, financial services companies and so on. This concept requires a move from comparative metrics to a focus on internal metrics: defining what works for a specific organization (or application) and basing the metric on that need.
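A minimal sketch of the distinction: PUE compares facilities against one another, while a “useful work per watt” figure only makes sense once an organization picks its own unit of work. The transaction counts and power figures below are hypothetical.

```python
# Minimal sketch of the two kinds of metric discussed above. PUE is the
# standard comparative metric; "useful work per watt" requires each
# organization to define its own unit of work (here, completed transactions,
# which is a hypothetical stand-in).

def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

def useful_work_per_watt(work_units: float, it_kw: float) -> float:
    """Organization-defined work units delivered per watt of IT power."""
    return work_units / (it_kw * 1000)

facility_kw = 1800.0
it_kw = 1200.0
transactions_per_hour = 5_400_000  # hypothetical workload figure

print(f"PUE: {pue(facility_kw, it_kw):.2f}")  # 1.50
print(f"Work/W: {useful_work_per_watt(transactions_per_hour, it_kw):.1f} tx/h per watt")
```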

Keeping Our Cool

Part of the energy used to power servers and other equipment is shed as heat rather than computing work, so data centers face the problem of eliminating this heat before it damages equipment.

While this reality of data center operation won’t change, the cooling demands of computing equipment, and the technologies used to achieve that cooling, have changed radically. Experts don’t predict revolutionary new cooling technologies in the years ahead, but rather continued refinement and broad adoption of alternative cooling systems.

Blub, blub, blub... Immersion-cooled servers

Science fiction has long envisioned the massive computing systems of future spacecraft totally submerged in a bath of cold liquid. It’s easy to understand why: liquids conduct heat much more efficiently than air, a principle of physics that has been understood for a long time. However, computer designers have deliberately stayed away from liquid cooling because common cooling media are either electrically conductive or caustic, two characteristics that are absolutely detrimental to electronics.

But the fictional vision of submerged computer hardware may become a reality sooner than we imagined. In September 2012, Intel completed a yearlong study of server cooling using simple mineral oil, which conducts heat far better than air, yet is not electrically conductive and does not corrode or damage the electronic components submerged within it.

Because a bath of mineral oil holds a great deal of thermal mass, it also provides protection against short-term loss of cooling power. Imagine a swimming pool: it takes a long time for the mass of pool water to warm up or cool down. If air conditioning or cooling fans fail, air-cooled hardware can suffer the adverse effects of skyrocketing temperatures in a matter of minutes. But if the server is submerged in a bath of cold liquid, the liquid will continue to absorb heat for a considerably longer time in the event that the bath’s refrigeration unit fails. This kind of ride-through behavior alleviates much of the worry that comes with traditional mechanical cooling disruptions.
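A rough physics sketch of that ride-through effect, using typical textbook values for mineral oil; the bath size, heat load and allowable temperature rise are illustrative assumptions.

```python
# Rough estimate of cooling "ride-through" for an immersion bath: how long a
# mass of mineral oil can absorb a rack's heat before warming by a set margin.
# Bath volume, heat load and allowable rise are illustrative assumptions;
# the oil properties are typical textbook values.

OIL_DENSITY_KG_PER_L = 0.85
OIL_SPECIFIC_HEAT_J_PER_KG_K = 1900  # ~1.9 kJ/(kg*K) for mineral oil

def ride_through_minutes(bath_liters: float, heat_load_kw: float,
                         allowable_rise_c: float) -> float:
    """Minutes until the bath warms by allowable_rise_c at the given heat load."""
    mass_kg = bath_liters * OIL_DENSITY_KG_PER_L
    energy_j = mass_kg * OIL_SPECIFIC_HEAT_J_PER_KG_K * allowable_rise_c
    return energy_j / (heat_load_kw * 1000) / 60

# A 1,000-liter bath absorbing a 30 kW rack, allowed to warm by 15 degrees C
print(f"{ride_through_minutes(1000, 30, 15):.0f} minutes of ride-through")  # ~13
```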

And immersion cooling can be extremely effective. According to Green Revolution Cooling, which developed the immersion baths Intel used, immersion cooling can handle heat loads of up to 100 kW in a 42U (7-foot) rack, far more than current air-cooled racks, which typically top out at 12 to 30 kW.

But there is also a downside: liquid coolant is just plain messy. Imagine having to remove a server from a liquid bath to troubleshoot or upgrade it. Technicians will need some type of protective garb and an entirely new protocol for working on immersion-cooled equipment. Spills from large baths can also create serious safety and cleanup problems.

A mix of cooling technologies emerges. Future data centers cannot rely exclusively on mechanical refrigeration, but it’s unlikely that computer room air conditioning (CRAC) units will disappear in 10 years’ time. Stanley expects future data center designs to feature a mix of traditional and alternative cooling technologies. “Free cooling is not an all-or-nothing strategy,” he said. “I’ve seen some data centers use free cooling, evaporative cooling and a small chiller for the last few degrees.”

More appropriate data center locations needed. Environmental cooling alternatives can be adopted in almost any location, but to reduce the use of mechanical refrigeration, next-generation data center sites must be selected to accommodate environmental cooling. For example, it’s possible to build a data center in the desert, but chances are that air-side economizers will run only during evening hours—leaving energy-guzzling CRAC units to run during the day. In the coming decade, businesses will opt to build next-generation data centers in cooler climates, near sources of cold water, beneath suitable geologies or where other environmental cooling features are accessible.
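One way to quantify the siting argument is to count the hours per year that outdoor air is cool enough for an economizer to carry the load alone. The sketch below uses synthetic temperature profiles and an assumed 24-degree intake limit purely for illustration.

```python
# Sketch of the siting trade-off: count the hours per year an air-side
# economizer could handle cooling on its own, given hourly outdoor dry-bulb
# temperatures. The 24 C threshold and the synthetic temperature data are
# illustrative assumptions.
import math

def free_cooling_hours(hourly_temps_c, max_intake_c=24.0):
    """Hours in the dataset cool enough for 100% outside-air cooling."""
    return sum(1 for t in hourly_temps_c if t <= max_intake_c)

def synthetic_year(mean_c, daily_swing_c, seasonal_swing_c):
    """Crude sinusoidal daily/seasonal temperature profile for a hypothetical site."""
    return [mean_c
            + seasonal_swing_c * math.sin(2 * math.pi * h / 8760)
            + daily_swing_c * math.sin(2 * math.pi * h / 24)
            for h in range(8760)]

desert = synthetic_year(mean_c=25, daily_swing_c=10, seasonal_swing_c=10)
coastal = synthetic_year(mean_c=12, daily_swing_c=4, seasonal_swing_c=8)

print(f"Desert site:  {free_cooling_hours(desert):>5} free-cooling hours/yr")
print(f"Coastal site: {free_cooling_hours(coastal):>5} free-cooling hours/yr")
```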

Operating temperatures rise. It’s simple logic: You can reduce cooling needs with equipment that doesn’t need as much cooling in the first place. In 2011 the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) updated its 2004 document “Thermal Guidelines for Data Processing Environments” and outlined two new classes of data center equipment capable of sustained operation at elevated temperature and humidity levels. For example, ASHRAE Class A4—the current highest class—computing equipment will operate from 5 degrees to 45 degrees Celsius at 8% to 90% relative humidity.
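The A4 allowable range quoted above translates directly into a simple inlet-condition check. This sketch covers only the temperature and humidity bounds; the full ASHRAE guidance also limits dew point and the rate of temperature change.

```python
# Simple inlet-condition check against the ASHRAE Class A4 allowable range
# quoted above (5-45 degrees C, 8-90% relative humidity). The real guidance
# also bounds dew point and rate of temperature change, which this sketch omits.

A4_TEMP_RANGE_C = (5.0, 45.0)
A4_RH_RANGE_PCT = (8.0, 90.0)

def within_a4(inlet_temp_c: float, relative_humidity_pct: float) -> bool:
    """True if the measured inlet condition falls inside the A4 allowable envelope."""
    t_ok = A4_TEMP_RANGE_C[0] <= inlet_temp_c <= A4_TEMP_RANGE_C[1]
    rh_ok = A4_RH_RANGE_PCT[0] <= relative_humidity_pct <= A4_RH_RANGE_PCT[1]
    return t_ok and rh_ok

print(within_a4(38.0, 55.0))   # True  - warm, but inside the A4 envelope
print(within_a4(47.0, 55.0))   # False - too hot even for A4
```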

Still, the move to higher operating temperatures (and the corresponding reduction in cooling needs) is not a single event; it is a process that will play out over the next decade. Robert McFarlane, principal at Shen Milsom & Wilke, a consulting and design firm, noted that server vendors have started marketing servers that accommodate the higher ASHRAE classes, but operators need all of the enhanced equipment in place before higher data center temperatures can be allowed. Replacing hundreds (or even thousands) of servers with high-temperature models may take several technology refresh cycles, and professionals will need that time to get comfortable with elevated data center temperatures.

McFarlane said that if the enhanced ASHRAE classes are universally adopted by server makers, higher operating temperature capabilities may become a standard feature of next-generation servers—eventually allowing all future data centers to adopt elevated temperatures.

Incremental cooling improvements continue. Despite the absence of new cooling alternatives on the horizon, next-generation data centers can deploy improvements to established cooling systems and make the most of alternative cooling methodologies.

For example, McFarlane noted the importance of using variable frequency drives in cooling systems. A great deal of mechanical equipment operates motors at a fixed speed—the motors are on or off—but introducing variable-speed motors can save significant energy. For example, water-side economizers rely on pump motors to move water and air, but it may save energy to run those motors at lower speeds. In many cases, retrofitting existing pumps and motors with variable-speed units and controls can be cost-effective for renovation projects.
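The reason variable-speed drives pay off so handsomely is the fan affinity laws: shaft power falls roughly with the cube of speed. Here is a quick sketch with an assumed 15 kW fan motor.

```python
# Fan/pump affinity-law sketch behind the variable-speed argument above:
# power scales roughly with the cube of shaft speed, so running a motor at
# 80% speed needs only about half the power. Figures are illustrative.

def power_at_speed(rated_power_kw: float, speed_fraction: float) -> float:
    """Approximate shaft power at a reduced speed (affinity law, power ~ speed^3)."""
    return rated_power_kw * speed_fraction ** 3

rated_kw = 15.0  # hypothetical fixed-speed fan motor
for frac in (1.0, 0.9, 0.8, 0.6):
    print(f"{frac:.0%} speed -> {power_at_speed(rated_kw, frac):.1f} kW")
# 100% -> 15.0 kW, 90% -> 10.9 kW, 80% -> 7.7 kW, 60% -> 3.2 kW
```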

Proper airflow will be a critical attribute of future data centers. Stanley said that careful airflow evaluation, containment and leakage management will remain effective for all but the highest density systems. “You can cool up to 20 to 25 kW per rack with proper airflow,” he said, noting that most racks run at 5 to 10 kW. “That’s a much higher density than we see today.”
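A back-of-the-envelope way to see how far “proper airflow” stretches is to compute the airflow a given rack load requires for a chosen air temperature rise; the 12-degree rise below is an illustrative assumption.

```python
# Back-of-the-envelope airflow needed to carry a rack's heat load at a given
# air temperature rise: Q = P / (rho * cp * dT). Air properties are standard
# textbook values; the 12 degree C rise is an illustrative assumption.

AIR_DENSITY = 1.2        # kg/m^3
AIR_CP = 1005            # J/(kg*K)
CFM_PER_M3S = 2118.88    # cubic feet per minute in one m^3/s

def required_airflow_cfm(heat_load_kw: float, delta_t_c: float) -> float:
    """Airflow (CFM) needed to remove heat_load_kw with a delta_t_c air rise."""
    m3_per_s = heat_load_kw * 1000 / (AIR_DENSITY * AIR_CP * delta_t_c)
    return m3_per_s * CFM_PER_M3S

for kw in (5, 10, 20, 25):
    print(f"{kw:>2} kW rack: ~{required_airflow_cfm(kw, 12):,.0f} CFM")
# 5 kW ~730 CFM, 10 kW ~1,460 CFM, 20 kW ~2,930 CFM, 25 kW ~3,660 CFM
```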

Another incremental improvement is a closer coupling between the cooling medium and servers. For example, rather than using chilled water to cool the air introduced to a data center, the water can be circulated to server racks or even to the processor chips within servers. While these concepts are hardly innovative, experts agree that reducing the distance between the cooling source and cooling targets is an important strategy for future cooling efficiency.

Perhaps the most futuristic cooling scheme is full immersion cooling: submerging the entire server in a bath of chilled, nonconductive medium such as oil. In September 2012, Intel Corp. announced the successful conclusion of a yearlong test that submerged an entire server within a chilled coolant bath. With “direct contact” cooling, the coolant essentially becomes the heat sink, allowing even higher clock speeds for greater computing performance. (See the sidebar “Blub, blub, blub” for more.) “One major advantage here is ride-through,” said McFarlane. “With immersion, there is enough thermal mass to keep equipment cool until generators and backup systems come up.”

Operational sustainability is key. Next-generation data centers can significantly reduce power and cooling demands, and it is possible that traditional mechanical cooling may be all but removed. Still, there will always be a need to move heated air or pump water. Cooling won’t become passive.

But as new facilities emerge to maximize the use of free cooling such as air and water economizers, operators are pushing the limits of these technologies, so plan for growth, contingencies and problems in data center cooling. “Facility operators must be vigilant and rigorous,” said Vincent Renaud, managing principal at the Uptime Institute. “Operational sustainability is as important as efficiency.”

On the Horizon: Still Building Data Centers

Although proponents of the cloud may predict an eventual move to universal outsourcing, data center experts note that data center construction is alive and well. Outsourcing will become a common practice to some extent, but it presents issues for corporate management that wants to ensure privacy, data security and exclusive control of its business’ future.

Consider cloud resellers that are simply selling services through a cloud that they don’t control any more than you do. “How many cloud providers are built on Amazon Web Services?” Sclafani said. “It may not even be the provider’s own cloud.” Every business that embraces outsourcing must ensure custody of its data regardless of how it’s outsourced. This underlying issue is unlikely to change as governments ratchet up privacy and regulatory constraints on business into the future.

“A lot of people are building their own places for their own enterprise gear, applications and so forth,” Renaud said, noting that the traditional arguments for a build are holding steady—especially in highly regulated industries—and will probably drive future builds. “There seems to be a sense of pushback, but people are building.”

Renaud notes international resistance to outsourcing and colocation. “There’s a lot of data center builds going on in Russia now,” he said. “They are dead set against using colocation.”

Build a larger number of smaller facilities. The industry may find examples in massive data center builds like Google’s or Yahoo’s, but Sclafani notes that monolithic data centers may not be appropriate for most businesses. Instead, designs for the upcoming data center transformation are likely to be smaller and more distributed to maintain performance and move computing and storage closer to users. “For higher-performing applications with user interaction, you can’t have a data center five or six hops away,” he said.

The environment becomes more important. Site selection for a future data center will always include close examination of power costs, cooling requirements and local taxation, but environmental impact will become far more important to next-generation data center designs. Consider a data center that relies on free air cooling in concert with a traditional CRAC unit. Locating the new data center next to a busy freeway or a few miles from an industrial center may seem like a convenient choice, but the smog will quickly clog air filters and reduce cooling air flow (along with free air effectiveness). This will cause the CRAC unit to work much harder and escalate energy costs while affecting energy-efficiency metrics like PUE.

The growing use of carbon taxes will also encourage future data centers to more closely evaluate energy sources. The social and marketing “costs” of using electricity from fossil fuel sources may become much higher than the cost per kilowatt-hour.

A closer emphasis on environmental concerns will require more careful instrumenting and monitoring within the future data center. In the previous example, a reduction in inlet air flow rates without a change in fan speeds means that the air filters are clogged and require maintenance. “We have to get away from long, static replacement or maintenance schedules and tie facilities and IT closer together,” Sclafani said. “We have to make the buildings smarter.”
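A minimal sketch of that “smarter building” logic flags a probable filter clog when measured airflow lags what the current fan speed should deliver; the sensor values, baseline and threshold below are hypothetical.

```python
# Minimal sketch of the condition-based maintenance idea above: flag a likely
# clogged filter when measured airflow drops well below what the current fan
# speed should deliver. Sensor readings, the baseline and the 15% threshold
# are hypothetical values for illustration.

BASELINE_CFM_AT_FULL_SPEED = 12000  # airflow measured with clean filters
CLOG_THRESHOLD = 0.85               # alert below 85% of expected airflow

def filter_clogged(measured_cfm: float, fan_speed_fraction: float) -> bool:
    """True if airflow is well below the clean-filter expectation for this speed."""
    expected_cfm = BASELINE_CFM_AT_FULL_SPEED * fan_speed_fraction
    return measured_cfm < CLOG_THRESHOLD * expected_cfm

print(filter_clogged(9800, 0.9))   # False - within 85% of the ~10,800 CFM expected
print(filter_clogged(8300, 0.9))   # True  - airflow lagging the fan speed
```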

Ideas in Brief

Over the next decade, data center power and cooling technologies won’t undergo major transformation, but they will undergo incremental advancements. Next-generation data centers have an opportunity to improve energy efficiency through various approaches, including the following:

  • Lighter-load servers that are purpose-built rather than general-purpose;
  • More efficient power distribution and new mixtures of cooling technologies, such as free cooling, evaporative cooling and a chiller;
  • More appropriate data center site selection and moves toward building a greater number of smaller facilities;
  • A gradual move toward more elevated data center temperatures;
  • New cooling improvements, such as variable-speed motors; and
  • More realistic data center power usage targets.

Design future data centers for actual usage. For decades, IT has vastly overprovisioned new data center designs to power and cool rack densities that, in most cases, only operate at a small fraction of the anticipated load. “The average rack in the data center [is] at two kilowatts; that’s much lower than the 20 kilowatts per rack planned for,” Renaud said, noting that tremendous amounts of capital can be saved through more careful analysis and capacity planning. “Press IT to know what is actually needed.”
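A quick sketch of what that gap between planned and actual rack density costs; the rack count and the cost per kilowatt of built capacity are illustrative assumptions.

```python
# Quick sketch of the overprovisioning gap described above: capacity built for
# 20 kW per rack versus the ~2 kW per rack actually drawn. The rack count and
# the cost-per-kW of built capacity are illustrative assumptions.

RACKS = 200
PLANNED_KW_PER_RACK = 20
ACTUAL_KW_PER_RACK = 2
COST_PER_KW_BUILT = 12_000  # assumed fully burdened build cost per kW

planned_kw = RACKS * PLANNED_KW_PER_RACK
actual_kw = RACKS * ACTUAL_KW_PER_RACK
stranded_kw = planned_kw - actual_kw

print(f"Provisioned: {planned_kw:,} kW, drawn: {actual_kw:,} kW")
print(f"Stranded capacity: {stranded_kw:,} kW "
      f"(~${stranded_kw * COST_PER_KW_BUILT:,} of build cost)")
```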

Containerized data centers have grown a great deal, but experts say that this kind of computing does not offer the flexibility provided in a data center build. A container, however, may be an ideal choice to host a Hadoop cluster or supplement a facility facing a temporary computing crunch. In addition, companies that cannot afford to construct a complete facility up front (or don’t want to take the risk) can deploy a modular data center design in phases as needs evolve.

And finally, don’t allow IT or facilities to design your next data center. A business must find a developer with demonstrated expertise in data center design technologies—it’s a specialty and should be approached that way. McFarlane notes that too many new builds use traditional developers who don’t fully understand how to deploy the latest cooling technologies or streamline airflow patterns for next-generation cooling systems. Don’t make this mistake. “We need to educate the industry that there are solutions much better than what is still done in data centers,” said McFarlane.

Future data centers will not be a fanciful science fiction romp; there is simply too much time, money and business activity to risk on unproven or experimental technologies. But many well-understood and established technologies today will undoubtedly influence your next build, reducing power demands, reducing cooling requirements and offering a facility that best fits the mission of your business.

About the author:
Stephen Bigelow is senior technology editor in the Data Center and Virtualization group. He has written more than 15 feature books on computer troubleshooting, including Bigelow’s PC Hardware Desk Reference and Bigelow’s PC Hardware Annoyances. Find him on Twitter @Stephen_Bigelow.

This article originally appeared in the December/January issue of Modern Infrastructure.
