
Bright data center ideas that lacked staying power

The data center has seen its fair share of failed attempts to uproot traditional thinking. New ideas for power, cooling and maintenance top the list.

Data center technologies move at such a rapid pace that we often forget about the ideas that didn't quite measure up, leaving us vulnerable to repeating the same mistakes.

Errors signal an attempt at progress. Without mistakes, we wouldn't be trying hard enough to move forward. Still, we've seen "the latest, greatest" data center ideas touted in many areas such as power, cooling and server design, only to quietly fade away.

Fight the power

Power and cooling systems have had their share of failures. High-voltage power schemes were all the rage in the late 2000s. Instead of running three-phase AC power to the racks, the idea was to convert utility power outside the data center into high-voltage DC, typically 380-800 VDC.

The higher voltage limits cable losses, while the centralized conversion to DC power simplifies conversion to low DC voltages in each rack. There was enough value in this concept that IT businesses desperate for power savings gave the idea credence.
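To see why higher voltage limits cable losses: for a fixed load, loss scales with the square of the current, and current falls as voltage rises. A minimal sketch, with an assumed rack load and cable resistance, treating each feed as a simple two-conductor run for comparison (not a full three-phase calculation):

```python
# Rough sketch of why higher distribution voltage cuts cable losses.
# All numbers here are illustrative assumptions, not vendor specs.

def cable_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """I^2 * R loss for a simple two-conductor run at the given load."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

RACK_LOAD_W = 10_000   # assumed 10 kW rack
CABLE_R_OHM = 0.05     # assumed round-trip conductor resistance

loss_208v = cable_loss_watts(RACK_LOAD_W, 208, CABLE_R_OHM)
loss_380v = cable_loss_watts(RACK_LOAD_W, 380, CABLE_R_OHM)

print(f"208 V feed: {loss_208v:.0f} W lost")  # ~116 W
print(f"380 V feed: {loss_380v:.0f} W lost")  # ~35 W
```

The ratio of the two losses is (380/208)^2, about 3.3x, which is why the higher-voltage feed looked attractive on paper.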

One issue that bedeviled the high-voltage approach was that every vendor had its own scheme: Even today, there are too many competing options and no consensus.

During my tenure at Verari Systems (now Cirrascale), power ran at 800 VDC, converted in the rack. These were custom 35 kVA racks, so reducing waste was important. Power was converted to +/- 400 VDC on each shelf and distributed to a down-converter in each node.

Unfortunately, while the high-voltage scheme did reduce power losses, it ran into a perfect storm: rapidly improving power efficiency in IT equipment coupled with falling prices for standard AC-to-DC supplies. Together, these demolished the value proposition. It is also difficult to find efficient converter designs that deliver high-voltage power in the redundant-source mode most data centers require.
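The efficiency argument comes down to multiplying the efficiencies of each conversion stage. A toy sketch with assumed, illustrative stage values shows how a single high-efficiency commodity supply can erase the advantage of a two-stage HVDC chain:

```python
# Sketch: overall efficiency is the product of each conversion stage's
# efficiency. The stage values below are illustrative assumptions.

from math import prod

def chain_efficiency(stages) -> float:
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    return prod(stages)

# Assumed late-2000s HVDC chain: utility AC -> 380 VDC -> 12 VDC
hvdc_chain = [0.96, 0.95]
# Assumed modern commodity chain: utility AC -> 12 VDC in one supply
modern_ac_chain = [0.94]

print(f"HVDC chain: {chain_efficiency(hvdc_chain):.1%}")      # 91.2%
print(f"Modern PSU: {chain_efficiency(modern_ac_chain):.1%}")  # 94.0%
```

With these assumed numbers, the single modern supply beats the two-stage chain outright -- exactly the squeeze the high-voltage approach faced as commodity supplies improved.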

DC power is still in the running, though, as a data center idea for the future. Solar photovoltaic arrays and wind turbines generate renewable DC power. For now, the need to generate that power in close proximity to the data center restricts practical use, and the low cost of grid electricity in many locations, coupled with global warming skepticism, pushes these approaches further from mainstream adoption.

Inside the rack, power has gone through a few changing trends. Large power supply units tend to be more efficient than small ones. With servers standardizing on 12 VDC input, scale-out designs cluster around two or three efficient three-phase-to-12 V converters per rack, providing redundancy against a supply failure.

There is a persistent myth that power losses are too high to run large clusters this way, but I've run over a dozen full-sized servers from one set of power supplies without problems. This is a great way to handle clusters of identical server or storage nodes. We continue, however, to see racks of 1U servers, each with its own power supply. This is one data center concept that hasn't yet fallen by the wayside, but should.
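As a rough sketch of sizing such a shared pool -- assuming hypothetical 350 W nodes and 3 kW supplies, with one extra supply for redundancy:

```python
# Sketch: sizing a shared, redundant supply pool for a cluster of
# identical nodes. All figures are illustrative assumptions.

import math

def supplies_needed(nodes: int, node_watts: float,
                    psu_watts: float, redundant: int = 1) -> int:
    """Supplies needed to carry the cluster load, plus spares."""
    carrying = math.ceil(nodes * node_watts / psu_watts)
    return carrying + redundant

# A dozen assumed 350 W servers on shared 3 kW supplies, N+1:
print(supplies_needed(12, 350, 3000))  # 3 (two carry the 4.2 kW load, plus one spare)
```

Three shared supplies for twelve nodes, versus twelve or twenty-four individual units in a rack of 1U servers, is the efficiency case in miniature.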

Chill out

We've seen more cooling techniques than we'd like to admit: water pumped from icy lakes, data centers built in the frozen north and schemes to cool racks directly. Many data centers are kept cold, commonly around 20 degrees Celsius (68 degrees Fahrenheit), and require a lot of power to stay that way.


Most of these strategies do achieve targeted cooling, but new ideas are replacing the freeze-the-servers mentality. Commercial off-the-shelf servers don't need to be treated like mainframes. With a bit of care on the outflow side of the racks, plus particulate filtration, many servers will work at 40 degrees C (104 degrees F). With careful design, 45 degrees C (113 degrees F) is possible, provided the disk drives can tolerate it. Newer drives have about 5 degrees C more headroom on ambient air temperature, and solid-state drives can take more heat than hard disks.
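One way to frame this: the rack's inlet-temperature ceiling is set by its most heat-sensitive component. A minimal sketch, with assumed limits loosely based on the figures above (not datasheet values):

```python
# Sketch: picking a rack inlet-temperature ceiling from component
# limits. The limits below are assumptions for illustration, loosely
# based on the text, not real datasheet ratings.

COMPONENT_LIMITS_C = {
    "server": 45,      # achievable with careful design, per the text
    "older_hdd": 35,   # assumed
    "newer_hdd": 40,   # assumed: ~5 C more headroom than older drives
    "ssd": 45,         # assumed: tolerates more heat than hard disks
}

def max_inlet_temp_c(components) -> int:
    """The rack runs only as hot as its most heat-sensitive part."""
    return min(COMPONENT_LIMITS_C[c] for c in components)

print(max_inlet_temp_c(["server", "ssd"]))        # 45
print(max_inlet_temp_c(["server", "older_hdd"]))  # 35
```

Swapping hard disks for SSDs raises the ceiling, which is part of why warmer data centers have become practical.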

Traditional data center cooling systems are generally bound for the history books. Water-chilled racks and refrigeration modules on the inlet-air side of racks will become obsolete. In some cases, extra cooling efforts will still come into play, but only for military-specification and ruggedized deployments.

Get off life support

We used to maintain every unit slavishly; every failure was a big deal. Failed drives were pulled and replaced, and bad servers were swapped out quickly. The cloud approach kills all of this. With virtualization, a workload restarts on another server in the event of a failure, which allows repairs to be deferred until multiple problems need fixing -- or ignored completely, with the fleet refreshed after a few years.

The cost of replacing servers and storage, or refreshing the fleet more quickly, is offset by savings in maintenance technician labor and high-cost spare parts. While unit repair isn't dead, it's fading fast.

Pull it together

The idea of building server deployments into racks on site is quickly disappearing. Buying converged infrastructure -- pre-loaded racks that are wired and tested by the vendor -- is common and growing. The next step is the self-contained, or modular, data center, which needs only an inexpensive building to house it, or can operate in situ without one. The idea of the traditional data center is challenged by both the cloud and the modular container.

About the author:
Jim O'Reilly is a consultant focused on storage and cloud computing. He was vice president of engineering at Germane Systems, where he created ruggedized servers and storage for the U.S. submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian.

