The vast majority of electrical components in a data center run on low-voltage direct current power: chips, resistors, capacitors and so on. Yet we insist on bringing in high-voltage single- and three-phase alternating current, which has to pass through step-down transformers until the desired voltage is reached. Each transformation loses energy; even the most modern transformers are generally no better than 98% efficient.
Consider the power distribution path to your data center and then within it. First, the power generated at the power station must be stepped up to a high voltage suitable for transmission. Next comes the substation, where it is brought back down to a distribution voltage, with further step-down stages before the power reaches the facility itself.
Across a chain of four such transformations, around eight percent of the generated energy is lost purely through transformers, and that assumes each transformer is 98% efficient. In reality, it is far worse than this.
The power distribution within the data center tends to be single-phase alternating current (AC). Each item of IT equipment has a main transformer inside it with multivoltage outputs, and many more onboard transformers provide the voltages required by specific components. However, larger transformers tend to be more efficient than smaller ones, so these mini-transformers may be only 95% efficient. If that is the case and six steps of transformation are involved, more than a quarter of the generated energy is lost in transformation.
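Because each stage multiplies the power that survives the previous one, the losses compound. A minimal sketch of the arithmetic behind the figures above (the 98% and 95% efficiencies and the four- and six-stage counts are the ones discussed in the text; `remaining_power` is an illustrative helper, not from any standard library):

```python
def remaining_power(efficiencies):
    """Fraction of generated power that survives a chain of transformer stages."""
    result = 1.0
    for eff in efficiencies:
        result *= eff  # each stage passes on only its efficiency fraction
    return result

# Four 98%-efficient stages from power station to facility:
grid = remaining_power([0.98] * 4)
print(f"After four 98% stages: {grid:.1%} remains")

# Six 95%-efficient stages inside smaller in-equipment transformers:
equipment = remaining_power([0.95] * 6)
print(f"After six 95% stages: {equipment:.1%} remains")
```

Four 98% stages leave about 92% of the power (roughly the 8% loss cited above); six 95% stages leave about 73.5%, i.e. more than a quarter lost.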
Therefore, it is surely good practice to use direct current (DC) power across as many areas as possible, avoiding the need for additional transformers in data center equipment. But a number of issues need to be taken into account to see whether this is really possible.
Hurdles to clear before using DC power
First, at the power transmission level, low-voltage DC is highly inefficient. By the laws of physics, the power (in watts) delivered down a line is voltage (in volts) times current (in amps). High transmission voltages, in the hundreds of kilovolts (kV), are used to lower the current; this matters because resistive losses rise with the square of the current. Either direct or alternating current can be used at high voltage for transmission, but AC has been the main choice to date. Trying to eliminate the transformation losses at the transmission stage is unlikely to be workable. It is much the same at the distribution level: changing the existing infrastructure from AC to DC would not be easy, and high-voltage DC would still be needed, so transformation would still be required.
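The voltage/current trade-off can be made concrete. A sketch of the I²R loss calculation, where the line figures (100 MW delivered, 10 Ω of line resistance) are illustrative assumptions rather than values from the article:

```python
def line_loss_watts(power_w, voltage_v, resistance_ohm):
    """Resistive (I^2 * R) loss on a line delivering power_w at voltage_v."""
    current = power_w / voltage_v   # I = P / V
    return current ** 2 * resistance_ohm

# Assumed example: deliver 100 MW over a line with 10 ohms of resistance.
for kv in (50, 100, 400):
    loss = line_loss_watts(100e6, kv * 1000, 10.0)
    print(f"{kv:>3} kV: {loss / 1e6:.2f} MW lost ({loss / 100e6:.2%})")
```

Quadrupling the voltage from 100 kV to 400 kV cuts the current to a quarter and the resistive loss to a sixteenth, which is why transmission runs at hundreds of kilovolts regardless of whether AC or DC is used.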
Those who advocate for direct current data centers point to the telecommunications industry, whose facilities have run using DC for a long time. There are reasons behind this, however. A full industry was built up around the provision of DC equipment, and there was less need for multiple DC voltages within an old telecoms facility. Therefore, power distribution within such a facility could be carried out using transformers built into the fabric of the facility, and then large copper busbars would enable distribution of high-current DC power. Also, in the 1980s and 1990s, although variations in oil prices caused some uncertainty in energy pricing, the main generation capability was still through relatively cheap and available coal. Energy efficiency was not the focus it is today.
The main issue for a direct-current powered data center is that the equipment is still not widely available. Although the components themselves run on DC, vendors have to focus on the mass market and build equipment that takes AC as its main input. Some vendors offer DC versions of their equipment, but at a premium price.
With the additional costs of equipping a facility with rectifiers, DC power distribution and management systems, using servers, storage and network systems that perform the same as their AC variants but at a higher price just does not make sense. Better to follow the crowd and use the lower-cost, standardized AC-based systems.
Making a DC data center possible
There are two big hopes for the DC data center, however. One is modularization. As systems such as Cisco's UCS, Dell's vStart, IBM's PureSystems and others come out, these preconfigured "blocks" can be wired internally in any way the vendor wishes. It makes sense for the vendor to cut out unneeded components, and multiple transformation stages can be removed during the design and build phase.
Just as cooling is moving from the facility to the module with in-rack cooling systems, power management is likely to move from the facility to the module as well, with in-facility power distribution based on running single-phase AC cabling to where it is needed.
The second hope is the growing prevalence of cloud computing. For a cloud provider with tens to hundreds of thousands of servers, massive scale-out storage infrastructure and a complex network structure, buying in to DC from the start could create a viable payback period.
A strategic decision to move to a direct current infrastructure is likely to prove expensive and to lock the organization into particular types of hardware. However, vendors will drive better energy efficiency by optimizing DC use within their own systems.
ABOUT THE AUTHOR: Clive Longbottom is the cofounder and service director at Quocirca and has been an ICT industry analyst for more than 15 years. Trained as a chemical engineer, he worked on anti-cancer drugs, car catalysts and fuel cells before moving to IT. He has worked on many office automation projects, as well as Control of Substances Hazardous to Health, document management and knowledge management projects.
This was first published in September 2012