Large-scale data center deployments, facilities housing thousands of systems in one installation, have very different needs from a small business with a couple of servers. One method of meeting those needs, and one that is gaining traction, is the adoption of direct current (DC) power distribution.
According to Robert E. McFarlane, president of the Interport Financial Division of New York-based Shen, Milsom & Wilke Inc., the technology ultimately runs on DC anyway. The thinking is that it makes sense to deliver DC in the first place, rather than going through the inefficiency of putting a power supply in every device to convert alternating current (AC) to DC.
Power is an issue because processors are getting faster, which makes them run hotter and draw more power. A power budget of 10,000 watts for a cabinet might have been accurate four years ago, but now a dense cabinet can run 20,000 to 30,000 watts, according to Geoffrey Noer, senior director of product marketing at Milpitas, Calif.-based Rackable Systems.
With a finite power budget per square foot, the result is a lot of half-populated cabinets.
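The squeeze behind those half-populated cabinets comes down to simple arithmetic. A minimal sketch, in which the watts per square foot, cabinet footprint and cabinet demand are illustrative assumptions rather than figures from the article:

```python
# Sketch of the floor-space vs. power trade-off. All figures below
# are illustrative assumptions, not numbers reported in the article.

FACILITY_WATTS_PER_SQFT = 150   # assumed raised-floor power budget
CABINET_FOOTPRINT_SQFT = 30     # assumed cabinet plus its share of aisle
CABINET_DEMAND_WATTS = 25_000   # a dense modern cabinet (per Noer's range)

# Power the facility can actually deliver to one cabinet's footprint.
budget_per_cabinet = FACILITY_WATTS_PER_SQFT * CABINET_FOOTPRINT_SQFT

# Fraction of the cabinet that can be populated before the power
# budget, not the floor space, becomes the limit.
fill_fraction = min(1.0, budget_per_cabinet / CABINET_DEMAND_WATTS)

print(f"Power available per cabinet: {budget_per_cabinet} W")
print(f"Cabinet can only be {fill_fraction:.0%} populated")
```

Under these assumed numbers, only a small fraction of each cabinet can be filled, which is exactly the half-populated-cabinet effect the article describes.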
What is it?
McFarlane explained that by using DC distribution, there is only one conversion from the AC building main power to DC via rectifiers of one kind or another, which saves energy. The DC is fed directly to the servers and switches, and also keeps the batteries charged. This has been the approach in the telco industry for decades on high-end PBX equipment.
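McFarlane's single-conversion argument is ultimately about multiplying stage efficiencies: every extra conversion compounds the loss. A minimal sketch, where the per-stage efficiency figures are assumptions chosen for illustration, not measured values:

```python
# Illustrative comparison of cascaded power-conversion losses.
# Stage efficiencies below are assumptions, not vendor data.

def chain_efficiency(stages):
    """Overall efficiency of a series of power-conversion stages."""
    eff = 1.0
    for stage_eff in stages:
        eff *= stage_eff  # each conversion compounds the loss
    return eff

# Conventional AC path: double-conversion UPS (AC->DC, then DC->AC),
# followed by a per-server power supply converting AC->DC again.
ac_path = chain_efficiency([0.94, 0.94, 0.80])

# DC distribution: one bulk rectification (AC->DC) feeding the racks.
dc_path = chain_efficiency([0.92])

print(f"AC path overall efficiency: {ac_path:.1%}")
print(f"DC path overall efficiency: {dc_path:.1%}")
```

Even with generous assumed efficiencies, the three-stage AC path lands well below the single rectification step, which is the energy argument for delivering DC directly.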
At the data center level, 480-volt AC lines come in from the power plant. Those lines go through the power distribution unit (PDU), which can be backed by either a flywheel or a battery. From there, different architectures are possible.
In one architecture, a large PDU converts the power coming into the facility from AC to DC, with perhaps 10 cabinets connected to that PDU. According to Noer, that's the most efficient way of doing it.
For a data center that's already built out for AC, the method is to put rectifiers at the top of each rack, or to house rectifiers for multiple cabinets in one enclosure -- which may even be in a different room to contain the heat -- and then distribute the power as a PDU would.
"Putting common power supplies into the rack is a way to build one robust power supply and distribute DC through the rack or frame to the various devices. It not only makes sense from an energy standpoint (every conversion carries with it some percentage of inefficiency) -- it also significantly reduces space for each device," McFarlane said.
Who uses it?
This technology is primarily of interest to large-scale data centers. For example, Data393, an IT infrastructure and hosting provider based in Englewood, Colo., uses Rackable's DC infrastructure in its data center.
Data393 had been running standard white-box Intel x86 servers. According to Steve Merkel, senior systems engineer at Data393, the biggest issue the facility faced was density. The data center has a finite amount of floor space -- 13,000 square feet -- and the company is trying to make that last.
"Before with switching gear and other devices, we could put 24 servers in a rack at the optimal level," Merkel said. "DC power allows us to get more capacity for our cooling dollar -- 60 to 80 servers per rack."
Data393 does the rectification from AC to DC in a separate room, and many of the migration issues for the system revolved around the DC power plant. According to Merkel, it wasn't necessarily a big change, because the company made use of existing DC infrastructure.
Merkel said building out a DC power plant is a major investment, and that many people shy away from it.
Who makes it?
Major vendors, such as Sun Microsystems Inc., have made forays into this area, but currently Rackable is one of the main advocates of DC.
"Rackable has a lot of very smart people who know the thermal properties of their hardware," Merkel said. "They're real geeky. Vendors in the past haven't had the level of detail they offer."
McFarlane also said some manufacturers are using the common power supply approach for racks housing 1U servers and blades.
What will it do for you?
According to Noer, data centers using DC power can save 20% over the same system with an AC power supply while systems are in use. When systems are idle, equipment on an AC power supply still draws roughly 50% of its full load, and at that light load AC efficiency nose-dives. This is not the case with DC power, which can actually gain efficiency when equipment is idle.
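The idle-load penalty Noer describes can be sketched with a toy efficiency curve. The curves and wattages below are illustrative assumptions, not Rackable's measured figures:

```python
# Why AC supply efficiency "nose-dives" at idle: a minimal sketch.
# Efficiency curves and wattages are illustrative assumptions only.

def ac_psu_efficiency(load_fraction):
    # A typical server AC supply is tuned for high load and falls
    # off sharply when lightly loaded (assumed numbers).
    return 0.80 if load_fraction >= 0.8 else 0.60

def bulk_rectifier_efficiency(load_fraction):
    # A shared bulk rectifier is assumed to hold its efficiency
    # across the load range.
    return 0.92

for label, dc_watts, load in [("busy", 300, 1.0), ("idle", 150, 0.5)]:
    wall_ac = dc_watts / ac_psu_efficiency(load)          # AC path draw
    wall_dc = dc_watts / bulk_rectifier_efficiency(load)  # DC path draw
    saving = 1 - wall_dc / wall_ac
    print(f"{label}: AC {wall_ac:.0f} W, DC {wall_dc:.0f} W, "
          f"DC saves {saving:.0%}")
```

Under these assumptions the DC path's advantage widens at idle, because the flat rectifier curve avoids the AC supply's light-load drop-off -- the same direction of effect the article reports.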
Besides power savings, another substantial factor is availability. According to Rackable, the DC conversion card that replaces an AC power supply is 70 times more reliable than the supply it replaces.
"You're basically taking one of the least reliable components of the server and removing it from the equation with a more reliable part," Noer said.
Another advantage is that removing the power supply from the server eliminates a lot of heat in the chassis.
Though the benefits seem significant, McFarlane offers caveats.
"The concept is fine, but one would need to take a close look at the economics. [Rackable] claims that it's more cost-effective than conventional UPS and AC distribution," McFarlane said. "We can't comment on the validity of the claimed cost savings. As an opinion, however, we would have to say that in a large enough installation, the distributed DC would certainly seem to make sense, compared with a central DC plant, but the economics should be examined very carefully."
The last question McFarlane raises is whether the overall savings justify going to DC servers and network switches.
"We have one client in particular that prefers them for reliability, but they have staff well trained in DC installations," McFarlane said. "You can't just plug them in. Every device needs the right wire size, which has to be calculated, and devices are actually wired in, not just plugged in. IT managers are having a hard enough time learning what they need to manage power with AC equipment. It takes a very sophisticated operation to work effectively with DC."
Let us know what you think about the story; e-mail: Matt Stansberry, News Editor