Facebook occupies a world very different from the average enterprise IT shop, but the social network’s energy-efficient Open Compute Project hardware designs could benefit all types of companies.
Energy-conscious IT shops are slowly transitioning toward free air cooling and making incremental changes to equipment to reduce power consumption, but they still struggle to reach the energy-efficiency levels they want.
“Power is certainly a challenge for us,” said Charlie Gautreaux, a senior engineer for a large financial services company. “Each year we have projects to remove as much [equipment] as we can out of the data center.”
The company is already moving to 2.5-inch disk drive storage arrays for their power efficiency and has chosen power supplies as well as processors and memory based on their power profile, Gautreaux said.
“The whole notion of removing unnecessary components, whether inside a server, or the server itself, is always something we’re interested in,” he said.
Open Compute: Only for the 1% of IT?
The Open Compute Project (OCP), founded by Facebook after it built its Prineville, Ore., data center in 2010, open-sources the unconventional server hardware, rack and power supply designs used to achieve a power usage effectiveness (PUE) ratio of 1.06 at the facility. According to the Uptime Institute, the average PUE of a data center is 1.8.
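PUE is simply total facility power divided by the power delivered to IT equipment, so the gap between 1.06 and 1.8 translates directly into overhead watts spent on cooling and power conversion. A minimal sketch of the arithmetic (the kilowatt figures are illustrative, not from the article):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# For a hypothetical 1,000 kW IT load:
# at the industry-average PUE of 1.8, the facility draws 1,800 kW in total;
# at Facebook's reported 1.06, it draws only 1,060 kW.
it_load_kw = 1000.0
average_overhead = it_load_kw * 1.8 - it_load_kw     # 800 kW of overhead
prineville_overhead = it_load_kw * 1.06 - it_load_kw  # 60 kW of overhead

print(pue(1800.0, it_load_kw))  # 1.8
print(pue(1060.0, it_load_kw))  # 1.06
```

At the same IT load, the average facility spends more than thirteen times as much power on non-compute overhead as the Prineville design does.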
The servers are built to a 1.5U specification designed to slot into a rack Facebook has named Open Rack. Open Compute designs available include a storage server, management software, motherboard, server chassis and power supplies. The specifications are on GitHub.
The buzz around the Open Compute Project is growing, with companies including Hewlett-Packard, Advanced Micro Devices Inc., Fidelity Investments, Salesforce.com, VMware Inc. and Supermicro Computer Inc. pledging to support the project’s designs over the last six months.
But even if enterprise IT shops could rip and replace their server racks with the equipment Facebook uses, they still wouldn’t get the energy efficiencies that the social media giant has realized, critics point out.
That’s because Facebook had the means to build a greenfield data center in Prineville, as well as complete control over all aspects of the building’s design, from using 100% free air cooling to limiting the number of power transformations between power circuit and server inside.
“Open Compute is the greatest thing that the vast majority of people in our industry will never be able to take advantage of,” said Jeffrey Papen, founder of Peak Web Hosting Inc., based in Rancho Cucamonga, Calif. “This is for the 1% of IT.”
One expert compares Open Compute to NASA in the early 1960s.
“They were spending a lot of money on things specifically for three guys that would be up in space for a few days,” Mark Thiele, executive vice president of data center technologies at Switch Las Vegas, said. “None of that stuff was immediately and obviously of value to 99.99% of the rest of the population, but for years after the space program became real … things like Tang and Velcro and new metal materials began coming out of the work they were doing.”
A similar trickle-down effect will probably happen between Open Compute and enterprise IT, Thiele said.
Ultimately, Open Compute is just an experiment at this stage, according to Gautreaux. He said it will probably take a year before his team can explore the Open Compute technology further, and it will have to be a “bottom up” effort in his shop since there’s no sales force associated with the project to appeal to senior management.
“Would we just replace HP? Maybe, maybe not. Most likely not,” he said. But it might get HP to lower prices or lead to a new architecture for the company’s x86 servers.
“I think that’s one of the reasons we’d do this kind of crazy project, for those ancillary benefits,” said Gautreaux.
If he were to do an open hardware project, he’d want to “go all in” rather than take a gradual step with a motherboard, Gautreaux said.
AMD Roadrunner to act as a middle ground
AMD wants to expand the reach of OCP with its stripped-down motherboard, codenamed Roadrunner, which fits into both Open Compute servers and conventional 1U, 2U and 3U server form factors.
Also, the Roadrunner design looks to conserve energy by reducing the number of components, leaving only what’s necessary for three specific use cases: high performance computing (HPC), enterprise virtualization and storage.
An HPC motherboard, for example, would have fewer DIMM slots than a virtualization motherboard, allowing higher-speed DDR memory to be placed closer to the CPU.
Both would have fewer PCIe slots than the storage motherboard. All the designs would also place the CPU, heat sinks and other parts so they could be cooled easily from front to back.
AMD arrived at the Roadrunner designs by conferring with a dozen Wall Street firms, including Fidelity, the company said.
Intel Corp. is preparing a similar design, codenamed Decathlete.
Intel officials declined to comment.
Beth Pariseau is a senior news writer for SearchServerVirtualization.com and SearchDataCenter.com. Write to her at email@example.com.