Facebook's Open Compute Project is emerging as a force to be reckoned with, as several data center industry giants pledge their support and develop new products around the spec.
At the fourth Open Compute Summit held in Santa Clara, Calif., this week, Intel Corp. and Advanced Micro Devices Inc. (AMD) both delivered new products based on the Open Compute Project (OCP) -- Facebook's server and data center design specs that were open-sourced 18 months ago.
At the behest of the financial services firms Fidelity and Goldman Sachs, competitors AMD and Intel each developed a modular motherboard that fits into the OCP Open Rack infrastructure as well as standard-issue rack environments.
The idea for the modular motherboards came about following the initial Open Compute Summit in October 2011. There, AMD and Intel engineers met with financial services executives who expressed interest in Open Compute hardware -- but only if it worked in their existing data centers. "They told us that they were constrained by their existing infrastructure, and that they wanted to see Open Compute hardware for their existing rack infrastructure because they didn't have the luxury of retrofitting their data centers," recalled Bob Ogrey, an AMD fellow and cloud technical evangelist who specializes in server platform architecture.
The resulting projects, AMD's Roadrunner and Intel's Decathlete, are both available to early adopters and will be in production this quarter.
Further, the AMD motherboard can be configured as a cloud, high-performance computing or storage server, depending on server, networking and management options. That's important to financial services organizations that are trying to lower the cost of compute by building large internal private clouds, in addition to existing grid computing clusters. "They want to be able to build cloud and grid computing on the same platform," Ogrey said.
Open Compute in the wild
Indeed, as of this summit, Open Compute designs are no longer limited to Facebook's own data centers. Gaming provider Riot Games said it would purchase OCP servers built by original design manufacturer Hyve Solutions, while cloud computing provider Rackspace said it is tweaking OCP designs for use in its environment.
For example, Rackspace took the Open Compute Open Rack specification originally designed by Facebook, and modified it for its data center. The resulting design features higher peak power, forgoes DC power, supports conventional switches and has a cable management bay.
Working with such suppliers as Quanta, Delta and Wiwynn to produce modified OCP designs has cut costs and sped time to market, said Mark Roenigk, Rackspace chief operating officer, during a keynote. Capex savings are approaching 40% over existing designs, and operational savings are projected to be 50% to 52%, he said. Those savings are significant, "but the speed at which we've been able to implement is every bit as important," he said.
Ecosystem, projects swell Open Compute Project
The scope of the OCP also has increased. To date, the project has overseen work on servers, rack design (Open Rack), storage (Open Vault), power and hardware management. Recently, the group added a Compliance and Interoperability group for testing and certifying OCP solutions.
At the summit, Intel also committed to contributing designs for its forthcoming silicon photonics technology, which will enable 100 Gbps interconnects both within and between racks.
But true to OCP's roots, it's still Facebook that is leading the charge with new contributions: modifications to Open Rack and Open Vault for use in cold-storage environments, a new all-flash database server ("Dragonstone") and the latest version of its "Winterfell" Web server. Facebook also contributed its "Group Hug" board, a common slot architecture specification for motherboards that can accommodate as many as 10 systems-on-chip (SoCs) from a variety of vendors.
The lack of a common socket has long been a galling problem for hardware designers, said Frank Frankovsky, director for hardware design and supply chain at Facebook. "It's always driven me crazy that we couldn't design to a common socket," he said.