Why would I consider Open Compute server platforms in my enterprise? Isn't this just for the biggest data centers?
Every data center operator understands the value of standardization. Settling on a single hardware platform, software platform, or even a single business policy makes day-to-day operations far easier to run. Unfortunately, many data centers struggle to integrate and maintain platforms that are constantly evolving and introducing proprietary, vendor-driven features that may not work with other systems in the environment. The alternative is vendor lock-in, which does not serve business interests: it wastes money and time and makes the environment costlier to service.
Several years ago, Facebook launched the Open Compute Project (OCP), a series of technology initiatives intended to create systems with better power efficiency and reliability that are also easier to use and maintain. The goal was to deliver computing power without the proprietary bells and whistles that hamper integration and limit product choices.
This is generally accomplished through the open release of project documentation, allowing any system manufacturer to build systems that adhere to Open Compute's functional and mechanical standards. For example, Open Compute has published specifications for custom, high-availability motherboards using AMD, Intel, and ARM processors.
Related Q&A from Stephen J. Bigelow, WinIT