Imitation is the sincerest form of flattery, and the idea of copying the infrastructure used by giant cloud providers is catching on. The largest cloud providers rely on next-generation hardware platforms to offer high performance at a competitive price. Increasingly, these cloud providers are opening up and partnering with mainstream hardware vendors to drive the industry forward.
This trend promised to make this year's Open Compute Summit very lively, and we weren't disappointed. From Open Compute servers designed to be cool and efficient to blindingly fast LANs, there was something for everyone.
Google ports software
Google challenged Intel's quasi-monopoly in the commercial off-the-shelf (COTS) server CPU space with a server based on IBM's Power processor. Google has ported much of its software to Power, opening up some new configurations for its platform choices, while offering a viable competitor to Intel, which should keep chip pricing under control.
IBM will soon roll out its next-generation Power9 CPU, with 24 cores built on a 14nm FinFET process. The chip, due in 2017, will have onboard accelerators for encryption and compression, making it a serious competitor to anything from Intel in that timeframe. Power9 is the CPU of choice for two major US supercomputer projects, one of which -- Summit -- is aiming for roughly eight times the performance of the current worldwide leader, China's Tianhe-2.
The Power chip has high-speed I/O, including NVLink 2.0, an important feature for supercomputing. Summit gets most of its 300 petaflops from GPUs. But don't expect China to stand still in the supercomputing race. Now that IBM has opened up the Power architecture for licensing through the OpenPOWER Foundation, there's talk of Power9 versions with 7nm features being built in China.
Google and Rackspace are developing Open Compute servers for the Open Compute Project that use Power9 CPUs and a 48V rack architecture. This means Intel won't have free rein on pricing CPU horsepower, a concern raised by the mounting ascendancy of Intel-based COTS servers. It will also create the competitive pressure needed to stimulate innovation, which threatened to slow as silicon physics approaches its limits and Intel's market dominance breeds complacency.
The move from AC
Google's release of six server blueprints the company uses internally -- all based on the 48V architecture -- is a move to drive the mainstream industry forward. This is Google's first foray into releasing open designs, and it tackles a vexing issue the server industry faces. Mainstream power delivery to servers is alternating current (AC) line power at 120 or 240 VAC, which means a bulky and inefficient power supply is needed in each server.
Cloud service providers have shied away from the AC approach because of physical space and efficiency concerns. Large rack-level supplies that can directly use three-phase 400+ VAC power are as much as 10% more efficient and easier to make redundant. This approach to power delivery leads to more compact, cooler-running Open Compute servers. There are further size savings if solid-state power components, such as Vicor's new 48V Direct-to-PoL modules, are used, making large, inefficient per-server supplies unnecessary.
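To get a feel for why shaving conversion losses matters at rack scale, here is a minimal sketch of the arithmetic. The efficiency figures and the 300 W server load are illustrative assumptions, not measured values from Google's or Vicor's designs; the point is that multiplying stage efficiencies along the conversion chain is what determines total wall draw.

```python
# Illustrative comparison of wall-power draw for a traditional per-server
# AC conversion chain vs. a 48V rack distribution chain.
# All efficiency numbers below are rough assumptions for illustration.

SERVER_LOAD_W = 300.0        # assumed IT load per server, in watts

# Traditional chain: AC line -> per-server PSU -> voltage regulators
ac_chain = [0.90, 0.85]      # assumed PSU and regulator efficiencies

# 48V chain: rack-level rectifier -> 48V bus -> direct-to-PoL converter
dc48_chain = [0.96, 0.93]    # assumed rectifier and PoL efficiencies

def wall_power(load_w, efficiencies):
    """Wall power needed to deliver load_w through each stage's losses."""
    p = load_w
    for eff in efficiencies:
        p /= eff             # each stage wastes a fraction (1 - eff)
    return p

ac_draw = wall_power(SERVER_LOAD_W, ac_chain)
dc_draw = wall_power(SERVER_LOAD_W, dc48_chain)

print(f"AC chain draw:      {ac_draw:.1f} W")
print(f"48V chain draw:     {dc_draw:.1f} W")
print(f"Savings per server: {ac_draw - dc_draw:.1f} W")
```

Even with these modest assumed numbers, the 48V chain saves tens of watts per server; across tens of thousands of servers, that difference compounds into megawatts of capacity and cooling.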
Intel is also taking the Open Compute Summit seriously. Its acquisition of Altera's field-programmable gate array (FPGA) technology is bearing fruit, with Xeon-D chips that have built-in FPGAs. The FPGAs will be open to third-party programming, the idea being to provide hardware accelerators for various workloads, including encryption and compression.
Intel has released a set of reference designs for systems using large numbers of non-volatile memory express (NVMe) drives, clearly aiming at storage and hyper-converged systems. These would seem to fit a more pressing need than the Power9 architectures, since NVMe's raw performance outstrips the ability of current system designs to handle more than four drives.
For bread-and-butter users, Intel's Broadwell Xeon-D line has gained a 65 W, 16-core member that is ideal for single- or dual-CPU half-wide 1U servers, the mainstay of general-purpose clouds and clusters. These chips are intended to stave off the threat from ARM processors.
On the networking front, Mellanox is feeling pressure from Intel in the form of Peripheral Component Interconnect Express (PCIe) fabrics and Omni-Path. PCIe via a multi-port switch is getting traction as a way to connect lots of drives to a system. It could then evolve to interlink clustered systems, which enters InfiniBand territory.
Still, for the next couple of years, PCIe is locked into being a short-haul connection scheme. Omni-Path is different. It is a very low-latency 100 Gbps link system that looks much like InfiniBand. The twist is that Intel plans to deliver its high-end Knights Landing processor with Omni-Path onboard the CPU, clearly aiming at the upper end of the market.
Mellanox responded at the Summit by unveiling a doubling of InfiniBand and Ethernet line speed to 200 Gbps. It's a few months behind Knights Landing, but at twice the speed of Omni-Path, with a two-year window before Intel catches up, it's a very competitive play and should keep the loyalty of users, such as financial traders, who demand the ultimate in performance.
Microsoft opens up SONiC
Microsoft is also entering the open fray, with an open source Linux-based network operating system named SONiC (Software for Open Networking in the Cloud) aimed at running low-cost whitebox switches. SONiC is based on the Azure Cloud Switch architecture -- another example of a major cloud provider open sourcing its toolkit. Cavium demonstrated SONiC-compatible switch silicon, its XPliant family, targeting 3.2 Tbps of switching across links of up to 100 Gbps. This design will underpin software-defined networking, resulting in much cheaper high-performance switchgear, which is critical to cloud economics.
Storage also took its share of accolades. Seagate said it now has the fastest drive ever: the Nytro WarpDrive, a 10 GB/s PCIe flash card that is Open Compute Project-compatible.
Giving Intel some competition on performance, Diablo and Inspur demonstrated NVDIMM-F memory systems that use the super-fast memory bus as an interface. These may be a precursor to Intel showing off 3D XPoint-based non-volatile dual in-line memory modules (NVDIMMs) at the next Summit. With COTS standards so tightly defined, Intel will have no trouble adding its new memory design to Open Compute servers.
All in all, Open Compute servers and other technologies are making good progress. The reference designs, available from multiple vendors, will increasingly become the norm for server buys, even for midrange users. The low prices and high quality that manufacturers offer, coupled with the interchangeability of COTS, will lower the cost of computing considerably, profoundly impacting vendor dynamics in the IT industry.
Jim O'Reilly is currently a consultant, focusing on storage, infrastructure and software issues.