Senior Technology Editor
Published: 04 Nov 2013
At its core, the RISC processor offers energy efficiency and compute power that may tempt server vendors. But certain limitations may prevent RISC from ousting x86 from the data center throne.
Improving processor design
Part 2: Reintroducing the RISC processor for data center servers
Reduced instruction set computing (RISC) is an old computing idea that appears in nearly every dedicated computing appliance -- printers, routers, industrial controllers -- any device that performs a limited job and doesn’t need a wealth of diverse instructions. But the relentless drive toward data center energy and computing efficiency has underscored the realization that an enterprise runs many mundane applications, such as DNS servers, Web servers, file servers, security gateways and even cloud computing tasks. These simple, yet very important, business tasks may be better left to RISC-based servers, which use less energy than x86 processors and execute suitable programs efficiently.
The concept behind RISC processors is to strip out unnecessary or superfluous instructions in order to simplify the processor, lowering its energy needs and improving performance. While RISC processors are typically associated with scale-up Unix systems, they are also found in responsive, energy-sipping mobile Android devices. Those same traits have spawned significant interest in RISC for data center systems running relatively simple applications, such as Web servers or storage servers, that exercise only a limited set of instructions.
There are countless RISC processor designs, but the most recognized and widely deployed RISC processors are based on designs licensed from Advanced RISC Machines (ARM). ARM’s version 8 architecture (ARMv8) added 64-bit support, cryptographic instructions, improved memory and cache management, and better support for single-instruction, multiple-data (SIMD, or “multimedia”) tasks, among other features. ARMv8 is the basis for ARM’s Cortex-A50 series of processor reference designs. Major processor vendors are taking note, including AMD, which is developing its upcoming “Seattle” processor around ARM’s Cortex-A57 core. ARM-based systems typically run software developed for Linux.
Other examples of RISC in data center systems include the TILE-Gx processor family from Tilera. The TILE-Gx family is optimized for multimedia and networking tasks, and its processors range from nine to 72 interconnected cores on a single system-on-a-chip (SoC) package. Tilera supports open source software development in Linux and other standard environments. Another player is Calxeda, with its EnergyCore ECX-1000 family of ARM-based SoC processors intended for highly scalable data center applications.
Continued CISC development
While purpose-built RISC processors can fill vital niches in the data center, the need for established complex instruction set computing (CISC) processors, such as x86, will also grow as virtualization and consolidation allow servers to support a greater number of diverse computing workloads.
The issue with conventional processors like x86 is that many of the instructions the CPU handles are used infrequently or only by specific applications (like SSE3 instructions for 3-D tasks or Intel VT instructions for virtualization). Yet every instruction adds thousands (perhaps tens of thousands) of transistors to the processor and adds latency to the instruction pathway -- the processor uses more energy and takes longer to process every instruction. Thus, using a traditional CISC processor for simple programs that exercise only a small fraction of the instruction set wastes much of the x86 processor’s potential.
Currently, one of the easiest ways to add computing power is to add cores to the processor’s package, and today’s multicore designs will include even more cores in the next few years. Continued improvements in chip fabrication will facilitate this. An example is Intel’s move to 22 nanometer fabrication, which allows for smaller transistors that demand less power and can be stacked (rather than fabricated only side-by-side) for smaller die sizes. Each die is smaller and runs cooler, so more dies can be integrated into a processor package, allowing a virtualized server to run more workloads and achieve better system utilization.
But there is also a host of new capabilities coming to server processors, and the emphasis seems squarely focused on improved security, reliability and memory support. For example, Intel’s Xeon E5 family will offer improved hardware support for data center management tools, along with an OS Guard feature designed to prevent malicious code from running outside of a workload’s memory space. Meanwhile, Intel’s newest Xeon E7 family promises support for up to 12 TB of server memory to run the most demanding computing or transactional workloads. The new E7 family will also provide reliability and self-healing capabilities that allow the server to recover from CPU or memory errors that would have crashed previous systems. AMD is also rolling out new Opteron variants into 2014, with “Berlin” and “Warsaw” processors that support advances in memory architecture, such as dual 64-bit DDR3 with ECC, along with advances in system I/O like PCIe 3.0 and USB 3.0.
And don’t overlook the role of industry initiatives in mainstream processor development. Although the IBM POWER processor has been losing market share, IBM joined with Google, Mellanox Technologies and Tyan in August 2013 to form the OpenPOWER Consortium. The collaboration is a bid to spur new development of the POWER processor and related systems for server, networking, storage and graphics acceleration. Just as ARM licenses its processor intellectual property (IP) for development and manufacturing, the OpenPOWER Consortium hopes to license the POWER processor to chip hardware and software developers. This could put a new spin on mainstream processor development, moving away from proprietary vendors toward a more community-developed environment.
Matching the tool to the task
Traditional x86 processors can handle almost any enterprise workload. But every workload has different needs, and IT professionals are realizing that throwing the latest Intel Xeon or AMD Opteron at every computing problem may not be the most effective or efficient solution. Traditional x86 computing is still growing as the mainstream answer for many business tasks, but a new generation of purpose-built SoC processors, along with renewed interest in the RISC processor, promises far more diverse processor choices, allowing an enterprise to deliver the appropriate amount of cost- and energy-efficient computing power where it’s needed.