
On-server GPU additions help with big data, VDI

GPUs offload the burden of demanding computational tasks from central processors. Integrate GPUs into enterprise servers to handle compute-intensive workloads.

Your server CPUs have enough work to do. Leave the complicated mathematical tasks to specialized graphics processing units.

Traditional file and application servers are facing a new generation of computation-intensive workloads due to scientific computing, engineering modeling and an acute need for improved graphics performance for virtual desktop instances. These workloads rely on complex mathematical tasks and can easily overwhelm the latest standard processors.

Enterprise-class servers are turning to specialized graphics processing units (GPUs) to offload the demanding computational tasks needed to process big data, model new engineering designs in real time and render vast numbers of virtual desktops from the fewest physical server hosts.

Justifying a server GPU

Normally, graphics tasks (three-dimensional transforms, drawing, rendering and texturing) are handled through the system processor using software emulation. This demands significant processor time, which impacts the workload's performance.

A GPU gives the system a dedicated processor that handles graphics and math commands in hardware, while the remainder of the workload runs through the main processor. Because the graphics processing unit executes these instructions significantly faster than the main processor, the result is a dramatic improvement in workload performance.
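To make the offload concrete, the sketch below moves a simple math-heavy loop onto the GPU using NVIDIA's CUDA runtime, leaving the main processor free for the rest of the workload. It is a minimal illustration, not production code; the kernel, array size and values are arbitrary placeholders.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative kernel: each GPU thread computes one element of
    // y = a*x + y in hardware instead of tying up the main processor.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;              // 1M elements (illustrative size)
        const size_t bytes = n * sizeof(float);

        float *x, *y;
        cudaMallocManaged(&x, bytes);       // unified memory keeps the sketch short
        cudaMallocManaged(&y, bytes);
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Launch enough 256-thread blocks to cover all n elements on the GPU.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 3.0f, x, y);
        cudaDeviceSynchronize();            // wait for the GPU to finish

        printf("y[0] = %f (expected 5.0)\n", y[0]);
        cudaFree(x);
        cudaFree(y);
        return 0;
    }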

IT organizations that serve businesses in molecular dynamics, quantum chemistry, defense, math and physics, computational finance, structural design and electronic modeling will see substantial workload performance benefits.

Server GPUs, however, are not yet standard features for most enterprise-class models, so the functionality must be added at the time of purchase or later as a server upgrade. This has several important implications.

An enterprise-class server GPU can raise the system price by several thousand dollars, so few data centers will invest in a full fleet of GPU-enhanced servers. That uneven deployment places restrictions on workload balancing and virtual machine migrations.

Adding GPU functionality to a server requires a clear understanding of each workload's needs, and a benchmark assessment of the GPU's performance benefits. There is no benefit in adding a GPU to an everyday file server -- which doesn't use math or graphics instructions -- but adding a GPU to a virtual desktop infrastructure server might double or triple the number of VDI instances that the server practically supports.
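That benchmark assessment can start as simply as timing the same stand-in routine on the CPU and on the GPU. The sketch below uses standard C++ clocks and CUDA event timers; the routine, sizes and names are assumptions for illustration, and a real assessment would time the actual application and account for data transfer costs as well.

    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <cuda_runtime.h>

    // Stand-in for a math-heavy workload, run once per element on the GPU...
    __global__ void gpu_work(int n, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = sinf(y[i]) * cosf(y[i]);
    }

    // ...and the equivalent loop on the CPU for comparison.
    void cpu_work(int n, float *y)
    {
        for (int i = 0; i < n; ++i)
            y[i] = std::sin(y[i]) * std::cos(y[i]);
    }

    int main()
    {
        const int n = 1 << 24;
        float *h = new float[n];
        for (int i = 0; i < n; ++i) h[i] = 0.5f;

        float *d;
        cudaMalloc(&d, n * sizeof(float));
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

        // Time the CPU path with a standard C++ clock.
        auto t0 = std::chrono::steady_clock::now();
        cpu_work(n, h);
        auto t1 = std::chrono::steady_clock::now();
        double cpu_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();

        // Time the GPU path with CUDA events (kernel time only).
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        gpu_work<<<(n + 255) / 256, 256>>>(n, d);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float gpu_ms = 0.0f;
        cudaEventElapsedTime(&gpu_ms, start, stop);

        printf("CPU: %.1f ms   GPU: %.1f ms\n", cpu_ms, gpu_ms);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d);
        delete[] h;
        return 0;
    }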

There is no benefit to mixing GPU-intensive workloads with normal workloads on the same server; a server fitted with a GPU will likely host only math- and graphics-intensive workloads. At the same time, migrating one of these workloads from a GPU-equipped server to a non-GPU system will severely degrade performance, because the workload would fall back to software emulation, passing graphics and math tasks through the main processor.
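One way a workload can cope with a mixed fleet is a startup check: run the hardware-accelerated path if a GPU is present, otherwise fall back to the slower CPU path. The sketch below shows that check with the CUDA runtime; kernel_path and cpu_path are hypothetical stand-ins for an application's two implementations.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Hypothetical GPU implementation of the workload's math-heavy step.
    __global__ void kernel_path()
    {
        /* hardware-accelerated work would go here */
    }

    // Hypothetical CPU (software) implementation of the same step.
    void cpu_path()
    {
        /* much slower emulated work would go here */
    }

    int main()
    {
        int devices = 0;
        cudaError_t err = cudaGetDeviceCount(&devices);

        if (err == cudaSuccess && devices > 0) {
            printf("GPU found: running hardware-accelerated path\n");
            kernel_path<<<1, 1>>>();
            cudaDeviceSynchronize();
        } else {
            // Landed on a non-GPU host, e.g. after a VM migration:
            // the work falls back to the much slower CPU path.
            printf("No GPU: falling back to CPU path\n");
            cpu_path();
        }
        return 0;
    }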

Deploying servers with differing hardware capabilities and limits on workload distribution and migration runs counter to the goals of organizations seeking to establish a uniform, flexible white-box data center infrastructure.

GPU adapters

The most popular approach to server-side GPU deployment is the addition of an enterprise-class graphics card such as NVIDIA's Tesla K40, AMD's FirePro S9000 or Intel's Xeon Phi coprocessor 7120P. Devices such as the K40 are built for standard PCI Express (PCIe) x16 expansion slots, so they are easy to plug into existing servers. In addition, an expansion card approach allows the IT team to replace or upgrade GPUs easily. An enterprise-class GPU card has no display connector, since it's intended for use in a server.

Enterprise-class server GPU cards typically carry a large amount of high-performance memory, and the GPU itself has a huge number of individual cores. For example, the Tesla K40 includes 12 GB of GDDR5 memory, and NVIDIA's GK110B GPU chip in the K40 provides 2,880 individual cores that process graphics instructions in parallel to boost performance for demanding applications or across multiple workloads.
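Administrators can confirm what an installed card actually provides with a simple device query, as in the sketch below. The CUDA runtime reports onboard memory and the number of streaming multiprocessors (each containing many individual cores) rather than a flat core count.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Enumerate the installed GPUs and report the properties discussed
    // above: onboard memory and the amount of parallel hardware.
    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);

        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s\n", i, prop.name);
            printf("  Onboard memory: %.1f GB\n",
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
            printf("  Streaming multiprocessors: %d\n", prop.multiProcessorCount);
            printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        }
        return 0;
    }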

GPU cards demand a lot from their host server. Space is the first limiting factor -- double-width cards like the K40 occupy two PCIe slot spaces. This is a problem in tight 1U systems with only one or two PCIe slots available; it might be impossible to install a GPU card alongside a multi-port network adapter, Fibre Channel storage adapter or other device.

GPU cards can also demand 230 to 300 additional watts of power distributed through one or two separate PCIe power cables. This means the server's power supply must have ample spare capacity, which goes against the conventional wisdom of minimal power overhead to improve energy efficiency.
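To check a card's demand against the power supply's spare capacity, administrators can read the board power limit and current draw through NVIDIA's NVML library, the same interface nvidia-smi uses. The sketch below queries the first GPU in the system; the build command in the comment is an assumption and will vary by toolkit installation.

    #include <cstdio>
    #include <nvml.h>

    // Reads the board power limit and current draw so the card's demand
    // can be compared with the server power supply's spare capacity.
    // Illustrative build command: nvcc power_check.cu -lnvidia-ml
    int main()
    {
        if (nvmlInit() != NVML_SUCCESS) {
            printf("NVML initialization failed\n");
            return 1;
        }

        nvmlDevice_t dev;
        nvmlDeviceGetHandleByIndex(0, &dev);                 // first GPU in the system

        unsigned int limit_mw = 0, usage_mw = 0;
        nvmlDeviceGetPowerManagementLimit(dev, &limit_mw);   // board limit, milliwatts
        nvmlDeviceGetPowerUsage(dev, &usage_mw);             // current draw, milliwatts

        printf("Board power limit: %u W\n", limit_mw / 1000);
        printf("Current power draw: %u W\n", usage_mw / 1000);

        nvmlShutdown();
        return 0;
    }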

CPU and GPU integration

Although expansion cards are the most popular and well-established means of integrating GPU functionality into enterprise servers, some next-generation processors include GPU chips directly alongside central processing unit (CPU) chips in the same package -- sometimes called GPU-on-CPU or GPU for general-purpose computing. GPU integration is a convenient way to add modest graphics processing to servers on a broad scale without multiplying costs or power consumption.

AMD kicked off this trend with its Fusion processor line, and Intel has implemented graphics capabilities in its Westmere, Sandy Bridge, Ivy Bridge and Haswell architectures for Xeon E3-1220 through E3-1286 processors (such as the Xeon E3-1286 v3 with Intel HD Graphics P4700).

The principal advantage of this integrated approach is simplicity. Graphics capabilities are available in the server without adding power-hungry expansion devices. This, however, is generally the only advantage.

Integrated GPUs have relatively few cores, which severely limits the performance boost. And while dedicated GPU cards sport the fastest memory, integrated GPUs must share everyday DDR3 memory with the rest of the system.

In most cases, servers with integrated GPUs provide a useful boost for visualization workloads, but they are generally inadequate for the most demanding scientific computing or modeling tasks.

Deployment is also an issue. New GPU-on-CPU processors require an entirely new server motherboard, chipset and BIOS. It will take a server technology refresh cycle to introduce GPU-capable processors into the data center.

This was last published in September 2014
