The computing industry is full of concepts that seem unremarkable in isolation, yet prove indispensable when viewed within the larger context of an information system.
Take the graphics processing unit (GPU). Since 2003, the GPU has been a boon to computer gaming, 3-D imaging and other high-performance visualization applications.
Traditionally installed on end-user devices such as laptop and desktop computers, the GPU recently appeared in various data center architectures -- perhaps even yours.
What kind of PU is right for you?
A data center offloads computer processing from end-user devices to a remote central repository. So why not offload some of the offloading with virtual GPUs (vGPUs)? As virtualization technology proliferates, a logical extension is to place a GPU within a virtual machine (VM). The GPU takes some of the computational burden off the central processing unit (CPU).
NVIDIA added a new wrinkle to the vGPU market in 2012 with the introduction of the VGX Platform, which allows multiple VMs within a virtual desktop infrastructure to share a GPU on a remote machine, such as a data center server. This development is analogous to multiple operating environments residing on several VMs that share one physical server host, and the cost savings are similarly significant.
GPUs have hundreds, even thousands, of cores, which allows many computations to run in parallel. For that class of workload, this is more efficient than a CPU, but it comes at the cost of per-core sophistication: the cores within a GPU tend to be simpler and individually much less powerful than their CPU counterparts.
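The parallel model described above can be sketched in a few lines of plain Python. This is purely illustrative and runs serially; the function names are hypothetical, and the point is only to show the GPU's data-parallel pattern, where one small "kernel" is applied independently to every element so that thousands of simple cores could each handle one element at once.

```python
# Illustrative sketch of the data-parallel (GPU-style) model vs. a
# serial (CPU-style) loop. Plain Python executes both serially; the
# difference is in how the work is expressed.

def brighten_kernel(pixel, amount=40):
    """Per-element work: brighten one 8-bit pixel value, clamped to 255."""
    return min(pixel + amount, 255)

def serial_brighten(pixels, amount=40):
    # CPU-style: one powerful core walks the data in order.
    return [brighten_kernel(p, amount) for p in pixels]

def parallel_style_brighten(pixels, amount=40):
    # GPU-style: conceptually, each core runs brighten_kernel on one
    # element. map() expresses that per-element independence, which is
    # what lets a real GPU process the elements concurrently.
    return list(map(brighten_kernel, pixels))

pixels = [0, 100, 200, 250]
print(serial_brighten(pixels))          # [40, 140, 240, 255]
print(parallel_style_brighten(pixels))  # [40, 140, 240, 255]
```

Because each element is processed without reference to its neighbors, the same kernel scales across however many cores are available, which is exactly the property that makes graphics workloads a natural fit for GPU hardware.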
IT organizations need to know when to rely on a virtual GPU architecture in the data center rather than simply enhancing the traditional CPU. High-performance graphics processing is computationally expensive: with the serialized operations inherent to CPUs, it requires beefy machinery with an even beefier CPU presence. The GPU instead handles that repetitive processing in a parallelized fashion.
If an organization requires the data center to support exceedingly powerful general-purpose computation, that IT shop should focus expenditures on adding CPU capacity rather than GPU architecture. For example, a company that employs a cadre of mathematicians to work on high-end encryption and cryptography is less concerned with graphics than with raw computational output.
Other industry verticals, such as the oil and gas sector, are increasingly reliant on real-time visualization. For exploration tasks, geologists in this industry must view underground seismic activity in real time. This graphics-intensive processing plays to the GPU architecture's strengths. Rather than equipping experts in the field with local GPUs, the trend is to place the GPU architecture in a data center, shrinking the devices carried into the field while providing virtual access to the benefits of GPU processing.
About the author:
Brad Casey is an expert on network security with experience in penetration testing, public key infrastructure, VoIP and network packet analysis. He also covers system administration, Active Directory and Windows Server 2008, with interest in Linux virtualization and Wireshark captures. He spent five years in security assessment testing for the U.S. Air Force. Contact him at email@example.com.