
High server throughput leads to better utilization, lower costs

Higher throughput means fewer servers to handle a given workload and potentially lower licensing costs as well.

To maximize an investment in compute, networking and storage systems, IT buyers should pay close attention to the characteristics of the applications they intend to run, with a goal of maximizing server throughput.

The best return on investment comes from pushing information systems to maximum capacity. Better throughput -- the amount of work a system completes per unit of time -- is achieved with balanced performance across a given system: data flows smoothly between processors, memory and I/O subsystems, maximizing server utilization. The smallest possible number of servers then completes a given workload, which means fewer servers and fewer software licenses to purchase.
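The arithmetic behind this is simple: raising per-server throughput directly shrinks the number of servers (and licenses) needed to sustain a workload. A minimal sketch, using invented workload and throughput figures purely for illustration:

```python
import math

def servers_needed(workload_rate, per_server_throughput):
    """Number of servers required to sustain a given workload rate.

    Both arguments share any unit of work per second
    (requests, transactions, records); the figures below are hypothetical.
    """
    return math.ceil(workload_rate / per_server_throughput)

# A hypothetical 1,000,000 request/s workload:
baseline = servers_needed(1_000_000, 50_000)   # 20 servers
improved = servers_needed(1_000_000, 80_000)   # 13 servers
print(baseline, improved)
```

A 60% throughput gain per server cuts the fleet from 20 machines to 13, and the license count falls with it.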

Emerging server configurations reflect the ebb and flow of application workloads. For certain applications, such as big data processing, general purpose servers simply can't compete.

Channeling data

Traditionally, server designs take data from an I/O subsystem and place it in memory, where the CPU can process it fastest. To ramp up server throughput, hardware vendors have started to design systems that feed data to the CPU through special channels, such as the IBM Data Engine for NoSQL - Power Systems Edition.

IBM built the Coherent Accelerator Processor Interface (CAPI) pathway into its POWER8 microprocessor architecture. CAPI's high-speed channel interface is accessible to I/O devices and other CPU types. Using CAPI, solid-state disks can, for example, communicate directly with the POWER8 CPU. CAPI eliminates the movement of large volumes of data through the memory subsystem, as happens in other server hardware setups, and prevents the memory and I/O subsystems from swapping data in and out -- data goes directly from storage to the CPU for processing. Speeding up processing this way significantly increases workload throughput per server.

The workloads that most benefit from this hardware setup are high-performance key-value store (KVS) nonrelational databases. An IBM Data Engine for NoSQL - Power Systems Edition can replace an environment of 24 x86 servers running the same workload while using 12 times less space and energy. The improvement in server utilization brings down cost per user by a factor of 3.2.
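KVS workloads like these boil down to enormous rates of simple get/put operations against a nonrelational store. A minimal in-memory sketch of that access pattern in Python (the class and method names are illustrative, not any IBM API):

```python
class KeyValueStore:
    """Minimal in-memory key-value store of the kind such workloads stress."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # Store or overwrite a value under a key.
        self._data[key] = value

    def get(self, key, default=None):
        # Look up a key; return default if absent.
        return self._data.get(key, default)

    def delete(self, key):
        # Remove a key if present; ignore if not.
        self._data.pop(key, None)

kvs = KeyValueStore()
kvs.put("user:42", {"name": "Ada"})
print(kvs.get("user:42"))
```

At scale, the bottleneck is moving keys and values between storage and the CPU -- exactly the path CAPI shortens.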

Big data, Hadoop, machine learning and bioinformatics applications benefit from a field programmable gate array (FPGA) accelerator paired with a POWER8 processor. Programmable chip maker Xilinx exploits the POWER CAPI interface for these applications via FPGA products such as a KVS acceleration application and an OpenPOWER CAPI acceleration solution for big data processing from Alpha Data. Both products process workloads far faster than general purpose processors alone.

Intel has also integrated an FPGA with its Xeon chip, reporting a 20 times performance boost for its x86 server microprocessors.

A CPU for every workload

Combining multiple CPU types within the same system architecture promises to match each type of work to the processor best suited for it. Data centers can categorize workloads as serial, parallel or compute-intensive, all of which run on x86, POWER, System z and other traditional processors. In some cases, however, other processor types handle elements of a given workload more efficiently.

Many enterprises move data from centralized server environments to distributed servers or data warehouses for faster processing, extracting, transforming and loading (ETL) it onto target data warehouse systems. Specialized FPGAs, such as those used in the VelociData high-speed data streaming appliance, feed data to back-end x86 processors. Because data flows directly to waiting CPUs, the application does not have to index, lock or otherwise manage it, saving processing time and speeding up results. This is an example of FPGAs and x86 processors working in unison -- the higher server throughput accelerates the ETL process.
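The extract, transform, load pipeline described above can be sketched as three stages; the record format and the transformation here are invented for illustration:

```python
def extract(rows):
    """Extract: pull raw records from a source system, dropping blanks."""
    return [row.strip() for row in rows if row.strip()]

def transform(records):
    """Transform: parse and normalize each record for the warehouse schema."""
    out = []
    for rec in records:
        name, amount = rec.split(",")
        out.append({"name": name.lower(), "amount": float(amount)})
    return out

def load(records, warehouse):
    """Load: append transformed records to the target store."""
    warehouse.extend(records)
    return len(records)

warehouse = []
raw = ["Alice,100.0", "  Bob,250.5  ", ""]
loaded = load(transform(extract(raw)), warehouse)
print(loaded, warehouse)
```

An FPGA front end, as in the appliance described above, takes over the extract and transform stages in hardware, streaming ready-to-load records to the x86 CPUs.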

For applications with highly parallel operations, IBM's Linux-based S824L server makes use of NVIDIA GPUs. This hardware best suits Java, big data and technical computing workloads, and it greatly accelerates server throughput for these parallel-processing applications compared with general purpose processors.
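The serial-versus-parallel distinction that determines whether a GPU helps can be illustrated with a simple data-parallel map. The worker function below is a stand-in for any independent per-element computation; threads merely illustrate the pattern that a GPU applies across thousands of cores:

```python
from concurrent.futures import ThreadPoolExecutor

def score(x):
    """Stand-in for an independent per-element computation."""
    return x * x + 1

data = list(range(8))

# Serial: one element at a time.
serial = [score(x) for x in data]

# Parallel: independent elements processed concurrently.
# Because no element depends on another, the work splits cleanly.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(score, data))

print(serial == parallel)
```

Serial workloads such as email, by contrast, carry dependencies between steps that leave most accelerator cores idle.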

The applications that exploit these new accelerator architectures need to process data rapidly, and sometimes in parallel. Serial applications, such as email and messaging, will see little benefit from an accelerator-based server architecture. The benefits in server throughput and utilization will be felt by users, such as data scientists, who want real-time access to query results.

Enterprises can save big money and time by running workloads on servers best suited to execute those workloads. So choose application hardware carefully. The goal should be to push server utilization for maximum capacity at the highest throughput possible.


This was last published in December 2015
