New hardware, use cases shape server benchmarking needs

Advancements in server hardware help admins address the latest data-intensive use cases. But this brings questions about the place of industrywide benchmarking standards.

Organizations are becoming more interested in how hardware offerings can directly improve their workloads, but they may have questions about how to use server benchmarking standards.

"Ultimately, [with benchmarking], you're trying to get as close to real-world usage and workload so that you can make your architecture decisions in mind," said Eric Caward, senior manager of emerging tech at Micron Technology Inc.

The data center is shifting to include a mix of software-defined storage, cloud deployments and virtual machines, which changes server components and performance requirements. And just like the infrastructure itself, server benchmarking is becoming more specialized and application-specific.

Organizations such as the Standard Performance Evaluation Corporation (SPEC) offer industry-standard tests, and big vendors have their own in-house benchmarking teams, yet the hardware landscape is changing as more organizations support specialized use cases and high-bandwidth data sets.

A look at server hardware

Graphics processing units (GPUs) have supported consumer-grade graphics use cases for many years, but their use in the data center has expanded drastically. GPU cards' massively parallel architecture is beneficial for high-intensity applications such as machine learning and artificial intelligence (AI). GPUs also help support large pools of unstructured data and analytics processing.

Organizations are also implementing more solid-state drives (SSDs). Traditional server benchmarking focused on the application, which was written for hard drives, said Ryan Meredith, senior manager for storage solutions at Micron Technology.

"Basically, organizations thought they just made applications faster just through brute force," he said. Though, over time, more licensed software companies have developed applications that include code to use SSDs; this ensures that organizations can benefit from using newer storage technology.

In addition to addressing growing storage needs, SSDs and flash-based storage help organizations reduce latency and run applications at full speed. However, older software isn't necessarily optimized for SSDs or dynamic RAM (DRAM) architectures, and adding more hardware doesn't automatically improve performance.

"Five years ago, much of the software that we touched in our lives, they just weren't optimized for SSDs and hyperclass DRAM, and [organizations] just threw more hardware at it, and the applications picked up a little, but it wasn't a lot faster," said Jason Nichols, senior manager of product tech and planning at Micron Technology. "Now, a whole bunch of software has been optimized for those [new storage types]. And that [technology] is getting the most out of the software."

To run applications effectively on SSD-based servers, organizations need software that is built for the hardware and includes code to properly use storage components.
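As a loose illustration of that point, the Python sketch below compares random reads issued one at a time against the same reads issued from a thread pool; software that keeps more requests in flight is the kind that lets an SSD show its advantage. The file name, block size and read count are assumptions, and results on a real system depend heavily on the OS page cache and the drive under test, so treat this as a sketch rather than a measurement tool.

```python
# Hypothetical sketch: software has to issue enough parallel I/O to keep an SSD
# busy. This toy comparison times serial random reads against the same reads
# issued from a thread pool. Create a large scratch file beforehand (several
# GiB) so reads actually hit the drive rather than the page cache.
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

TEST_FILE = "testfile.bin"   # assumed scratch file on the drive under test
BLOCK_SIZE = 4096            # 4 KiB random reads, a common benchmark block size
NUM_READS = 2000

def read_block(offset):
    # Each call opens its own handle so threads don't share a file position.
    with open(TEST_FILE, "rb", buffering=0) as f:
        f.seek(offset)
        return f.read(BLOCK_SIZE)

def run(parallel_workers):
    file_size = os.path.getsize(TEST_FILE)
    offsets = [random.randrange(0, file_size - BLOCK_SIZE) for _ in range(NUM_READS)]
    start = time.perf_counter()
    if parallel_workers == 1:
        for off in offsets:
            read_block(off)
    else:
        with ThreadPoolExecutor(max_workers=parallel_workers) as pool:
            list(pool.map(read_block, offsets))
    elapsed = time.perf_counter() - start
    print(f"{parallel_workers:>2} worker(s): {NUM_READS / elapsed:,.0f} reads/sec")

if __name__ == "__main__":
    for workers in (1, 8, 32):
        run(workers)
```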

There will also be a future need for persistent or storage-class memory to address greater data access and support high-volume workloads, predicted Scott Sinclair, senior analyst at Enterprise Strategy Group.

"The next logical step is eliminating all the interconnect latency between the storage device and memory, the persistent memory and the processing," he said.

Another way to reduce latency and gain quicker data access is hyper-converged infrastructure (HCI). Benefits of HCI include easier resource provisioning, management and infrastructure monitoring. The bundled offering also brings efficiency gains and tighter bottom-of-stack integration, and its clusters can provide faster application processing speeds.

Server benchmarking for new technologies

With new storage hardware, converged infrastructure and virtualization, admins might wonder how to benchmark and discern what new tech is right for their organizations.

Vendors or third-party research groups provide benchmarking results, which give a straightforward look at hardware processing capabilities, performance statistics and how the architecture scales over time. If a specific use case or piece of niche hardware isn't readily benchmarked, organizations can -- and should -- conduct their own benchmarks.
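For organizations that do run their own benchmarks, the harness can start out very simple: replay a representative operation and record per-request latencies. The Python sketch below assumes the workload can be expressed as a callable; sample_workload, the iteration count and the output file name are all placeholders.

```python
# A minimal in-house benchmark harness sketch: run a representative unit of
# work repeatedly and save per-request latencies for later analysis.
import json
import time

def sample_workload():
    # Placeholder for a representative operation: a database query, an API
    # call, a file parse, etc. This stand-in just burns a little CPU.
    sum(i * i for i in range(10_000))

def run_benchmark(workload, iterations=1000):
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        workload()
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    results = run_benchmark(sample_workload)
    # Persist the raw samples so different metrics can be computed afterward.
    with open("latencies.json", "w") as f:
        json.dump(results, f)
    print(f"Recorded {len(results)} samples; total wall time "
          f"{sum(results):.2f}s")
```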

Admins should also test asset utilization. This may require purchasing extra hardware or software, and admins can work with vendors to enable these metrics and look at how well the hardware supports their workloads, said Chris Hinkle, CTO at TRG Datacenters.

"Servers have a nonlinear response from an idle speed to a fully loaded CPU; there is a significant baseload. The largest efficiency gains both on literal resource efficiency and capital outlay efficiency live in how well you use the assets," he said.

However, vendors are seeing new types of use cases -- such as big data, analytics, machine learning and AI -- that admins want to test beyond the SPEC or PassMark standards to figure out how server hardware can specifically benefit their organization's needs.

"There's a number of different benchmarks that are out there that can be used. And at least from the conversations I've had, there's so much specificity in applications, especially the high-value workloads that new companies are either developing [benchmarks] themselves are deploying for their specific industry, that it is very difficult to walk in with a standard," Sinclair said.

Along with these different use cases, admins can choose from a variety of storage and processing technologies to fit their workload needs, but these options differ in features, application experience and total cost of management.

Admins should decide which metrics are most important to look at in benchmark reports during hardware evaluation, such as average and peak response times, uptime, thread counts and requests per second. This helps admins find the right balance of compute resources for their workloads, whether they test in-house or outsource everything.
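Once latency samples are collected -- for example, with the harness sketched earlier -- turning them into those report metrics is straightforward. The Python sketch below computes average and peak response times and requests per second; the input file name is an assumption carried over from the earlier sketch.

```python
# Turn raw latency samples into common benchmark report metrics.
import json
import statistics

with open("latencies.json") as f:
    latencies = json.load(f)  # per-request latencies in seconds

avg_ms = statistics.mean(latencies) * 1000
peak_ms = max(latencies) * 1000
# Requests per second, assuming the requests were issued back to back.
rps = len(latencies) / sum(latencies)

print(f"Average response time: {avg_ms:.2f} ms")
print(f"Peak response time:    {peak_ms:.2f} ms")
print(f"Requests per second:   {rps:.1f}")
```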

The hardware type will also determine how admins run tests. With HCI, for example, admins must test the entire technology stack and account for all possible data inputs. This includes hardware, software and networking connections, as well as any off-premises resources.

Even as organizations seek specialized hardware tests and use-case-specific benchmarks, there is still a need for currently available testing methods. Access to benchmarking results helps admins narrow down potential purchase options, keep up with server technology advancements and calibrate their data center infrastructure.
