Server consolidation tempered by tech trends, uptime goals

Going overboard with server consolidation is as bad as not doing enough, but IT trends are keeping unchecked growth under control.


When it comes to server consolidation, how much is too much? How much is too little? If you’re using virtualization, that can be a surprisingly difficult question to answer.

In the early days of virtualization, the goal for how many virtual machines (VMs) to put on a server was simple. "Stuff as many VMs onto a server as it can possibly hold," reasoned IT managers, to get maximum bang for their hypervisor software buck.

ABOUT THIS ARTICLE

This feature originally appeared in the February 2012 Virtual Data Center e-zine.

But that was then, when virtualization was relegated to handling low-transaction, lightweight workloads. These days, virtual servers host an increasing array of mission-critical applications that can’t go down, and certainly not for simple reasons like poor capacity planning. To a large extent, that has put the brakes on ridiculous VM-to-server ratios, as punch-drunk IT managers come back to Earth and accept the value of proper resource allocation, uptime and capacity planning.

Dreams of maxed-out VM-to-server ratios also predate July 2011, when VMware Inc., a leading virtualization provider, introduced a new pricing model that encourages IT managers to keep an eye on resource consumption. Whereas VMware used to sell its vSphere suite on a per-processor basis, with no regard to how many VMs ran on a host, the vSphere 5 suite includes a “vRAM” allotment that limits the amount of physical memory that can be used per license by virtual machines. Since in many respects virtual machines are bound by physical memory, this new licensing model limits the number of VMs that can be cost-effectively run on a server.
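To see how the vRAM model changes the math, consider a rough, hypothetical sketch. The per-license entitlement varied by vSphere 5 edition and was revised after launch, and real entitlements pool across all licensed hosts rather than being counted per host as this simplified function does, so treat the figures below as illustrative only.

```python
import math

def vsphere5_licenses_needed(cpu_sockets, configured_vram_gb, vram_per_license_gb):
    """Simplified, per-host estimate of vSphere 5 licenses under the vRAM model.

    Licenses are still sold per physical CPU, but each one also contributes a
    vRAM entitlement, so a memory-dense host can need more licenses than it
    has sockets. (Hypothetical helper; entitlement figures are illustrative.)
    """
    by_socket = cpu_sockets
    by_vram = math.ceil(configured_vram_gb / vram_per_license_gb)
    return max(by_socket, by_vram)

# Hypothetical dual-socket host running 40 VMs at 8 GB each (320 GB of vRAM),
# against an assumed 96 GB-per-license entitlement:
print(vsphere5_licenses_needed(cpu_sockets=2, configured_vram_gb=320,
                               vram_per_license_gb=96))  # -> 4 licenses, not 2
```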

Thus far, VMware is the only virtualization vendor to adopt a resource-based pricing model, and other virtualization vendors cite their commitment to strictly CPU-based pricing as a competitive advantage. But the writing is on the wall: As workloads move to an increasingly virtualized, cloud-based model, expect vendors to charge for their wares according to underlying resource consumption.

Meanwhile, infrastructure vendors continue to introduce ever larger, more virtualization-friendly servers that make it easy to stuff dozens and dozens of virtual machines onto a single host and diminish the need for optimized VM sizing and placement. But high consolidation ratios come at a price, not just in terms of hardware and licenses, but also in terms of uptime. The failure of a highly consolidated, improperly configured server can have dramatic consequences for application availability and uptime.
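The uptime side of that trade-off comes down to simple failure-domain arithmetic. The sketch below uses hypothetical cluster numbers to show how many VMs a single host failure displaces at a given consolidation ratio, and how much headroom the surviving hosts need to absorb them.

```python
def host_failure_impact(total_vms, hosts, failed_hosts=1):
    """Rough failure-domain math for a uniformly loaded cluster: VMs displaced
    by a host failure and the extra load each surviving host must absorb."""
    vms_per_host = total_vms / hosts
    displaced = vms_per_host * failed_hosts
    extra_per_survivor = displaced / (hosts - failed_hosts)
    return displaced, extra_per_survivor

# Hypothetical cluster: 320 VMs across 8 hosts (a 40:1 consolidation ratio)
displaced, extra = host_failure_impact(total_vms=320, hosts=8)
print(f"One failed host displaces {displaced:.0f} VMs; "
      f"each survivor must pick up ~{extra:.1f} more")  # 40 VMs, ~5.7 each
```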

Varied VM-to-host ratios hurt server consolidation
With that as the backdrop, what kinds of VM-to-host ratios are IT administrators working with these days? The answer is, not surprisingly, it depends.

Back in the days of VMware ESX 3.x, a good rule of thumb was four VMs per core, said Joe Sanchez, an IT manager at hosting provider Go Daddy. Given a dual-processor, quad-core server, for example, that resulted in about 32 VMs per host, or a 32:1 consolidation ratio.

These days, most hypervisors can theoretically support higher numbers of VMs per core, but even so, four VMs of one or two virtual CPUs (vCPUs) per core is still a good guide if balanced performance is the goal, Sanchez said.

“The new servers and ESX versions can handle more VMs,” he said, “but the CPU wait time is still affected and can cause performance issues with too many VMs waiting on the same core.”
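Sanchez’s rule of thumb translates into a simple capacity check. The sketch below is a minimal illustration of that guidance, assuming evenly sized VMs; the function name and the vCPU-to-core ratio it reports are additions for illustration, not Go Daddy’s tooling.

```python
def rule_of_thumb_vms(sockets, cores_per_socket, vms_per_core=4, vcpus_per_vm=1):
    """Estimate VMs per host from the ~4 VMs-per-core guideline, and report the
    resulting vCPU oversubscription, since too many vCPUs queued on the same
    cores is what drives up CPU wait (ready) time."""
    cores = sockets * cores_per_socket
    vms = cores * vms_per_core
    vcpu_per_core = (vms * vcpus_per_vm) / cores
    return vms, vcpu_per_core

# The ESX 3.x-era example above: a dual-processor, quad-core host
vms, oversub = rule_of_thumb_vms(sockets=2, cores_per_socket=4)
print(vms, oversub)  # 32 VMs at a 4:1 vCPU-to-core ratio
```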

And if performance isn’t a concern, what about test and development environments? “Then load the cores up until the cows come home,” Sanchez said.

Walz Group, a provider of regulated document management services, follows that model: it keeps VM-to-host ratios very conservative on production systems and runs much higher ratios in environments such as test and development and quality assurance.

“On production systems, we hardly ever run more than 15 VMs per host,” said Bart Falzarano, chief information security officer at the Temecula, Calif., firm, which runs VMware on Cisco dual-processor, four-core UCS B-Series blades and NetApp storage in a certified FlexPod configuration.

Outside production, however, there are no such restrictions, with consolidation ratios often reaching 40:1, said Falzarano. He said he knew of environments at other organizations that drove VM densities much higher—in the neighborhood of 100:1.

Server consolidation, Intel and AMD
To a large extent, today’s increased VM densities are nothing to crow about — they’re largely the result of increased server core counts and not any magic on the part of virtualization providers or practitioners.

Indeed, after reviewing customer usage data over time, virtualization management vendor VKernel found that virtualization shops’ increased VM densities track very closely to increases in server core counts.

“I realized that the great consolidation ratios from virtualization you are seeing in your data center have little to do with more efficient use of CPU and memory,” Bryan Semple, VKernel’s chief marketing officer, said in a blog post. “Rather, the ratios have almost everything to do with Intel’s ability to increase core density per host.”

Current Intel Xeon E7 Westmere processors feature up to 10 processor cores, and the recently released AMD Opteron Interlagos has up to 16. With this kind of horsepower under the hood, it’s possible to approach 100:1 consolidation ratios on a scale-up server without breaking a sweat. Or, as VKernel’s Semple put it: “Please send Paul Otellini, CEO of Intel, a thank you note.”
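Semple’s point is easy to verify with back-of-the-envelope math: hold the VMs-per-core figure constant and the consolidation ratio climbs in lockstep with core count. The host configurations and the 2.5 VMs-per-core figure below are hypothetical round numbers chosen to make the arithmetic obvious.

```python
def consolidation_ratio(sockets, cores_per_socket, vms_per_core):
    """Consolidation ratio when density is driven purely by core count."""
    return sockets * cores_per_socket * vms_per_core

# Hypothetical four-socket hosts at a constant ~2.5 VMs per core:
print(consolidation_ratio(4, 4, 2.5))   # older quad-core parts:  40:1
print(consolidation_ratio(4, 10, 2.5))  # 10-core Xeon E7:       100:1
print(consolidation_ratio(4, 16, 2.5))  # 16-core Interlagos:    160:1
```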

The concepts of clusters and resource pools have further served to diminish the focus on individual servers and their configurations. “We don’t think so much in terms of servers, but in terms of an overall resource pool,” said Adrian Jane, infrastructure and operations manager at the University of Plymouth in the UK. The university sizes a server to be able to host its largest VM, currently an eight-vCPU machine with 24 GB of memory running Microsoft Exchange, and lets VMware Distributed Resource Scheduler handle the rest.
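That sizing approach reduces to one constraint: every host in the pool must be able to hold the largest VM with room to spare, and DRS takes care of placement from there. The check below is a minimal sketch of that idea; the candidate host specification and the 20% headroom figure are assumptions for illustration, not the University of Plymouth’s actual sizing policy.

```python
def host_fits_largest_vm(host_cores, host_ram_gb, vm_vcpus, vm_ram_gb, headroom=0.2):
    """Verify a candidate host spec can run the pool's largest VM while
    reserving some headroom for the hypervisor and neighboring workloads."""
    return host_cores >= vm_vcpus and host_ram_gb * (1 - headroom) >= vm_ram_gb

# Largest VM cited above: 8 vCPUs and 24 GB of memory (Exchange).
# Hypothetical candidate host: two 6-core sockets and 96 GB of RAM.
print(host_fits_largest_vm(host_cores=12, host_ram_gb=96,
                           vm_vcpus=8, vm_ram_gb=24))  # -> True
```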

Let us know what you think about the story; email Alex Barrett, Executive Editor, at abarrett@techtarget.com, or follow @aebarrett on Twitter.
