Data center consolidation can rely heavily on the power of hardware, but if new servers aren't in your IT shop's near future, virtualization remains an option. The trick is keeping virtual machine sprawl in check.
As virtualization becomes more prominent — and as the lingering recession chips away at IT budgets — the pressure to optimize server consolidation ratios is increasing, said Alex Rosemblat, VKernel product marketing manager.
About this article: This article originally appeared in the February 2012 Virtual Data Center e-zine.
Rosemblat said an overwhelming number of virtual machines (VMs) are overprovisioned with memory and virtual CPUs from the get-go — not because of administrator error, but because application owners often insist on more resources than they require.
Another common misconfiguration is virtual machine memory limits, which are sometimes set and then forgotten. That becomes a problem when administrators trying to fix a performance problem assign the VM more memory, not realizing that the limit is preventing it from actually using that extra capacity.
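The effect of a forgotten limit can be sketched in a few lines. This is a hedged illustration, not VMware's actual resource-management API: the point is simply that a VM's usable memory is capped by the limit, no matter how much is configured.

```python
from typing import Optional

def usable_memory_mb(configured_mb: int, limit_mb: Optional[int]) -> int:
    """Memory a VM can actually use: the configured size, capped by any limit."""
    if limit_mb is None:          # "Unlimited" — the usual default
        return configured_mb
    return min(configured_mb, limit_mb)

# A VM provisioned long ago with 4 GB and a matching 4 GB limit:
assert usable_memory_mb(4096, 4096) == 4096

# An admin "fixes" a performance problem by doubling its memory —
# but the stale limit means the VM still sees only 4 GB:
assert usable_memory_mb(8192, 4096) == 4096

# Raising or clearing the limit is what actually frees the capacity:
assert usable_memory_mb(8192, None) == 8192
```

The second assertion is the trap the article describes: the extra capacity is allocated and paid for, but the limit silently prevents the guest from using it.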
When IT shops first began adopting virtualization, the return on investment was so dramatic that few thought to question whether they could eke more savings out of the environment, Rosemblat said. Take a 100-host environment consolidated down to 20 hosts at a rate of 5:1. “Even with that relatively low density, people were happy with a really great return on investment,” he said.
Fast-forward a couple of years. “People have gotten used to running with only 20 hosts, and costs are creeping up,” he said. Making matters worse is the ease with which new servers are deployed with virtualization — resulting in so-called virtualization sprawl.
Real-world requirements keep data center consolidation in check
IT managers cite very real concerns about uptime that prevent them from driving deeper consolidation ratios.
For example, the University of Plymouth in the UK runs its virtual environment in an active-active configuration between its primary and leased data centers a few miles apart, said Adrian Jane, university infrastructure and operations manager. It strives to run at no greater than 45% utilization, so that if one site were to go down, the other site could take over its entire load with some room to spare.
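The arithmetic behind that 45% ceiling is simple: in an active-active pair, the surviving site must absorb both loads. A minimal sketch, using the article's figures:

```python
# Failover headroom in an active-active pair: if one site fails,
# the survivor carries the sum of both sites' utilization.

def survivor_utilization(site_a_pct: int, site_b_pct: int) -> int:
    """Utilization (in percent) on one site if the other fails."""
    return site_a_pct + site_b_pct

# At the university's 45% ceiling, a total site failure still fits:
assert survivor_utilization(45, 45) == 90   # 10% of capacity to spare

# At 55% — where the university later found itself — failover won't fit:
assert survivor_utilization(55, 55) > 100
```

This is why deeper consolidation and high availability pull in opposite directions: every point of utilization above 50% is a point of load that cannot fail over.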
In addition, the organization leases its equipment, which it replaces wholesale on a four-year cycle. That means that when it comes time to purchase new servers, they must be sized to handle a total site failure as well as four years of growth.
The university went through that server-sizing exercise last year. The team determined it would need a pool of 180 cores to support its workloads and ended up purchasing 384 cores, distributed across 32 two-processor, six-core IBM BladeCenter HS22 blades.
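The numbers above can be checked directly. This sketch uses only the figures from the article; the university's four-year growth assumptions are theirs and are not modeled here.

```python
# Core-pool sizing check, using the figures reported in the article.

blades = 32
sockets_per_blade = 2   # two-processor blades
cores_per_socket = 6    # six-core processors

total_cores = blades * sockets_per_blade * cores_per_socket
assert total_cores == 384            # the pool actually purchased

cores_required = 180                 # the workload assessment
utilization_pct = 100 * cores_required / total_cores
assert round(utilization_pct) == 47  # ~47%, near the 45% active-active ceiling
```

So the apparent overbuy (384 cores for a 180-core workload) lands almost exactly on the failover target before any growth is factored in.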
That may seem like overkill, but Jane hopes that overbuying up front will prevent a resource shortfall at the next refresh. During last year's upgrade, the university was running at 55% capacity, which made it impossible to upgrade its systems by simply failing over to the secondary site. “We had to choose which VMs to take down, and it was very painful,” he said.
Will cloud come to the rescue?
As with other intractable data center consolidation problems, cloud computing is being pitched as a solution to the problem of balancing server consolidation ratios against availability.
The University of Plymouth’s servers won’t be coming off lease for another three years, and perhaps by that point, Jane said, cloud computing will have matured to the point where overprovisioning servers is no longer necessary.
Rather than buying extra capacity, “What I would like is a cloud-based resource topper,” Jane said, to which the university could burst in short- or long-term fashion when extra capacity was needed.
That ecosystem isn’t available yet, Jane said, “but in the next couple of years, I expect the cloud to be mature enough to handle a lot of our services.”