
How many VMs per host is too many?

The ultimate host server welcomes hundreds of virtual machines. But that doesn't mean consolidation should outrank performance.

Can your server handle the number of virtual machines it's hosting?

Hosting virtual machines (VMs) en masse gets easier as the hardware underpinning server virtualization improves. But with memory, CPU and scheduling limitations, how many VMs per host is too many?

We asked three IT pros how many VMs per host they've seen, and how well it worked. While you can conceivably cram more than 500 VMs on one server host, sometimes less is more. Risk, utilization rates and memory factor into the decision.

The point of virtualization isn't just to consolidate as many servers as possible -- the VMs have to actually do useful work. You could share one core among a VM with 1 GB of memory and two other VMs, but there's no point if VM performance suffers. Running more than three VMs per core introduces scheduling overhead, among other issues. That doesn't condemn you to paltry consolidation numbers, however. A high-end four-socket server using 15-core Intel Xeon E7 processors yields 60 available cores; at three VMs per core, it could ideally host 180 VMs. With enough memory and I/O for the workload, that figure is realistic, as one IT pro, formerly with GitHub, pointed out.
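As a rough illustration of that arithmetic, here is a minimal capacity-estimate sketch in Python. The three-VMs-per-core cap is the rule of thumb cited above; the memory figures and the function name are made up for illustration only.

# Rough VM-capacity estimate from the rules of thumb above: cap vCPU
# oversubscription at about three VMs per physical core, and make sure
# memory does not run out first. All figures are illustrative.

def estimate_vm_capacity(sockets, cores_per_socket, ram_gb,
                         vms_per_core=3, ram_per_vm_gb=4):
    total_cores = sockets * cores_per_socket
    cpu_limit = total_cores * vms_per_core   # scheduling-bound ceiling
    ram_limit = ram_gb // ram_per_vm_gb      # memory-bound ceiling
    return min(cpu_limit, ram_limit)

# Four 15-core Xeon E7s = 60 cores; 1 TB of RAM and 4 GB per VM are assumed.
print(estimate_vm_capacity(sockets=4, cores_per_socket=15, ram_gb=1024))
# Prints 180: CPU is the limiting factor here, matching the figure above.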

A lesson in utilization

Ian Kaufman, research systems administrator at UC San Diego, Jacobs School of Engineering: We have an extensive VM infrastructure on eight hosts, each with 256 GB of RAM [random access memory] and fast CPUs. We also have NFS [Network File System] storage on a NetApp array with 10 Gbps connectivity and 256 GB of flash cache. With VMware ESXi 5.x, we run a maximum of 24 VMs on each node, usually working with about 15 VMs per host. We see a scant 3% to 5% CPU utilization, and 7% to 11% RAM utilization.

We could comfortably put 48 VMs or more on a single server host and barely see a dent in capability, but lower utilization rates allow us to spring into new projects easily. We also distribute VMs to facilitate automatic failover in the event that a host goes down. We can patch and update hosts without bringing any VMs down.

The VMs are Web servers for the most part, though we do have some interactive login machines (both Windows and Linux), as well as some MySQL databases. Nothing is especially compute-intensive -- these aren't number-crunching application VMs, for example.

During a hardware upgrade, we were able to put all 125 VMs on two nodes while we migrated to the new equipment. Even with the entire inventory split across two nodes, we barely taxed our VM infrastructure.
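To put that kind of headroom claim in rough numbers, here is a back-of-the-envelope projection that assumes load scales roughly linearly with VM count. The VM counts and utilization percentages are the ones Kaufman quotes; everything else is hypothetical.

# Project per-host utilization if the VM count grows, assuming load
# scales roughly linearly with the number of VMs. Illustrative only.

def projected_utilization(current_pct, current_vms, target_vms):
    return current_pct * target_vms / current_vms

# ~15 VMs per host today at roughly 5% CPU and 11% RAM (worst case quoted above).
for resource, pct in (("CPU", 5.0), ("RAM", 11.0)):
    projected = projected_utilization(pct, current_vms=15, target_vms=48)
    print(f"{resource}: ~{projected:.0f}% at 48 VMs per host")
# CPU lands around 16% and RAM around 35%, consistent with the claim
# that 48 VMs or more per host would barely dent capacity.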

Is the risk worth the reward?


Brad Maltz, office of the CTO at Lumenate, a technical consulting firm: In a virtual desktop infrastructure [VDI], I have seen about 150 to 200 VMs on one server. Those larger deployments mostly lived on quad-socket rackmount servers. VDI supports different consolidation ratios than other workloads do.

I have also seen around 80 VMs on one server, because that company wanted deep consolidation; it was a management issue. Putting this many VMs on a server host worked well, but the real question is risk: Does a business want to risk putting 80 VMs on one host? What happens if the server goes down? Can you afford to lose 80 VMs at once?
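One way to frame Maltz's risk question in numbers is to compare the blast radius of a single host failure at different consolidation ratios. The 240-VM fleet size below is hypothetical.

# "Blast radius" of a single host failure: for a fixed fleet of VMs,
# deeper consolidation means fewer hosts but more VMs offline when one
# host dies. The 240-VM fleet size is hypothetical.

TOTAL_VMS = 240

for vms_per_host in (20, 40, 80):
    hosts = -(-TOTAL_VMS // vms_per_host)            # ceiling division
    pct_lost = 100 * vms_per_host / TOTAL_VMS
    print(f"{hosts:2d} hosts at {vms_per_host} VMs/host -> "
          f"one failure takes out {vms_per_host} VMs ({pct_lost:.0f}% of the fleet)")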

Leave room for the future

Adam Fowler, IT operations manager at Piper Alderman: The most VMs I have seen on a single host is 31, virtualized with Microsoft Hyper-V on Cisco UCS blades that offer 256 GB of RAM and two eight-core Intel Xeon E5-2665 CPUs. Storage is a Fibre Channel-connected EMC SAN [storage area network] with SSD [solid-state drive] caching.

Since we are nowhere near the ceiling on RAM, CPU or I/O utilization, 31 is not the limit on these server hosts; keeping that headroom, however, gives us the flexibility to do what we want later. Some servers see high usage, while others are low. We can move workloads around our six UCS hosts when patching or rebooting.
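A simple way to sanity-check that kind of maintenance headroom is to ask whether the remaining hosts could absorb a drained host's load without crossing a utilization ceiling. This is a minimal sketch with hypothetical per-host figures and an assumed 80% ceiling.

# Can the remaining hosts absorb one drained host's load during patching?
# Assumes a host's load can be spread across the others and that no host
# should exceed the stated utilization ceiling. Figures are hypothetical.

def can_drain_any_one_host(host_util_pct, ceiling_pct=80.0):
    for down, load in enumerate(host_util_pct):
        remaining = [u for i, u in enumerate(host_util_pct) if i != down]
        spare = sum(ceiling_pct - u for u in remaining)
        if load > spare:
            return False
    return True

# Six hosts, some busy and some mostly idle (made-up utilization numbers).
print(can_drain_any_one_host([35, 20, 15, 40, 25, 10]))   # True: enough headroom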

This was last published in June 2014



Join the conversation


How many VMs per host is the most you have ever seen, and how did it work?
650 - LIMITED performance
On one of our VMware clusters, one of the nodes is currently hosting 75 VMs, the majority of which are Windows 2008 servers. The other three nodes in the cluster each host about 40 VMs. Of course, these are HP DL980 servers with 160 logical CPUs and 2 TB of RAM each. None of the four nodes is above 25% utilization for RAM or processor. The only bottleneck we have seen is the SAN performance of lower-tier storage, but our 3PAR and EVA FC SANs perform admirably.
Have you guys thought of server consolidation and energy-efficient workload scheduling? Underutilized servers consume over 50% of their peak energy. Maybe the VMs can be profiled for their daily workload variation and then server consolidation can be performed. This would help reduce computing energy consumption as well as cooling energy. Hope this helps.
Please ask your experts to be clearer in their comments; the article talks about VMs per core, but the experts only mention VMs per host (an apples-and-pears comparison!).
I'm surprised at these numbers. I think the bottleneck is not CPU or RAM but rather disk. On a 10 Gbps iSCSI connection, you can only get a maximum of about 1 GB/sec of throughput.

A physical server with a SATA disk offers about 150 MB/sec. If we want our VMs to perform as well as a dedicated server, then on 10 Gbps iSCSI you can only have about 10 VMs bursting at maximum speed.

Of course, this assumes your storage appliance is using SSDs with a fast RAID configuration.

Based on this information and a 25 MBps transfer rate for each VM, the maximum you can have on a single 10 Gbps iSCSI connection is 50 VMs.
@tamouh
Nobody is going to attach a 75-VM host to a SAN using only a single connection.
