Conventional wisdom about how to configure servers for virtualization has shifted as users assign more and more memory to virtual machines. In 2010 the average virtual machine was provisioned with more than double the RAM it had just a year prior.
Results of TechTarget’s "Virtualization Decisions 2010 Purchasing Intentions Survey," which surveyed more than 800 IT managers worldwide, reflect the evolution of server hardware and operating systems as well as virtualization’s emergence as a mainstream IT discipline.
In a virtual environment, total system memory should be assigned to individual virtual machines (VMs) in accordance with their memory usage. Underprovisioning memory to a VM can hinder performance, forcing a guest VM to resort to paging, that is, storing and retrieving data from slower secondary storage. As such, hypervisor vendors recommend assigning VMs at least as much physical memory as their guests actually use. Some hypervisors, such as VMware's and Microsoft's, can reclaim a certain amount of unused guest memory, minimizing waste.
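The sizing rule above can be sketched in a few lines. This is a purely illustrative helper, not any vendor's formula: the function names and the 25% headroom figure are assumptions chosen for the example.

```python
# Hypothetical sizing sketch: provision each VM at least its observed
# guest memory usage plus some headroom, so the guest is unlikely to
# fall back on paging to slower storage.

def provisioned_mb(observed_usage_mb: int, headroom: float = 0.25) -> int:
    """Return a RAM assignment (MB) covering observed usage plus headroom."""
    return int(observed_usage_mb * (1 + headroom))

def host_overcommit_ratio(assignments_mb: list[int], host_ram_mb: int) -> float:
    """Sum of VM assignments over physical host RAM. Values above 1.0
    rely on the hypervisor reclaiming unused guest memory."""
    return sum(assignments_mb) / host_ram_mb

usage = [1024, 2048, 4096]  # observed guest usage per VM, MB
assignments = [provisioned_mb(u) for u in usage]
print(assignments)                                        # [1280, 2560, 5120]
print(host_overcommit_ratio(assignments, host_ram_mb=8192))  # 1.09375
```

With these illustrative numbers the host is slightly overcommitted (ratio above 1.0), which is workable only on hypervisors that can reclaim idle guest memory.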
Two, four, six, eight, memory appreciates
In this year’s survey, the most common range for RAM per virtual machine (VM) was 1.5 GB to 2 GB, selected by a plurality of respondents (27.6%). In 2009 the plurality reported assigning far less: 500 MB to 1 GB of RAM (22.1%).
The biggest variable [in virtual machine memory allocation] is Windows Server 2008.
Rick Vanover, IT infrastructure manager, Alliance Data
Virtualization architects haven’t stopped there, though. Nearly 42% of respondents reported assigning more than 2 GB of RAM per VM, compared with only 32% in 2009. Conversely, fewer IT shops now assign less than 1 GB of RAM to a VM: nearly 14% did so in 2010, down from 27% in 2009.
Part of the reason for increased VM memory allocation is the advent of 64-bit operating systems, said Rick Vanover, IT infrastructure manager at Alliance Data. “The biggest variable is Windows Server 2008,” he said. Whereas the 32-bit Windows Server 2003 could address only 4 GB of memory, the 64-bit Windows Server 2008 can address much more. Vanover said he routinely sees documentation for Windows Server 2008 applications with memory provisioning requirements of 8 GB or more.
And some applications require even more memory. Java application servers are one example, said Kent Altena, technical engineer at FBL Insurance Brokerage Inc. in Des Moines, Iowa. “Java being such a memory hog, we give our Sun GlassFish servers between 20 GB and 24 GB of memory,” he said. “If you have a couple of those in your environment, it’s going to substantially jack up your average memory assignment.”
Virtualization administrators aren’t afraid to tweak memory provisioning to suit their individual needs and preferences.
“It really specifically depends on what the VM is going to be doing,” said Bill Bradford, a systems administrator at a Houston-based energy services firm that runs VMware on 12 physical hosts. For example, a VM slated to run an Oracle database may need something on the order of 4 GB of RAM, whereas a simple Web server may only require 1 GB, he said.
It also depends on the OS. “A Windows VM will need more RAM for an equivalent task than a stripped-down Linux install or something like NetBSD/OpenBSD,” Bradford said.
All hands on deck
At the same time, advances in server and processor design have made it possible to get the most out of the RAM that gets assigned to a virtual machine. Memory management technologies such as AMD’s Rapid Virtualization Indexing (RVI) and Intel’s Extended Page Tables (EPT) offload memory management functions from a hypervisor and enable near-native performance, said Chris Wolf, a research vice president at Gartner Inc.
“The availability of RVI and EPT removed a significant bottleneck that had existed for years for memory access,” he said. Before those technologies were available [with VMware ESX 3.5 for AMD’s RVI and vSphere 4.0 for Intel EPT], “you’d sometimes see horrible performance results for multi-threaded apps,” Wolf said. “People would blame I/O but in a lot of cases it was memory bottlenecks.”
Meanwhile, the latest and greatest chips can accommodate more memory, for less, thanks to processors’ increased number of supported dual-inline memory module (DIMM) slots, combined with the ongoing drop in memory prices. For example, a single AMD Opteron 6100 processor can take up to 12 DIMMs, or 48 DIMMs in a four-socket system.
This confluence of need and availability has led virtualization architects to purchase servers with much bigger memory footprints than they did previously.
Alliance Data’s Vanover, for one, prefers to purchase servers with only half of the DIMM slots used and save the rest for “small scale-up.” For example, he has leaned toward buying two-socket DL380s with 128 GB of RAM, in the form of eight 16 GB DIMMs. The 16 GB DIMMs are more expensive than 8 GB or even 4 GB modules, “but that way I don’t use up all my DIMM slots,” he said, adding, “I expect them to get maxed out.”
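The capacity arithmetic above is worth spelling out. The DIMM counts come from the article; the helper function is illustrative, and the assumption that “half the slots” implies a 16-slot chassis is inferred from the eight-DIMM figure rather than stated.

```python
# Worked version of the DIMM capacity math described above.

def installed_ram_gb(populated_dimms: int, dimm_size_gb: int) -> int:
    """Total installed RAM for a given DIMM population."""
    return populated_dimms * dimm_size_gb

# Half-populated two-socket server: eight 16 GB DIMMs.
print(installed_ram_gb(8, 16))   # 128 GB installed today

# Filling the remaining eight slots with identical modules later
# doubles capacity without replacing anything already installed.
print(installed_ram_gb(16, 16))  # 256 GB when maxed out

# An Opteron 6100 supports up to 12 DIMMs per socket, so a
# four-socket system tops out at 48 DIMM slots.
print(12 * 4)                    # 48
```

Leaving slots empty trades a higher per-module price today for a cheap in-place upgrade path later, which is the “small scale-up” Vanover describes.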
Server vendors such as IBM and Cisco Systems Inc. have pushed the amount of memory they can pack into a system even further, with proprietary ASICs that let processors address more memory than their specifications call for. Select blade models of the Cisco Unified Computing System, or UCS, come with an Extended Memory feature that allows up to 384 GB of RAM in a single two-socket system. Likewise, the IBM System x3850 X5 and x3950 X5 offer optional MAX5 technology, which supports up to 96 DIMMs per four-socket server.
“It’s almost been a perfect storm,” said Gartner’s Wolf. “RVI, EPT, better hypervisors, and server form factors with more memory slots.”