Simple tactics can optimize memory paging or eliminate it entirely to improve virtual machine performance.
A virtual machine (VM) resides within the server's physical memory while it runs. As consolidation crowds more VMs onto the same physical host, memory plays a heightened role in hardware and overall server performance, especially when it comes to memory paging.
Paging uses hard drive space to supplement memory on a computer, copying unused sections (pages) of memory onto a disk file (the page file or swap file). The OS can swap pages between the disk file and memory as needed.
Memory paging helps keep a system from exhausting memory and crashing, but reliance on local hard drives can impair performance. Memory operates far faster than disk reads and writes, so every time a page is swapped, the system waits for the disk to catch up.
Simple tactics to improve paging
Don't rely on paging for performance-sensitive applications, and avoid forcing multiple VMs to vie for chronically limited memory on the same system.
Use more than one spindle. Place the page file on a local physical disk separate from the OS or workloads. This reduces contention for the same physical disk. If you need to work with multiple paging files, use multiple disks or a disk array to avoid putting multiple page files on the same physical disk. Organizations that use storage virtualization must verify the physical disks that are associated with each logical partition. There may be plenty of space on the D:, E: and F: partitions, but those could all be on the same physical hard drive.
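The partition check above can be sketched in a few lines. This is an illustrative example, not a storage-management tool: the mapping of logical partitions to physical disks is hard-coded here, whereas in practice it would come from an OS or storage-array query.

```python
from collections import Counter

def shared_spindles(page_file_partitions, partition_to_disk):
    """Return the physical disks that would hold more than one page file."""
    disks = Counter(partition_to_disk[p] for p in page_file_partitions)
    return {disk for disk, count in disks.items() if count > 1}

# Example: D:, E: and F: look like separate volumes but share one drive.
mapping = {"C:": "Disk0", "D:": "Disk1", "E:": "Disk1", "F:": "Disk1"}
print(shared_spindles(["D:", "E:", "F:"], mapping))  # {'Disk1'}
```

Any disk the function returns is a contention point: spreading those page files across separate spindles removes it.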
Avoid fault-tolerant disk storage except for high-availability applications. Writing page files to a redundant array of inexpensive disks (RAID) can slightly lower performance because data may need to be duplicated or written to multiple locations. Placing the page file on a disk that is not protected by fault tolerance improves performance, but a disk failure would then leave the page file inaccessible and could crash the system.
Get sizing right, ease memory paging
You can eliminate memory paging entirely if the system offers ample physical memory to support peak workloads.
Before virtualization, server administrators simply tallied the OS and application memory demands and chose a server that exceeded them. For virtual machines, administrators size memory for each VM using the same basic approach. By tracking memory usage on a physical machine with basic performance tools such as Windows performance counters, they can determine average and peak memory use, then allocate memory to VMs accordingly. Automated tools such as Dynamic Memory, added in Windows Server 2008 R2 SP1, help the hypervisor optimize the memory allocated to each VM.
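The sizing approach above can be sketched as follows. This is a simplified illustration, not a vendor tool: the samples and the 20 percent headroom figure are assumptions for the example.

```python
def recommend_memory_mb(samples_mb, headroom=0.20):
    """Recommend an allocation covering peak usage plus a safety margin."""
    average = sum(samples_mb) / len(samples_mb)
    peak = max(samples_mb)
    return average, peak, int(peak * (1 + headroom))

# Hypothetical hourly working-set samples (MB) from a performance counter.
avg, peak, alloc = recommend_memory_mb([900, 1100, 1450, 1300, 1600])
print(f"avg={avg:.0f} MB, peak={peak} MB, allocate={alloc} MB")
```

Sizing to the peak rather than the average is what lets a VM ride out demand spikes without paging.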
Memory use is not static, so continue monitoring performance. Windows Server 2008 R2 and other platforms provide several performance counters that track memory performance. For example, the Memory-Pages Input/Sec counter should average less than 10 over a one-hour period. A higher average signals more paging than expected, which degrades performance.
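The rule of thumb above is easy to automate. A minimal sketch, assuming the counter has already been sampled into a list over the hour:

```python
def paging_pressure(pages_input_per_sec, threshold=10):
    """Average the sampled counter and flag averages above the threshold."""
    avg = sum(pages_input_per_sec) / len(pages_input_per_sec)
    return avg, avg > threshold

# Hypothetical one-hour run of Memory-Pages Input/Sec samples.
samples = [2, 4, 3, 18, 25, 5, 1, 2]
avg, too_high = paging_pressure(samples)
print(f"average={avg:.1f}, excessive paging={too_high}")
```

Note that brief spikes (18 and 25 here) are acceptable; it is the sustained hourly average that matters.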
Also consider tracking Memory-Standby Cache Reserve Bytes and Free & Zero Page List Bytes counters. When you add them together, the sum should exceed 200 MB when the machine has 1 GB of memory, and reach about 300 MB when the machine has 2 GB of memory. If the total is smaller than expected, there may be inadequate free memory to accommodate peak or unexpected demands.
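The free-memory check above can be expressed as a small function. The thresholds follow the article's rule of thumb for 1 GB and 2 GB machines only; larger configurations would need their own baselines.

```python
def free_memory_ok(standby_cache_mb, free_zero_mb, total_ram_gb):
    """Check the combined standby-cache and free/zero-page counters
    against the expected minimum for the machine's RAM size."""
    expected_mb = 200 if total_ram_gb <= 1 else 300
    return (standby_cache_mb + free_zero_mb) >= expected_mb

print(free_memory_ok(150, 80, total_ram_gb=1))  # True: 230 MB >= 200 MB
print(free_memory_ok(120, 60, total_ram_gb=2))  # False: 180 MB < 300 MB
```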
With VMs, try workload balancing -- moving one or more workloads to other servers with more available resources -- before adding more physical memory to the system.
When paging is low and ample free memory is available, administrators can disable paging by turning the swap file off through the OS. Few do, however, because a memory shortage with paging disabled will likely crash the system. IT professionals usually prefer to leave paging enabled, albeit unused.
This was first published in October 2013