Virtualization has revolutionized the modern data center by allowing multiple workloads to run simultaneously on each physical server -- maximizing the potential utilization of each system in the enterprise. However, to get the very best consolidation from a virtualization program, the computing resources allocated to each VM must be tailored to the demands of each workload. Let's consider a few of the issues involved with VM resources.
Q. What's the problem with resource allocation? Don't VMs receive resources automatically when the machine is created?
Workloads are not created equal. Each application demands a unique amount of CPU cycles (computing power), memory space, network I/O and storage.
While a hypervisor will typically allocate these resources automatically when a virtual machine (VM) starts up, the allocation process is rarely tailored for the individual workload. When resources are underallocated, the workload will run poorly -- if it runs at all. But in most cases, the hypervisor or the IT administrator will simply overallocate resources to the VM.
Overallocating resources doesn't harm the VM itself, but it wastes the very computing resources that you're trying to optimize. Any resources assigned to a VM but not actually needed simply go unused, because no other VM can use them once they're allocated.
The ultimate goal of workload consolidation is to "right size" the computing resources allocated to each individual VM so that each VM has enough resources to handle peak resource demands while still sustaining an acceptable level of performance.
Q. How can I determine the "right" amount of resources for a VM?
Allocating the "right" amount of resources for every virtual machine in the environment demands a bit of investigation using performance-monitoring tools, such as Dell's vOPS or vFoglight. Ideally, you would measure the performance and resources the workload uses in a non-virtual environment, and then compare the performance measured from the virtualized workload. By tracing the workload's performance over time, you can determine peak utilization or resource demands.
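As a minimal sketch of this analysis, the snippet below computes a percentile-based peak from utilization samples exported by a monitoring tool. The sample data and function names are illustrative assumptions, not the output or API of any particular product:

```python
# Sketch: estimate peak demand from exported utilization samples.
# Assumes samples are CPU-utilization percentages collected over time
# by a monitoring tool (all names and data here are illustrative).

def peak_demand(samples, percentile=95):
    """Return the given percentile of utilization -- often a more
    useful 'peak' than the single highest sample, which may be noise."""
    ordered = sorted(samples)
    # nearest-rank index for the requested percentile
    rank = max(0, int(round(percentile / 100 * len(ordered))) - 1)
    return ordered[rank]

cpu_samples = [34, 41, 38, 72, 55, 90, 61, 47, 88, 52]
print(peak_demand(cpu_samples))   # 95th-percentile utilization
print(max(cpu_samples))           # absolute peak, for comparison
```

Sizing to a high percentile rather than the absolute maximum avoids provisioning for a single momentary spike, at the cost of briefly degraded performance during true outliers.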
If the VM's performance is below expectations, you can then adjust any resources that may be lacking. For example, it is easy to add processor cycles or memory space if needed to improve the workload's performance.
And even more interestingly, monitoring tools can also help you determine when a workload is not making full use of allocated resources. For example, you may discover that a certain VM only uses up to 80% of its allocated CPU cycles -- even during peak demand. When you find unused resources, hypervisor tools can allow IT administrators to reduce the excess resources, freeing the unneeded resource for other workloads on the system.
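The right-sizing check described above can be sketched as a simple rule: if peak usage never approaches the allocation, suggest a smaller allocation with some headroom. The 80% threshold and 20% headroom below are illustrative policy choices, not values from any hypervisor tool:

```python
# Sketch: flag over-allocated VMs from monitoring data.
# waste_threshold and headroom are illustrative assumptions.

def suggest_cpu_allocation(allocated_mhz, peak_used_mhz,
                           waste_threshold=0.8, headroom=0.2):
    """If peak usage never reaches waste_threshold of the allocation,
    suggest a smaller allocation: peak demand plus headroom."""
    if peak_used_mhz < allocated_mhz * waste_threshold:
        return int(peak_used_mhz * (1 + headroom))
    return allocated_mhz  # allocation is already about right

print(suggest_cpu_allocation(4000, 2500))  # over-allocated: suggest less
print(suggest_cpu_allocation(4000, 3800))  # well used: keep as-is
```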
Also remember that resource allocation is not a one-time effort. Continuous monitoring and reporting will allow IT staff to find trends in resource utilization that may require periodic resource changes or migration of workloads between servers -- workload balancing. This is basic capacity planning and is also part of many workload monitoring and management tools.
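The trend-spotting part of capacity planning can be illustrated with a least-squares fit over periodic utilization averages -- for example, projecting when a steadily growing workload will cross a rebalancing threshold. The weekly data and the 90% threshold are hypothetical:

```python
# Sketch: fit a trend line over weekly utilization averages to spot
# workloads that will outgrow their allocation (basic capacity planning).
# The data and the 90% threshold are illustrative assumptions.

def fit_trend(values):
    """Least-squares slope and intercept over sample index 0..n-1."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in enumerate(values))
             / sum((x - mean_x) ** 2 for x in range(n)))
    return slope, mean_y - slope * mean_x

weekly_cpu = [52, 55, 59, 61, 66, 70]      # % utilization per week
slope, intercept = fit_trend(weekly_cpu)
weeks_until_90 = (90 - intercept) / slope  # when the trend crosses 90%
print(round(weeks_until_90, 1))
```

A projection like this gives IT staff lead time to rebalance or migrate the workload before performance actually suffers.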
Some hypervisor platforms may be able to handle resource changes dynamically, but other platforms may require you to reboot the VM after each change. It's important to work with your tools in a lab or evaluation environment to understand how resource changes affect the workloads. This will prevent undue disruptions to the workload in an actual production setting.
Q. How important is storage as a VM resource?
Since VM performance is usually focused on CPU and memory issues, storage is often treated as an afterthought in VM management. In reality, storage is a critical and expensive resource that is consumed with each snapshot and multiplied with every backup or disaster recovery effort.
When planning for VM storage, it's important to consider the frequency of snapshots and the amount of time that each snapshot must be kept. For example, the storage requirements for a VM that is protected with frequent snapshots kept for prolonged periods will be much greater than those for a VM that requires only a few periodic snapshots.
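The back-of-the-envelope arithmetic here is simply snapshots retained at steady state times average snapshot size. All of the figures below are illustrative assumptions, not measured values:

```python
# Sketch: rough snapshot-storage estimate from frequency and retention.
# All sizes and schedules here are illustrative assumptions.

def snapshot_storage_gb(snapshots_per_day, retention_days, avg_snapshot_gb):
    """Snapshots retained at steady state times average snapshot size."""
    return snapshots_per_day * retention_days * avg_snapshot_gb

# Aggressively protected VM: hourly snapshots kept for 30 days
print(snapshot_storage_gb(24, 30, 2))
# Low-priority VM: one snapshot a day kept for 7 days
print(snapshot_storage_gb(1, 7, 2))
```

Even with identical snapshot sizes, the aggressive schedule consumes two orders of magnitude more storage -- which is why the protection scheme should be tailored per VM.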
There is no need to protect each VM the same way, so be sure that the data protection scheme is tailored for each VM. This may require you to reduce the snapshot frequency and retention times for low-priority VMs or even protect important VMs more aggressively.
In addition, take advantage of storage features like tiering and deduplication to reduce storage demands. For example, the snapshots for a low-priority VM may go directly to a SATA storage array, while mission-critical snapshots may use Fibre Channel disk for top recovery performance.
Virtual environments can vastly improve resource utilization, but optimizing that utilization will require careful management of VM resources using capable tools that can help administrators watch demands, spot trends over time and allocate new resources with minimal disruption to each workload.