Over the past 10 years, virtualization technologies have allowed IT engineers to consolidate, save money and help
their businesses grow. But just how much of an effect has virtualization had? Any engineer working with modern server technology would be foolish not to use the resources on these machines for business growth and expansion. By placing numerous virtual machines (VMs) on top of a physical host, data centers run more efficiently and have a smaller hardware footprint. A new Hewlett-Packard Co. ProLiant server with an Intel Corp. eight-core processor is capable of running five to six VMs without much trouble. So why hold on to dying hardware when it can be virtualized, mothballed and recycled?
As virtualization gets easier, network engineers are becoming increasingly comfortable with the technology. There is a downside to this ease of use, however. Lately, many companies simply buy a server, place the workloads on a storage area network (SAN), deploy the hypervisor and load the VMs. This approach may work, but administrators who follow it often skip important steps, and those skipped steps can hurt virtual server performance.
Understanding virtualization technology
Generally speaking, there are two types of virtualization technologies: hosted and bare metal. A hosted environment uses a server with a preloaded operating system (OS), such as Windows Server 2008. Once that OS is installed, an administrator loads the virtualization software on top of it. A bare-metal architecture removes the general-purpose OS from the equation and installs the hypervisor, typically a thin Linux/Unix-based kernel, directly on the hardware. VMware Inc. offers both types of virtualization technologies, while Citrix Systems' XenServer is built around a bare-metal hypervisor.
The biggest advantage of a bare-metal design is that there is no extra software layer that VMs must go through to access the underlying hardware resources. However, a bare-metal deployment requires newer hardware, as the onboard processors must be Intel VT or AMD-V ready. This means that administrators looking to upgrade older servers will not be able to use this technology, but the hosted hypervisor is always at their disposal.
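To make that hardware requirement concrete, here is a minimal sketch, not any vendor's tool, that checks a CPU flag string for hardware virtualization support. On Linux, such a string can be read from /proc/cpuinfo; the flag "vmx" indicates Intel VT-x and "svm" indicates AMD-V. The sample flag strings below are illustrative.

```python
def supports_hw_virt(cpu_flags: str) -> bool:
    """Return True if the flag list shows Intel VT-x ("vmx") or AMD-V ("svm")."""
    return bool(set(cpu_flags.split()) & {"vmx", "svm"})

# Sample flag strings (illustrative):
print(supports_hw_virt("fpu sse2 ht vmx"))  # True  -> bare-metal capable
print(supports_hw_virt("fpu sse2 ht"))      # False -> hosted hypervisor only
```

A host that fails this check is limited to a hosted hypervisor, as described above.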
Proper resource allocation is vital to the performance of both the physical host and the VMs. Prior to deployment, it’s imperative to understand what the hardware will be used for. Is the environment running high-end SQL queries non-stop? Or is the company planning on hosting simple applications that a few users will access only occasionally? By understanding the goal, an administrator can roll out an environment capable of handling the load and, more importantly, capable of growing with the infrastructure.
There are three core upgrade paths when it comes to physical hardware:
1. Hard drive – There is very little debate that upgrading a machine’s disks with faster drives will increase VM performance. If a given environment does not use a SAN as a central point for workloads, consider upgrading to multiple faster disks. Smaller IT shops that do not require a centralized storage array rely on the local storage built into their physical hardware. There, upgrading the RAID array with higher-performance drives improves both performance and redundancy.
For larger deployments where a SAN is present, consider the technology on hand. Is the SAN older? Do its drives spin fast enough to allow quick, seamless access to a workload? Often, IT engineers skip replacing a SAN and then wonder why their virtual infrastructure runs slowly despite a new server and new virtualization software. A SAN replacement can be a significant investment, but running a new virtual environment on older SAN technology can be detrimental.
2. CPU – As VMs are loaded onto a physical box, the onboard processor becomes more heavily utilized, and from an IT engineer’s perspective, a faster CPU generally means faster processing. Many physical hosts can have their CPUs upgraded or supplemented: machines often ship with open CPU sockets for expansion.
3. RAM – Upgrading RAM is probably the most cost-effective way of improving a virtual host’s performance, because more memory on the host means more memory that can be allocated to each VM. Almost any server-class machine can handle more RAM than it ships with. Once new RAM is added, an engineer can re-examine how the VMs are using resources and allocate additional memory to a given machine to improve performance.
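The RAM point above comes down to simple arithmetic: subtract the hypervisor's own reservation and the existing VM allocations from the host's total, and what remains is headroom for new VMs or larger allocations. A minimal sketch, with every size below an illustrative assumption:

```python
HOST_RAM_GB = 64           # physical RAM after the upgrade (assumed)
HYPERVISOR_RESERVE_GB = 4  # memory held back for the hypervisor itself (assumed)

vm_allocations_gb = {      # hypothetical per-VM memory assignments
    "sql-01": 16,
    "web-01": 8,
    "web-02": 8,
    "file-01": 4,
}

def remaining_ram_gb(host_gb, reserve_gb, allocations):
    """Return RAM left for new VMs after the hypervisor reserve and allocations."""
    return host_gb - reserve_gb - sum(allocations.values())

headroom = remaining_ram_gb(HOST_RAM_GB, HYPERVISOR_RESERVE_GB, vm_allocations_gb)
print(f"Unallocated RAM: {headroom} GB")  # 64 - 4 - 36 = 24 GB
```

Running this check before and after a memory upgrade makes the re-examination step above a concrete number rather than a guess.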
VM resource utilization
A VM requires resources delivered from the underlying hardware to function; the more resources it is given, the better and faster it will run. This is where appropriate VM sizing is absolutely crucial. Under- or over-allocating resources can be detrimental to the performance of not only that VM, but all the others running on the same host.
“Verifying the appropriate resources to give to a VM is an important ongoing process,” said Tim O’Brien, systems consultant at MTM Technologies Inc. “When working with live machines, an engineer should always monitor how much utilization is happening on that VM. Take the time to dedicate enough resources to a workload and remove resources when they are not being used.”
When an environment supports multiple physical hosts with numerous VMs, resource management becomes even more important. Many environments now have built-in failover capabilities, where surviving hosts take over virtual workloads from failed physical servers. Here, an over-allocated VM can have real trouble failing over to a host that doesn’t have the resources to spare.
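That failover concern can be sketched as a capacity check: for each host, verify that the remaining hosts have enough free RAM to absorb its VMs. The host names and sizes below are hypothetical, and the sketch assumes static (not dynamic) memory allocation:

```python
hosts = {  # host name -> (total RAM in GB, RAM allocated to its VMs), all assumed
    "esx-01": (64, 40),
    "esx-02": (64, 32),
    "esx-03": (64, 28),
}

def can_absorb_failure(hosts, failed):
    """True if the surviving hosts have enough free RAM for the failed host's VMs."""
    displaced = hosts[failed][1]
    free = sum(total - used
               for name, (total, used) in hosts.items()
               if name != failed)
    return free >= displaced

for name in hosts:
    print(name, can_absorb_failure(hosts, name))  # True for each host here
```

An over-allocated VM shrinks the `free` side of this comparison everywhere at once, which is exactly why oversizing hurts failover, not just day-to-day performance.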
“Virtual technology has come a long way in how it deals with resource usage. I always recommend using dynamic memory allocation to more efficiently use resources and improve overall VM density,” O’Brien adds.
Best practices for VM performance
To get the best performance from a VM or physical host, keep the following points in mind:
- Use native virtualization tools. Both XenServer and VMware ship guest tools built on paravirtualization technology: “Xen Tools” or “VMware Tools” allow the VM to make better use of OS resources. For example, VMware provides its own network interface card driver that communicates more efficiently with the underlying hypervisor. These tools come with most virtualization platforms and should be installed as soon as the VM is deployed.
- For administrators running Windows OSes older than Windows Server 2008 or Vista, disk alignment may be a problem. Server 2003 and XP VMs format their virtual disks with a default partition offset that is misaligned with the underlying storage, which degrades VM performance. For details on how to fix this, read this article on Aligning disk partitions to boost virtual machine performance.
- One common tip is to make sure the antivirus software is configured correctly. Allowing unrestricted real-time virus scanning inside a VM risks a severe slowdown of that VM’s OS. Optimizing the antivirus configuration will significantly improve VM performance.
- When deploying a new physical host, make sure that all firmware and BIOS updates are applied. Many times, little tweaks are deployed by manufacturers that have big effects on VMs running on that hardware.
- Never forget to run regular maintenance on the VM environment. Make sure to defragment the virtual disks as necessary and apply any needed updates from both the host OS and the virtualization software.
- Just like a regular OS, VMs function better without visual effects. Disabling these little settings makes a VM operate faster.
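The disk-alignment point above reduces to simple arithmetic: a partition is aligned when its starting byte offset divides evenly into the storage stripe size. Older Windows starts the first partition at sector 63 (a 31.5 KB offset), while Server 2008 and Vista use sector 2048 (a 1 MB offset). A quick illustrative check, assuming 512-byte sectors and a 64 KB stripe:

```python
SECTOR_BYTES = 512  # assumed sector size

def is_aligned(start_sector, stripe_kb=64):
    """True if the partition's byte offset is a multiple of the stripe size."""
    return (start_sector * SECTOR_BYTES) % (stripe_kb * 1024) == 0

print(is_aligned(63))    # False -> legacy Server 2003 / XP default, misaligned
print(is_aligned(2048))  # True  -> 1 MB offset used by Server 2008 / Vista
```

A misaligned partition forces the array to touch two stripes for many single-block reads, which is where the performance degradation described above comes from.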
Virtualization engineers too often overlook the important little details when deploying new environments. Tasks such as monitoring VM resources, checking disk health and performing simple preventative maintenance are easy to skip, and skipping them creates larger headaches down the road. Always take the time to ensure that both the VMs and the physical host are healthy.
About the expert
Bill Kleyman is the director of technology for World Wide Fittings Inc., a global manufacturing firm with locations in China, the UK and across the United States. Bill has more than 10 years of experience in the IT industry and has worked on a variety of breakthrough virtualization and security technology projects. Currently, he manages all IT operations for his company, utilizing various technologies to maintain a competitive edge in the manufacturing industry.