Feature

The server side of application virtualization

Thanks to the growing use of endpoint virtualization and application virtualization, IT managers must recognize that their server needs could change significantly depending on which virtualization technology they choose to deliver an application.

In nonvirtualized application delivery, a single application is installed on a single physical server, and end users access the application over the LAN. Since that delivery model typically used only about 5% to 10% of the average server’s computing resources, there was little concern about the server’s overall configuration.

As long as the server met the application’s computing requirements, administrators rarely worried about application delivery unless a specific performance problem arose, and those problems typically called for troubleshooting rather than changes to the server’s configuration. For example, an excessive number of users might degrade network performance, and administrators would respond by bonding multiple network adapters to add bandwidth.

The use of endpoint virtualization has radically altered this approach. Virtualization increases server utilization, allowing the physical box to handle many more simultaneous tasks, but it also confronts administrators with new planning challenges. An application delivery server must have enough computing resources to support all of the planned end users or endpoints, and continuous management is required to prevent resource exhaustion and maintain performance. In addition, the virtualized server must provide a level of resilience that will minimize downtime and disruption to users. Virtualization demands a much more difficult balancing act from IT professionals.
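As a rough illustration of that planning exercise, the short Python sketch below sizes a delivery server against a planned user count. The per-session figures and the headroom percentage are assumptions made up for the example; real values should come from pilot measurements.

# Back-of-the-envelope sizing for an application delivery server.
# All per-session figures are illustrative assumptions -- substitute
# measured values from a pilot deployment.

PLANNED_USERS = 250
RAM_PER_SESSION_GB = 0.75    # assumed working set per user session
CPU_PER_SESSION = 0.10       # assumed fraction of one core per session
IOPS_PER_SESSION = 12        # assumed steady-state storage I/O per session
HEADROOM = 0.30              # keep 30% spare capacity to avoid resource exhaustion

def with_headroom(raw, headroom=HEADROOM):
    """Scale a raw requirement up so the server never has to run at 100%."""
    return raw / (1.0 - headroom)

ram_gb = with_headroom(PLANNED_USERS * RAM_PER_SESSION_GB)
cores = with_headroom(PLANNED_USERS * CPU_PER_SESSION)
iops = with_headroom(PLANNED_USERS * IOPS_PER_SESSION)

print(f"RAM needed:   {ram_gb:.0f} GB")
print(f"CPU cores:    {cores:.1f}")
print(f"Storage IOPS: {iops:.0f}")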

Consider application virtualization servers
Application virtualization servers allow users to access a unique instance of a specific application that is actually installed on the server. But application virtualization is highly sensitive to I/O performance. The choice of storage location, for example, can have a huge influence on application virtualization.

“Any time you deploy to a [storage area network (SAN)], the I/O is the first issue you think about,” said Ian Parker, senior Web services administrator for Thomson Reuters, the global information resource company. “A lot of us are looking very closely at flash drives these days.”

Disk I/O performance is important at the storage array, but I/O issues can also translate to the network. For example, an Ethernet-based SAN such as iSCSI or Fibre Channel over Ethernet may cause storage bottlenecks at the LAN, so putting storage on a local disk within individual application servers can potentially ease network congestion. Network I/O can also become a problem with bandwidth-intensive application streaming, in which applications are delivered to the endpoint on demand rather than run entirely from the central server.
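To make the network-side arithmetic concrete, here is a minimal sketch that checks whether a given uplink can absorb a burst of on-demand application streams while leaving headroom for storage traffic on the same LAN. The per-stream bitrate, concurrency and uplink figures are hypothetical.

# Hypothetical check of LAN headroom for on-demand application streaming.
# The figures are illustrative; measure real streaming profiles before sizing.

CONCURRENT_STREAMS = 80    # users launching streamed applications at once
MBIT_PER_STREAM = 40       # assumed peak bitrate while an application streams down
UPLINK_MBIT = 2 * 1000     # e.g. two bonded gigabit links to the delivery server

demand = CONCURRENT_STREAMS * MBIT_PER_STREAM
utilization = demand / UPLINK_MBIT

print(f"Peak streaming demand: {demand} Mbit/s")
print(f"Uplink utilization:    {utilization:.0%}")
if utilization > 0.7:
    print("Warning: little headroom left for iSCSI/FCoE storage traffic on the same LAN")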

Parker said RAM is rarely a significant issue with application virtualization, because modern servers with 64-bit operating systems can easily support hundreds of gigabytes–or even terabytes–of RAM.

Consider desktop instance (VDI-type) servers
Virtual desktop infrastructure (VDI) servers host entire desktop instances on a central server, exchanging only user input and audio/video output with the user, who works on a simple endpoint device such as a “thin client” or “zero client.” Hosting entire desktop instances is far more resource-intensive than application virtualization, so the concerns shift to providing adequate CPU, memory and storage I/O. There is less emphasis on network I/O once the desktop instance is loaded and running.

Local storage can be beneficial for VDI performance, but SANs are the more popular storage platform because they offer a single point of management.

“If you’re going to locate [VDI] on a SAN, bandwidth becomes really important,” Parker said. “Then it’s all about RAM and disk throughput, because you’ll have a certain amount of paging and other activities.”

Disk subsystems must also support increasing performance demands as VDI instances proliferate on the delivery server. For example, Parker finds that disk writes are the dominant storage activity after a VDI instance boots, so details like write penalties for RAID 5 can actually reduce performance of the storage subsystem.
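The RAID 5 write penalty is easy to quantify. This sketch compares the backend disk I/O that a write-heavy VDI workload generates on RAID 5 versus RAID 10, using the standard write penalties of 4 and 2; the workload figures themselves are assumptions chosen only to show the effect.

# Backend IOPS under RAID write penalties.
# RAID 5 turns one host write into 4 disk I/Os; RAID 10 turns it into 2.
# The workload figures below are assumptions for illustration.

HOST_IOPS = 5000      # total I/O generated by the VDI instances after boot
WRITE_RATIO = 0.8     # writes dominate once the desktops are up and running

def backend_iops(host_iops, write_ratio, write_penalty):
    reads = host_iops * (1 - write_ratio)
    writes = host_iops * write_ratio * write_penalty
    return reads + writes

for layout, penalty in (("RAID 10", 2), ("RAID 5", 4)):
    print(f"{layout}: {backend_iops(HOST_IOPS, WRITE_RATIO, penalty):,.0f} backend IOPS")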

Individual desktop instances can also benefit from a larger number of CPU cores on the server. Therefore, selecting server hardware that supports CPUs with more cores can increase the server’s VDI hosting capability.

“Choose larger servers to do that, favoring more cores rather than fewer [or faster] cores,” said Bob Plankers, technology consultant and blogger for The Lone Sysadmin.
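A simple density estimate shows why core count matters more than clock speed here. The vCPU-per-desktop count and the overcommit ratio below are assumptions; the right values depend entirely on the workload.

# Rough VDI density estimate driven by physical core count.
# The overcommit ratio and vCPUs per desktop are illustrative assumptions.

VCPUS_PER_DESKTOP = 2
OVERCOMMIT = 4    # assumed vCPU:physical-core ratio tolerable for office workloads

def desktops_per_host(physical_cores, vcpus_per_desktop=VCPUS_PER_DESKTOP,
                      overcommit=OVERCOMMIT):
    return (physical_cores * overcommit) // vcpus_per_desktop

for cores in (16, 32, 64):
    print(f"{cores} cores -> roughly {desktops_per_host(cores)} hosted desktops")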

Consider the application’s unique resource demands
The application being virtualized may have some influence on the server’s configuration. For example, a medical imaging application designed to handle huge files may have significant memory and storage I/O requirements. And those requirements may be multiplied when the application is virtualized and delivered to users. Tools such as Liquidware Labs Inc.’s Stratusphere can help administrators to determine the actual application resource needs before moving forward with a VDI deployment.
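Purpose-built assessment tools do this kind of profiling at fleet scale; the idea can be sketched in miniature with Python’s psutil library, sampling one application’s CPU, memory and disk writes before deciding how it will behave once virtualized. The process name and sampling window are hypothetical, and this stands in for, rather than reproduces, any vendor tool.

# Minimal sketch of profiling a single application's resource footprint with psutil.
# A real assessment tool samples whole user populations; this watches one process.
import psutil

PROCESS_NAME = "imaging_app.exe"   # hypothetical application to profile
SAMPLES = 60                       # one sample per second for a minute

proc = next(p for p in psutil.process_iter(["name"]) if p.info["name"] == PROCESS_NAME)

peak_cpu = 0.0
peak_rss = 0
start_io = proc.io_counters()
for _ in range(SAMPLES):
    peak_cpu = max(peak_cpu, proc.cpu_percent(interval=1.0))  # CPU% over the last second
    peak_rss = max(peak_rss, proc.memory_info().rss)
end_io = proc.io_counters()

print(f"Peak CPU:             {peak_cpu:.0f}% of one core")
print(f"Peak resident memory: {peak_rss / 2**20:.0f} MiB")
print(f"Bytes written:        {end_io.write_bytes - start_io.write_bytes}")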

Also, consider the growth of other visualization applications, such as computer-aided design graphics and rendering tools. Emerging visual technologies, such as RemoteFX in Windows Server 2008 R2 SP1 and HDX in Citrix Systems Inc.’s XenDesktop 4, allow the use of powerful graphics cards in terminal servers. It’s another step forward in application delivery, but IT administrators need to weigh the implications of such technological advances on their server infrastructure. “What kind of a blade server can actually accept a video card of any significance given the form factor [PCIe], space on the rack, and so on,” Parker said.

Still, for the majority of current business applications that don’t have niche requirements, the virtualized application generally has little (if any) direct effect on the server. “It’s something that we pay attention to, but it doesn’t change how we would buy the server,” Parker said.

Consider clustering and resiliency in application delivery
Servers that deliver mission-critical applications to enterprise users typically include a combination of resiliency techniques to ensure availability. The server itself will provide resiliency features, such as onboard RAID controllers for local disk storage and redundant power supplies. Traditional server clustering, migration tools or more recent developments in redundant virtual machines can protect workloads from unexpected downtime. It’s an even more important consideration with application virtualization, because a single server outage can disrupt many more users.


This was first published in July 2011
