VDI offers centralized management benefits for IT and flexibility for end users, but data center managers need to prepare for the network infrastructure burdens associated with server-hosted virtual desktops.
“After disk, the most important consideration in a [virtual desktop infrastructure] deployment is bandwidth,” said John Drew, an IT director for Greenpages Technology Solutions, a national consulting firm in Kittery, Maine. “If you don’t have enough, it’s going to be very slow, especially if you’re going to do it from a colo – then you have to think about bandwidth, latency and get into routing protocols.”
This proved true for Wellesley College, which recently realized that a move from physical desktops to virtual desktops taxes network infrastructure and requires network upgrades.
About 150 desktops at the college’s computer lab in Wellesley, Mass., were virtualized using VMware View for virtual desktop infrastructure (VDI) and some of the college’s applications were virtualized using VMware ThinApp. This gives Wellesley’s IT department a way to manage those desktops centrally and gives students a way to access the college’s applications outside of the computer lab.
The first hurdle in moving to a virtual desktop environment was collaboration between desktop and data center teams. Over the next couple of years, a restructuring of the data center network will follow, according to systems and networks manager Leonor Martins.
VDI network challenges, resolutions
The network currently runs on a Gigabit Ethernet backbone, but more bandwidth, likely 10 Gigabit Ethernet (10 GbE), will be necessary to support VDI.
“We definitely noticed right away with VDI that connecting to YouTube or other videos wasn’t the same as sitting at a local desktop,” Martins said.
The cost of the 10 GbE upgrade has not yet been determined, but a network overhaul is being planned to boost the performance of other projects, such as virtualizing Tier 1 applications, and to improve VDI video performance.
Providing the proper network bandwidth for VDI was also a challenge for Greenpages Technology.
The company recently consolidated about 125 physical desktops into a VMware View environment hosted at a colocation facility in Boston. While testing VDI locally, the IT team throttled the desktops' available connection bandwidth from 1 Gbps down to about 20 Mbps to simulate connectivity over distance, according to Greenpages' Drew.
“That gave us a chance to fine-tune and tweak the network, and when we moved to the [remote] data center from the local one, users didn’t notice,” Drew said.
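That kind of local WAN simulation can be done with standard Linux traffic shaping. The sketch below is hypothetical, not Greenpages' actual setup: the interface name (eth0) and the token-bucket parameters are assumptions, chosen only to cap a lab link at roughly the 20 Mbps the remote connection would provide.

```shell
# Hypothetical sketch: cap a lab interface at ~20 Mbps with a token
# bucket filter (tbf) to approximate WAN bandwidth before a VDI move.
# Interface name and burst/latency values are assumptions.
sudo tc qdisc add dev eth0 root tbf rate 20mbit burst 32kbit latency 400ms

# Newer kernels can also add WAN-like delay with netem instead:
# sudo tc qdisc replace dev eth0 root netem delay 30ms rate 20mbit

# Remove the shaping when testing is done:
sudo tc qdisc del dev eth0 root
```

Testing against a shaped link before the cutover is what lets users "not notice" the move: any tuning happens while the desktops are still local.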
To deliver that performance from the remote data center, the firm upgraded its bandwidth from five bonded T1 lines in Kittery to a 20 Mbps DS3 pipe.
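The scale of that upgrade is easy to quantify: a T1 line carries 1.544 Mbps, so five bonded T1s total under 8 Mbps, versus the 20 Mbps provisioned on the DS3 link. A quick shell sketch of the arithmetic:

```shell
# Aggregate bandwidth of five bonded T1 lines vs. the 20 Mbps DS3 link.
# A single T1 carries 1.544 Mbps.
t1_mbps=1.544
bonded=$(awk -v n=5 -v t1="$t1_mbps" 'BEGIN { printf "%.2f", n * t1 }')
echo "Five bonded T1s: ${bonded} Mbps"   # 7.72 Mbps
echo "Provisioned DS3 rate: 20 Mbps"
```

Roughly a 2.5x increase in available bandwidth, which is what made hosting 125 desktops from the remote facility feasible.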
In addition, Greenpages' IT team uses locked-down 32-bit Windows 7 images and VMware linked clones to maintain the desktops, easing the storage impact on the back-end infrastructure, Drew said.
New twists on storage networking
Increased storage I/O demand from virtual desktops is a well-known issue in the data center world, but IT may also look to virtualized networking between servers and storage to simplify storage area network (SAN) zoning in VDI environments.
Emory Healthcare in Atlanta is piloting a VDI project with plans to deploy 25,000 virtual desktops with 4,000 to 5,000 concurrent users by the end of 2012, according to senior system specialist Bill Akins.
The company chose Citrix Systems Inc.'s XenDesktop, running on XenServer and Fujitsu Primergy BX900 blade servers. So far, it has spun up 68 XenServer instances to support the planned deployment.
The blades connect to an EMC VNX storage array through a Cisco Systems Inc. Fibre Channel SAN director switch. They also run Egenera Inc.'s PAN Manager software, which abstracts the networking layer between the servers and storage so applications can move to any server within the blade system without re-zoning the SAN.
“Disk can be allocated to a frame and then it can be allocated to a server, instead of having to zone all the way to an end node,” Akins said.
Since VDI tends to require these types of data center network infrastructure upgrades, any savings from VDI projects are realized operationally rather than through reduced capital expenditures, said Brent Ouellette, vice president of operational initiatives for Envision Technology Advisors LLC, a virtualization consultancy based in Providence, R.I.
Beth Pariseau is a senior news writer for SearchServerVirtualization.com and SearchDataCenter.com. Write to her at firstname.lastname@example.org.