While many IT pros use virtualization to avert server sprawl and keep power costs low, others lack the resources to go virtual or resist doing so because of performance overhead, however low it may be.
One such company is London-based Last.fm Ltd., a large and fast-growing social networking and free music-sharing website. The company uses open source OpenVZ virtualization in its testing and development environment but has said no to production-level virtualization.
"Virtualization helps from a manageability perspective, but we've been running diskless servers for years that boot off a central image, which makes it easier to manage groups of machines doing the same task without virtualization," said Richard Jones, CTO and co-founder of Last.fm. "Plus, virtualization wasn't as accessible or efficient when we started doing this."

Instead, the company net-boots its Web servers, load balancers and Hadoop boxes. "That means we have over 200 machines all booting off a pair of machines hosting three custom Linux distro [distribution] images," Jones said. "Doesn't get much easier than that."

If it ain't broke, don't fix it
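Last.fm hasn't published its boot configuration, but a diskless fleet like the one Jones describes is conventionally built from a DHCP/TFTP server handing out a network bootloader, plus a shared read-only root image mounted over NFS. A minimal, hypothetical dnsmasq fragment for such a boot server (all addresses and paths invented for illustration) might look like:

```
# /etc/dnsmasq.conf -- hypothetical PXE boot server for diskless nodes
# (illustrative sketch only; Last.fm's actual setup is not public)
dhcp-range=10.0.0.50,10.0.0.250,12h   # address pool for the diskless machines
dhcp-boot=pxelinux.0                  # network bootloader served over TFTP
enable-tftp
tftp-root=/srv/tftp                   # holds pxelinux.0 plus kernel and initrd
```

Each node's kernel command line would then point at the shared image, e.g. `root=/dev/nfs nfsroot=10.0.0.1:/srv/images/webserver ip=dhcp ro`, so that every machine in a group boots the identical distribution image, which is what makes managing "groups of machines doing the same task" cheap without virtualization.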
Besides being satisfied with current operations, Jones finds the performance overhead of virtualization -- however low -- intolerable; many of Last.fm's applications are CPU-heavy, and he doesn't want to sacrifice any cycles. That overhead is low, though: on hardware with virtualization-assist technology, such as AMD's Opteron with Rapid Virtualization Indexing, it is often less than 5%, according to a VMware engineer.
Given that this percentage comes from a virtualization vendor, it may be a little rosier than reality. "But even if it is a 90% performance comparison, all the benefits of virtualization make it worthwhile," said Andi Mann, an analyst with Enterprise Management Associates (EMA). "Things like DR [disaster recovery] capabilities, higher server utilization, workload migration capabilities, increased flexibility and agility and being able to reuse physical resources."
That is why most of the IT pros with whom Mann speaks use virtualization at some level, and more have moved it into production as performance has improved -- but not everyone is sold on it.
According to Forrester Research, 54% of enterprises and 53% of small and medium-sized businesses (SMBs) have implemented x86 server virtualization or will do so within the next 12 months; and while that is a lot, it still leaves 46% of enterprises and 47% of SMBs that haven't virtualized.
One main reason for deciding not to virtualize is a lack of resources to implement a new technology, especially during a recession when many IT departments have shrunk, Mann said.
In fact, 40% of enterprises say they lack the time and resources to move forward with virtualization projects, and 30% cite a lack of technical skills, according to a 2008 EMA survey, called "Virtualization and Management: Trends, Forecasts, and Recommendations."
What's more, only 31% of companies that have deployed virtualization say they are confident they have the skills and resources to manage it, the EMA report shows.

The other big reason not to virtualize is the "if it ain't broke, don't fix it" mentality; data centers with good server utilization rates, for instance, don't see a need for it, said Illuminata Inc. analyst Gordon Haff.
"Server consolidation is the reason a lot of companies adopt virtualization, so if you don't have a utilization issue, that certainly eliminates the major reason to adopt it," said Haff. "If you have HPC [high-performance computing] or other types of Web 2.0 and grid environments where you are running applications across a large number of similar systems, you see virtualization being used, but it certainly is not the low-hanging fruit."
Most of the folks in the Web 2.0 and HPC space who don't use software from major virtualization vendors like VMware typically use other forms of virtualization, such as container-based virtualization or workload management software across a bunch of x86 servers, Haff said.

Vendors sell data center power efficiency
IT pros resisting server virtualization in production environments or who don't have the resources to implement it still need to add compute power. For them, superefficient servers, CPUs and power supplies are a good way to add compute cycles while minimizing data center power costs.
For instance, Last.fm is adding about 30 million new users each month, and since the company doesn't use virtual machines (VMs) in production, it has to add new physical servers every few weeks to support that growth, Jones said.
Beyond the obvious server sprawl and its associated costs, power is an issue. As in the U.S., a lack of power is a major concern in the U.K.; utilities there predict that brownouts or blackouts could occur by 2012, according to research by the DMW Group.
"Data centers built in London in the late 1990s weren't designed with enough power capacity, so we've ended up in a situation where we have plenty of space in the data center, but not enough power to run our machines," Jones said.
In fact, more than 25% of IT pros polled in SearchDataCenter.com's most recent Purchasing Intentions Survey said that power limitations hinder their ability to grow data centers -- the second highest hurdle, following space constraints (32%).
For example, Jones has been in the unfortunate position of having purchased servers without knowing their power consumption and thus ended up with systems he couldn't use.
When it comes to hardware, such snafus are why many IT pros place power efficiency above performance. In that same survey, 25% of 600 respondents said they planned to purchase new servers to increase power efficiency, and 48% said reducing power consumption in data centers is a high priority.
These power-efficiency demands have forced server and CPU vendors to devise high-performing systems that consume less power than ever, and new products are often marketed with high-efficiency claims.
Last.fm's Jones recently chose Sun Microsystems x64 servers to replace a couple of racks that contained less efficient 1U servers.
He chose Sun's blade servers over efficient options from companies like Hewlett-Packard and IBM because the company's director of engineering had good experiences with Sun hardware before, and the pricing was aggressive, Jones said. "The big attraction for us was the Startup Essentials program from Sun, which gave us support, advice and very aggressive prices," he said.
Last.fm now has two chassis of four-socket Sun x6450 blade servers running Intel's six-core processors. With two chassis in a rack, Last.fm installed a total of 20 blades on a 32-amp supply. These blades, used as Web servers, take up less space and have more computing cores -- 240 cores per chassis, or 480 in the rack, Jones said.
"Previously we were using 1U servers, which were dual-quad-core CPUs, and we could get 28 of them in a 32A supply (or one rack). So we went from 224 cores - 28 machines times eight cores - to 480 in the same space and power," Jones said.
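Jones' density figures check out; a quick sketch of the comparison, using only the numbers quoted in the article:

```python
# Core density per 32 A rack, before and after the blade upgrade.
# All figures are as quoted by Jones in the article.

old_machines, old_cores_each = 28, 8            # 1U dual-quad-core servers
new_blades, sockets, cores_per_cpu = 20, 4, 6   # Sun x6450 blades, six-core CPUs

old_total = old_machines * old_cores_each         # 224 cores
new_total = new_blades * sockets * cores_per_cpu  # 480 cores

print(old_total, new_total, round(new_total / old_total, 2))
# -> 224 480 2.14
```

In other words, the move to blades slightly more than doubled compute density within the same rack space and power envelope.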
For further efficiency gains, Jones opted for compact, 16 GB flash solid-state drives instead of spinning hard disks in the load-balancing servers, because flash drives draw less power.
Last.fm now has about 400 physical servers across its three hosted data center sites and uses somewhere in the range of 500 amps of power, Jones estimated. So, virtualization be damned: by using power-efficient systems, Last.fm now has excess power capacity to add more compute power.
Let us know what you think about the story; email Bridget Botelho, News Writer.