It is often assumed that running eight OSes on a single large server under a virtualization suite is cheaper than running eight separate physical servers. Leaving aside for the moment the cost of the virtualization software, of training administrators to use it and so forth, let's look specifically at the cost of the hardware.
Eight 1U dual-core servers with 4 GB RAM, one or two built-in Ethernet ports and 70 to 150 GB of hard disk space can be found quite inexpensively these days, with even more savings if you don't insist on a first-tier server manufacturer. On the other hand, a 16-core server with 32 GB of RAM, several four-port Ethernet cards and a terabyte or two of disk space can cost considerably more than eight dual-core servers. A quick check of a major brand's site turned up prices of $1,339 each for the small servers, or $10,172 total, versus $45,196 for a single big server. If the servers are running mission-critical apps and you want failover capacity, you'll need two big servers for $90,392, rather than nine small ones (eight plus a spare) for $11,511.
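The hardware premium for the big-box route can be worked out from the quoted figures. A back-of-the-envelope sketch (the dollar amounts are the 2009 vendor quotes above, not current prices, and the totals are taken as quoted):

```python
# Back-of-the-envelope comparison using the article's quoted 2009 prices.
# Totals are used as quoted; they may reflect vendor discounts rather
# than simple unit-price multiples.
EIGHT_SMALL = 10_172    # eight 1U dual-core servers
NINE_SMALL = 11_511     # eight plus one spare for failover
ONE_BIG = 45_196        # one 16-core, 32 GB server
TWO_BIG = 2 * ONE_BIG   # mirrored pair for failover ($90,392)

print(f"No failover:   big box costs ${ONE_BIG - EIGHT_SMALL:,} more")
print(f"With failover: big boxes cost ${TWO_BIG - NINE_SMALL:,} more")
```

On hardware alone, the big box carries a $35,024 premium without failover, and a $78,881 premium with it.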
If you do need failover, you'll need premium virtualization software in addition to the failover software, which is a considerable expense. With hardware and software factored in, the eight-virtual-server route could run $30,000 to $50,000 more. On the other side of the balance sheet, a virtualized infrastructure can be more flexible and easier to administer, since all eight virtual servers, backups, and failover can be managed from a single console.
However, the management savings are not a given. If you set up eight single-purpose servers (an app server, mail server, Web server, SQL server, etc.) and rarely change their configurations, virtualization's advantage shrinks. The initial setup of eight virtual systems may be easier than setting up eight physical ones, but each server still needs to be configured and maintained, and this isn't any easier on one big box than on eight little ones. In a development or testing environment, where you might need a different group of servers every week, a virtualized environment offers a huge advantage, but such environments account for a relatively small percentage of the servers out there.
Why do so many companies adopt virtualization if it costs more than separate physical servers and doesn't necessarily yield administrative savings? First, it's the "in" thing at the moment, just as outsourcing was a couple of years ago. Second, if you have a large number of servers that tend to be loaded at different times of day, you can fit more virtual servers on a single box. If you can get 16 virtual servers running on the big box, the cost differential drops considerably, and the power savings are notable as well. Unfortunately, several recent surveys have shown that people tend to overestimate the number of virtual servers they can run on a single system, so the big servers end up underutilized.
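The consolidation arithmetic can be sketched the same way: spreading the big box's price across more virtual servers drives down the per-workload cost, and there is a hardware-only break-even ratio. This sketch uses the quoted prices and deliberately ignores software licensing, power and admin costs, all of which shift the numbers in practice:

```python
import math

SMALL_SERVER = 1_339   # quoted price of one 1U dual-core server
BIG_SERVER = 45_196    # quoted price of one 16-core server

def cost_per_workload(n_vms: int) -> float:
    """Each virtual server's share of the big box's hardware price."""
    return BIG_SERVER / n_vms

print(round(cost_per_workload(8)))    # 5650 -- vs. $1,339 per small server
print(round(cost_per_workload(16)))   # 2825 -- the gap narrows at 16 VMs
# Hardware-only break-even: VMs needed per box to match small-server cost
print(math.ceil(BIG_SERVER / SMALL_SERVER))   # 34
```

At these prices you'd need roughly 34 virtual servers per big box before the hardware alone breaks even, which is why overestimating consolidation ratios is so costly.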
This doesn't mean that you shouldn't consider virtualization, but you should look carefully at the numbers and run a test bed before committing to the idea.
ABOUT THE AUTHOR: Logan Harbaugh is a freelance reviewer, network systems analyst and consultant, specializing in reviews of network hardware and software, including network operating systems, clustering, load balancing, network-attached storage and storage area networks, traffic simulation, network management and server hardware.
What did you think of this feature? Write to SearchDataCenter.com's Matt Stansberry about your data center concerns at firstname.lastname@example.org.
This was first published in October 2009