New survey results from Blade.org indicate that businesses of all sizes have flocked to blades for virtualization. But when it comes to blades, IT pros also have plenty to complain about.
Along with benefits such as good performance, small size and simplified management, blades bring the threat of vendor lock-in and the high power and cooling requirements of a fully loaded chassis. And if a user doesn't buy enough blades to fill a chassis, the purchase may not make economic sense, according to users.

Does size matter?
Blades are noted for their compact size. They offer the same performance as 1U and 2U rack servers but take up much less data center space. But many data centers run out of power faster than space, especially companies consolidating servers using virtualization, according to some IT pros.
"Locally, the thing we have the most of is space. Saving rack [units] has no value here," said an IT administrator at a large New York-based university.
Cost-wise, users can benefit from blades if they buy enough to fill a chassis. If not, the cost of buying the chassis plus required components isn't justifiable, according to end users.
"If you want to purchase, say, 100 to 300 servers, then comparing those costs between blades and standard chassis isn't so bad. But what if you only buy [a few] servers at a time or per year?" the university administrator said. "The first purchase has to pre-pay for the infrastructure, and if you take two or three years to fill out that blade chassis then you must consider the possibility that the chassis becomes obsolete before it is full."
A network administrator at a Los Angeles-based technical and management support services company, which runs a small IT shop with fewer than 100 servers, said he hasn't even considered blade servers for those reasons. "They seem to me to be for larger data centers, 100 or more servers," he said.
Those who invest in blades sometimes find that a fully loaded blade server chassis throws more heat and requires more power in a small footprint than their data center can handle.
"In many colocation facilities you will probably have trouble filling a single rack with [blades], as they are simply not set up for that much power draw and heat in one place," a Zurich-based Unix team leader stated on an IT community blade forum. "We have this problem, so each of our [IBM] BladeCenters essentially gets a whole rack to themselves."
So instead of spreading the cost of the blade enclosure and necessary components over, say, 14 blades, the cost of the system is spread over only seven or eight blade servers. This is a problem when it comes to cost justification.
As another IT administrator wrote on the IT community forum, "For us, we don't see any cost savings for blades if we don't fully populate the cabinet. The cost of the cabinet plus the SAN [storage area network] and 10 GbE, plus licenses, only make sense when spread over many blades. If there are just four or six blades in each cabinet, it's far cheaper to buy HBAs [host bus adapters] and NICs [network interface cards] for four or six 2U servers."

Blade limitations, proprietary issues frustrate users
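The amortization math behind these complaints can be sketched in a few lines. Every figure below is a hypothetical assumption for illustration -- none comes from the article or any vendor price list -- but the shape of the calculation is the one the administrators describe: the shared chassis cost only disappears into the per-server price when the enclosure is close to full.

```python
# Illustrative break-even sketch with hypothetical prices -- all figures
# below are assumptions, not vendor quotes.
CHASSIS_COST = 6000       # blade enclosure + shared switch/SAN modules (assumed)
BLADE_COST = 2500         # per blade server (assumed)
RACK_SERVER_COST = 3000   # 2U rack server incl. its own HBA and NICs (assumed)

def blade_cost_per_server(n_blades: int) -> float:
    """Shared chassis cost amortized over the blades actually installed."""
    return BLADE_COST + CHASSIS_COST / n_blades

for n in (4, 8, 14):
    per_blade = blade_cost_per_server(n)
    cheaper = "blades" if per_blade < RACK_SERVER_COST else "rack servers"
    print(f"{n:2d} blades: ${per_blade:,.0f} per server -> {cheaper} win")
```

Under these assumed prices, four blades cost $4,000 apiece once the chassis is factored in, while a full 14-blade enclosure drops below the rack-server price -- which is exactly the half-populated-cabinet problem the forum posters describe.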
In addition, users say today's blades don't have enough dual-inline memory module (DIMM) slots, especially if they are used as virtualization platforms. Many blade servers have between four and eight DIMM sockets.
One Nashville-based IT administrator said he abandoned IBM's blades for that reason. "We moved from IBM BladeCenters to [IBM's] 3850 S [4U servers] because we could not put enough memory in the blades to make it worthwhile. We hit the memory limits of the blades long before we hit the CPU limits," he said.
Blade vendors have begun to catch on to this issue and are adding DIMM sockets to their servers. HP's new ProLiant G6 blades, for instance, offer up to 12 DIMM slots. Jeremy Sherwood, an engineer at the managed hosting and colocation facility Opus Interactive Inc., said he uses a G6 server with 12 DIMMs so that when virtualizing he doesn't run out of memory before running out of CPU power, a problem he had hit in the past.
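A quick sizing sketch shows why DIMM count, not core count, so often becomes the virtualization ceiling. The VM sizes, core counts and consolidation ratio below are assumptions chosen for illustration, not figures from the article or any spec sheet.

```python
# Hypothetical sizing sketch: which resource runs out first on one blade?
# All figures are illustrative assumptions, not vendor specs.
DIMM_SLOTS = 12       # a 12-slot blade, like the G6 class mentioned above
GB_PER_DIMM = 8       # assumed DIMM capacity
CORES = 8             # assumed cores per blade
VM_MEM_GB = 4         # assumed memory allocated per VM
VMS_PER_CORE = 4      # assumed CPU consolidation ratio

mem_limited_vms = (DIMM_SLOTS * GB_PER_DIMM) // VM_MEM_GB
cpu_limited_vms = CORES * VMS_PER_CORE
print(f"memory supports {mem_limited_vms} VMs, CPU supports {cpu_limited_vms}")
print("bottleneck:", "memory" if mem_limited_vms < cpu_limited_vms else "CPU")
```

Even with 12 slots, memory in this sketch caps the blade at 24 VMs while the CPUs could carry 32; drop to an older eight-slot blade and the memory ceiling falls to 16 VMs, which mirrors the complaint that users "hit the memory limits of the blades long before the CPU limits."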
Probably the biggest complaint about blade servers is vendor lock-in; if IT buys an HP blade server chassis, it can't then throw a Dell or IBM blade server into that chassis. "No chassis is vendor agnostic," the New York-based IT administrator said. "I don't suggest that you constantly price every server purchase from several manufacturers, but I want the freedom to switch without penalty at any time."
In addition, the chassis components are not standard and can be tough to configure, one blade user said. "Everything is proprietary; mezzanine cards, storage blades, blade Ethernet switches…Don't even get me started on how expensive it is to connect fibre SANs to your blades."
The Server Systems Infrastructure (SSI) Forum now works with several blade vendors to standardize platforms, chassis, and components, but the largest blade providers -- HP, IBM, and Dell -- haven't joined that effort, and probably won't.
When asked about standardizing blades, an HP spokesperson said, "It is currently not practical or desirable to even think about standardizing blades because they are a rapidly evolving technology, where vendors can add significant value as they innovate in form and function. Any attempt to standardize the blades, their enclosures or any other components would in effect severely limit innovation in return for minimal benefits for customers."
Let us know what you think about the story; email Bridget Botelho, News Writer.