Gartner research vice president John Enck is tired of the hype around blade servers, and in particular of headlines touting blades and server virtualization as the perfect marriage.
Like many IT managers, Enck's clients are often confused about whether to invest in blades, in virtualization or in both. The answer, Enck said, could be neither.
Server virtualization pros and cons
The drivers for server virtualization are consolidation, lower costs, quicker deployment of new server-based applications, freedom from vendor lock-in and the ability to address workload changes easily. At the same time, there are barriers to server virtualization.
Not all workloads are candidates for virtualization, and it isn't always easy to determine which ones are. Also, the cost of virtualization software forces high rates of virtual machine density to achieve return on investment, Enck said.
Vendors don't make virtualization decisions easy either, with battles between vendors, like VMware Inc., Microsoft Corp. and XenSource Inc. "Over the next two years, even just with hypervisors, there is going to be a plethora of choices. It will drive prices down, but the decisions will become even more difficult," Enck said.
Software licensing and support around virtualization are another hurdle, as many vendors have not clearly established their licensing terms.
Then, there is the fear of change. "Psychologically, virtualization hasn't been fully accepted. IT guys love virtualization, but people outside the data center, like the execs paying for it -- don't quite trust it yet," Enck said.
Byron Mathews, a senior manager of infrastructure services and delivery for TAP Pharmaceutical Products Inc., said he is experimenting with virtualization and has done a few successful migrations to consolidate some of the 300-plus servers in the company's data center.
Blade servers, on the other hand, haven't been well received by his staff of technicians. "We've gotten some blade servers, but they just seem to prefer the regular rack units to blades," Mathews said. "I think it's just a matter of preference."
A poll of approximately 150 attendees at the session found that 30% are not using blades, 40% limit blades to certain workloads and 23% plan broad deployment of blades.
Blade server considerations
The biggest server vendors are focusing their energy and marketing campaigns on blade servers, but the advantages are questionable. "I am hearing from my clients that there is tremendous pressure from vendors, like Hewlett-Packard (HP), to buy blades. I don't know if this is warranted," Enck said. "I don't care if the blade is from Sun or HP or whoever, they aren't always the best answer. There is a lot more push than there is pull for this technology."
There are reasons to deploy blades other than vendors' high-pressure sales tactics, Enck said:
- Blade server platforms are easy to deploy.
- They allow for significant density.
- They are easy to repair and provision.
- Network and storage connections can be cabled once and shared by all the blades in a chassis.
Blade servers are also extremely proprietary, so choosing the right vendor is important, and it can be a difficult choice for users. "I don't care how many times vendors say their blades are industry-standard servers; they are not. Blades are proprietary, and once you buy, you are locked into that vendor's products," Enck said.
Blades cannot be configured the same way conventional rack-mount servers are, and some blades have few DIMM slots, so users may have to buy more expensive high-density memory, especially when using blades for virtualization. I/O interoperability can also be an issue, as blades don't always connect well to existing storage area network (SAN) and network environments.
Blades also pose a power and cooling challenge because of the density they pack into a rack. "I've seen high-end blades running 40 kW per rack, and many data centers aren't ready for that, or ready to pay to run that type of rack … I've seen client after client get burned because of this issue," Enck said. Typical data centers run racks rated for 12 kW to 15 kW.
Sharing components also has a downside. A chassis failure could disable all the blade servers in that chassis. "It is considered a single point of failure, so some clients double up the chassis -- maybe put in seven blades per chassis instead of 14 -- adding to the cost of ownership," Enck said.
Blade servers on the upswing
Looking at the blade server timeline, the technology has improved, Enck said. "First-generation blades in 2000 were crappy. They were for Web servers. The second generation in 2002 was still cumbersome and not attractive for large workloads … Some vendors, like Dell, even dropped out of the blade market for a while."
Third- and fourth-generation blades have been much better, though. Power, cooling and other improvements from vendors, like Egenera Inc., have pushed blades ahead, and the platforms are able to support more types of applications. Enck said the blade servers being hatched this year and next are good for midsized databases and applications, and are finally becoming comparable to their rack-mount cousins.
Blade technology has changed quickly, with new generations out about every two years, Enck said. When considering blades, users should be sure to work with vendors so the platforms they invest in meet their current and future needs.
Let us know what you think about the story; e-mail: Bridget Botelho, News Writer
Also, check out our news blog at serverspecs.blogs.techtarget.com