IT executives are looking to InfiniBand, a high-bandwidth scalable switched fabric communications link, to handle expanding server and storage workloads, according to a recent IDC study.
High-performance computing (HPC), scale-out database environments, shared virtualized I/O and increasing demands from financial applications will propel worldwide InfiniBand host channel adapter (HCA) factory revenues from $62.3 million in 2006 to more than triple that -- $224.7 million -- in 2011, IDC reported.
Additionally, factory revenue from InfiniBand switch port sales is expected to grow from $94.9 million in 2006 to $612.2 million in 2011, the Framingham, Mass.-based company found.
InfiniBand enables HPC clusters
One InfiniBand provider, QLogic Corp., is banking on the broad adoption predicted by IDC. The Aliso Viejo, Calif.-based company acquired two InfiniBand companies in the past year -- one providing adapters and the other switches.
"We absolutely are seeing the adoption of InfiniBand. The way we look at it, and our hope is, the high-performance market will continue to adopt InfiniBand because of clustering," said Frank Berry, vice president of marketing at QLogic. "HPC used to mean one huge supercomputer, but now the trend is clustering. The applications have to be carved up into pieces and run in parallel, talking to each other. Super high-speed communications are needed, and InfiniBand is designed from the ground up for the type of low-latency connection these clusters require."
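The "carve up and communicate" pattern Berry describes is the essence of cluster computing: each node works on its own piece of the problem, then exchanges partial results with the others, which is where fabric latency matters. The sketch below is purely illustrative -- it simulates that pattern on a single machine with Python's `multiprocessing` module rather than real MPI ranks on an InfiniBand fabric, and the `parallel_sum_of_squares` workload is an invented example, not anything from the article.

```python
# Illustrative only: the "split the work, run in parallel, communicate
# results" pattern Berry describes, simulated with local processes.
# On a real cluster these would be MPI ranks on separate nodes, and the
# result exchange would traverse the low-latency interconnect.
from multiprocessing import Process, Queue


def worker(rank, chunk, results):
    # Each worker computes over its own slice of the problem...
    partial = sum(x * x for x in chunk)
    # ...then sends its partial result back (the "talking to each
    # other" step, where interconnect latency is felt).
    results.put((rank, partial))


def parallel_sum_of_squares(data, nworkers=4):
    results = Queue()
    size = (len(data) + nworkers - 1) // nworkers
    chunks = [data[i * size:(i + 1) * size] for i in range(nworkers)]
    procs = [Process(target=worker, args=(r, c, results))
             for r, c in enumerate(chunks)]
    for p in procs:
        p.start()
    # Gather and combine the partial results from every worker.
    total = sum(results.get()[1] for _ in procs)
    for p in procs:
        p.join()
    return total


if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))
```

The pattern is the point, not the arithmetic: as the number of communicating workers grows, the per-message overhead of the fabric dominates, which is why low-latency interconnects like InfiniBand target exactly this workload.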
Cisco Systems Inc., which offers InfiniBand switches and HCAs, is also seeing solid adoption of InfiniBand.
Within the HPC market, especially at the very high end, organizations are purchasing InfiniBand based on its low latency, the availability of open source Message Passing Interface (MPI) drivers and very aggressive price points, according to Bill Erdman, director of marketing for the server virtualization business unit at Cisco.
Last week, Hewlett-Packard Co. (HP) announced it will offer Cisco double data rate (DDR) InfiniBand-based server fabric switches (SFS) as part of its Unified Cluster Portfolio. Additionally, HP BladeSystem c-Class servers are now supported with Cisco's InfiniBand host driver software.
The combined offerings accelerate business growth for HPC environments by providing high bandwidth, low latency, fabric stability and scalability, according to HP.
IDC overly optimistic
But even with players like HP adopting InfiniBand, IDC's rosy forecast of InfiniBand uptake is too aggressive in Cisco's view. The San Jose, Calif.-based networking vendor is skeptical because there has not yet been enough adoption data to support such an aggressive growth curve. The company sees InfiniBand's appeal for customers that need low latency, but not necessarily for I/O consolidation and virtualization.
"There are several vertical markets within enterprise that are considering low-latency switching solutions, including both InfiniBand and Ethernet. The low-latency benefit of InfiniBand is the No. 1 value proposition to these customers," Cisco's Erdman said. "Some of the other InfiniBand value propositions that the IDC report discusses, including virtualization, simplicity and I/O consolidations … are not the primary drivers for InfiniBand adoption. It is low latency."
The question then becomes, can InfiniBand move out of HPC and into generic enterprise computing?
"In the HPC market, every part of the software and hardware is optimized to the ninth degree to wring out every last drop of performance," Berry said. "In the enterprise though, multicore servers and server virtualization are being deployed at an explosive rate. The result is an aggregation of processing power that begs for high bandwidth and lower latency connectivity. InfiniBand is just starting to be deployed in these environments."
But even though InfiniBand offers high bandwidth, it isn't always the best choice, Berry said.
"We do see the adoption of InfiniBand and low-latency Ethernet switches within the database, back-end database hosting and message bus applications. However, with other applications, which require rich hosting services as mentioned above, Ethernet will remain the server hosting technology of choice given the rich services available and the highly competitive market for these services," Erdman said. "If you don't have a large cluster that requires (InfiniBand) type of latency, Ethernet is fine."
Berry said the IDC numbers are a tad aggressive, but he is a bit more optimistic than Cisco about the potential rate of deployment, especially among today's TOP500 supercomputing sites.
IDC points out in its forecast that InfiniBand use in the TOP500 rose from 5% to 12% in the past year alone.
"Users like the fact that InfiniBand is standards-based and cost effective (unlike Myrinet), and very low latency (unlike Ethernet)," Berry said. "Consolidation will likely drive InfiniBand adoption, as using three switches -- Ethernet, Fibre Channel and InfiniBand -- complicates things.
"It's a bit premature to say InfiniBand will take over for Fibre Channel and Ethernet entirely, but users will want to be able to use just one connection in the future," Berry said.
Let us know what you think about the story; e-mail: Bridget Botelho, News Writer.