Xsigo makes way for I/O virtualization over Ethernet

New Ethernet-based I/O Director is seen as tacit admission that Infiniband hasn't won over enterprise data centers.

SAN FRANCISCO--Data center managers who have deployed converged networking and I/O virtualization from Xsigo Systems speak fondly of the technology, but the company's reliance on fringe Infiniband networking may have limited its adoption thus far.

At VMworld here, Xsigo unveiled an Ethernet version of I/O Director, which can consolidate I/O for any server with an Ethernet connection. That means customers will no longer have to install Infiniband host channel adapter (HCA) cards into their servers to get the benefits of converged networks and I/O virtualization -- namely, less cabling and networking infrastructure, plus greater flexibility when it comes to reconfiguring systems.

The company also claims an advantage over competing converged networking options from vendors like Cisco and Brocade, which rely on nascent standards like Fibre Channel over Ethernet and Data Center Bridging.

Xsigo will offer the Ethernet-based I/O Director in two configurations. The VP560e I/O Director features 32 10 GbE ports and supports up to four I/O modules, at a list price of $35,000; the VP780e I/O Director supports up to 15 I/O modules for $45,000. General availability is scheduled for next month.

Infiniband an Infinibust?
In some environments, Xsigo and Infiniband make a lot of sense. Two years ago, New England Biolabs implemented Xsigo alongside its IBM BladeCenter-based HPC cluster. Compared with a Cisco solution, the company found that Xsigo delivered I/O virtualization at far lower total cost, said Tom Peacock, systems architect at the Ipswich, Mass.-based life sciences lab.

But he also found that running an Infiniband-based solution came with its share of minor annoyances.

For example, New England Biolabs can't use VMware Distributed Power Manager to power down underutilized servers, because that feature requires Ethernet Wake-on-LAN, Peacock said. Similarly, Infiniband HCAs don't support keyboard-video-mouse, complicating remote management.

In terms of raw throughput, Infiniband outperforms Ethernet today, delivering 20 Gb/sec per cable to Ethernet's 10 Gb/sec. That bandwidth is welcome in HPC environments, but "we don't need that [throughput] on most systems," Peacock said -- a standalone Windows box running SQL Server, for instance.

And now, "with 10 Gb Ethernet out there, it begs the question 'Why?'" he said. "We were an early adopter of Xsigo," Peacock added, "but if both of these options were in front of me today, I'd be tossed up."

New England Biolabs has already invested in Infiniband-based I/O Directors for its HPC cluster, but going forward, Peacock said he might buy Ethernet-based Xsigo gear for his backup data center.

10 Gb Ethernet in the house
Meanwhile, other vendors whose names are practically synonymous with Infiniband now offer Ethernet wares, which they are promoting heavily to enterprise data center managers attending VMworld. Voltaire, for example, announced a new high-density 10 GbE switch, the 6048, at VMworld, while Mellanox plans to demonstrate rapid VM migration over its ConnectX-2 EN 40 Gigabit Ethernet adapters.

"Infiniband has the highest bandwidth and the lowest latency and is adopted by apps that can reap the most benefit," said John Monson, Mellanox vice president of marketing. "Others might not need or desire to move to a new technology, and may just wait."

It's not that Infiniband doesn't have a role to play in enterprise data centers; it's just that the role is limited to niche applications, said Greg Schulz, founder of StorageIO.

"Infiniband is sort of a no-go," Schulz said. It can be found in specialized enterprise applications such as Wall Street trading apps, "but it's not prolific."

Let us know what you think about the story; email Alex Barrett, News Director at abarrett@techtarget.com, or follow @aebarrett on twitter.

Check out all of our VMworld 2010 conference coverage here.
