With the data center fabric wars fading, converged networking vendors are delivering server-agnostic, standards-based 10 Gigabit Ethernet (10 GbE) wares, signaling to IT managers that it may be safe to sail into those waters.
For years, battles raged over how best to converge storage and network traffic over Ethernet, with Cisco Systems Inc. and the Data Center Ethernet camp on one side, and Hewlett-Packard Co. and the Converged Enhanced Ethernet folks on the other. Those wars spawned proprietary converged network implementations, such as HP BladeSystem Virtual Connect FlexFabric and IBM BladeCenter Virtual Fabric, which many data center managers shunned for fear of vendor lock-in.
In the end, late last year, the IEEE ratified Data Center Bridging (DCB), a collection of Ethernet extensions designed to deliver the quality of service and lossless delivery necessary to carry storage traffic over a network. That freed networking vendors to deliver cards outside the confines of their server OEM partnerships, and may prompt IT architects to give 10 GbE and converged networking a second look.
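To make the quality-of-service piece concrete: one of DCB's core extensions, Enhanced Transmission Selection (IEEE 802.1Qaz), assigns each traffic class a guaranteed share of the link, while Priority-based Flow Control (IEEE 802.1Qbb) makes selected classes lossless. The sketch below models only that idea; the dictionary layout and function names are invented for illustration and are not any vendor's API.

```python
# Illustrative sketch only: a minimal model of what DCB's Enhanced
# Transmission Selection (802.1Qaz) configures -- a bandwidth percentage
# per traffic class -- with Priority-based Flow Control (802.1Qbb)
# marking the storage class lossless. Not a real switch or NIC API.

traffic_classes = {
    # class name: (bandwidth share %, PFC lossless?)
    "storage":    (50, True),   # iSCSI/FCoE traffic must not drop frames
    "lan":        (40, False),
    "management": (10, False),
}

def validate_ets(classes):
    """ETS bandwidth shares must sum to exactly 100 percent."""
    total = sum(pct for pct, _ in classes.values())
    if total != 100:
        raise ValueError(f"ETS shares sum to {total}, expected 100")
    return total

print(validate_ets(traffic_classes))  # prints 100
```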
Say hi to DCB, virtual partitions
Today at VMworld 2011 in Las Vegas, server connectivity vendors Emulex and Mellanox announced converged network adapter cards that support DCB to improve the performance of iSCSI storage. Emulex will showcase its OneConnect 10 Gb iSCSI adapter working with the Dell EqualLogic PS6510 and PS6010 10 GbE iSCSI storage arrays.
The new cards also feature partitioning capabilities designed to allow hosts running server virtualization to make better use of the cards’ ample bandwidth.
On the new Emulex OneConnect 10 GbE Universal Converged Network Adapter, the partitioning technology is called Universal Multi-Channel; it allows a single 10 GbE port to be partitioned into four virtual network interface cards, each individually configurable for protocol, bandwidth, quality of service and the like. Mellanox, too, offers four-way partitioning on its 10/40 GbE ConnectX-3 card, dubbed Multiple Physical Functions (MPF).
Partitioning lets different traffic types, such as storage, live migration and management, share the same 10 Gb link "without stepping on one another's toes," said Shaun Walsh, vice president of marketing at Emulex.
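The partitioning scheme the vendors describe can be sketched roughly as follows. Every name and parameter here is hypothetical, invented for illustration; neither Emulex nor Mellanox exposes this interface.

```python
# Illustrative sketch only: models splitting one 10 GbE port into up to
# four virtual NICs, each with its own protocol and bandwidth settings.
# Class and function names (NicPartition, partition_port) are hypothetical.

PORT_CAPACITY_GBPS = 10.0

class NicPartition:
    def __init__(self, name, protocol, min_gbps, max_gbps):
        self.name = name
        self.protocol = protocol   # e.g. "iSCSI" or plain "Ethernet"
        self.min_gbps = min_gbps   # guaranteed share of the port
        self.max_gbps = max_gbps   # cap the partition may burst to

def partition_port(partitions):
    """Check that the guaranteed shares fit within the physical port."""
    if len(partitions) > 4:
        raise ValueError("the cards described expose at most 4 partitions per port")
    committed = sum(p.min_gbps for p in partitions)
    if committed > PORT_CAPACITY_GBPS:
        raise ValueError(f"oversubscribed: {committed} Gbps guaranteed on a 10 Gbps port")
    return committed

parts = [
    NicPartition("storage",  "iSCSI",    4.0, 10.0),
    NicPartition("vmotion",  "Ethernet", 3.0, 10.0),
    NicPartition("vm-data",  "Ethernet", 2.0, 10.0),
    NicPartition("mgmt",     "Ethernet", 0.5, 1.0),
]
print(partition_port(parts))  # prints 9.5 -- the committed Gbps fit on the port
```

The point of the model is the trade-off Walsh describes: each traffic type gets a guaranteed floor so none can starve the others, while unused headroom remains available for bursts.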
Previously, these sorts of partitioning capabilities were only available as part of a proprietary I/O virtualization platform, Walsh said. “Now it’s just part of our standard card.”
Emulex will be releasing DCB and partitioning support as a driver download for users of its OneConnect cards at the end of September, and Mellanox is sampling the ConnectX-3 card with MPF to select customers.
Staying the proprietary course
Meanwhile, HP is staying the proprietary course, enhancing its HP Virtual Connect FlexFabric offering. At VMworld, the company announced test results for its Intelligent Resilient Framework (IRF), a technology designed to "flatten" networks, eliminating the need for an aggregation layer and providing more direct, higher-throughput connections between endpoints. IRF runs on HP's A5830 series of switches and is included in HP's new VirtualSystem offerings, also shipping today. With it, administrators can accelerate virtual machine mobility by 40% and cut network recovery times by a factor of more than 500, the company claimed.
HP networking and server rival Cisco is expected to announce enhancements to its Unified Computing System and Nexus 1000V virtual switching platform on Tuesday. According to a session description, Soni Jiandani, Cisco senior vice president for the server, access and virtualization technology group, will describe "major new virtual networking technology" for the Nexus 1000V "that enables customers to quickly and easily create virtual networks that scale to support thousands of virtual machines while increasing application performance and security in multi-tenant private, public and hybrid cloud infrastructures."
Check out our full VMworld 2011 conference coverage.