
Is 10 Gigabit Ethernet catching up with InfiniBand?

Bridget Botelho
Fulcrum Microsystems Inc. of Calabasas, Calif., today announced additions to its FocalPoint family of 10-Gigabit Ethernet (GbE) switch chips whose low latency and cost rival that of InfiniBand.


For more on 10 Gigabit Ethernet and InfiniBand:
Microsoft allows Voltaire InfiniBand for Windows servers

Cisco pushes for InfiniBand in the data center

According to Fulcrum, the new FM4000 offers the highest port density and lowest latency available: 300 nanoseconds even when operating as a full Layer 3 or Layer 4 router, along with a full suite of data center Ethernet functions. The chip lets data center networks scale to thousands of nodes, the company says.


Arastra Inc., a small Menlo Park, Calif.-based tech company, has also integrated Fulcrum's FM4000 into its DX7100 data center 10 GbE switches. Arastra's 1U switches offer submicrosecond latencies and throughput of up to 720 million packets per second (Mpps) and 960 Gbps per switch.
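Those throughput figures line up with basic Ethernet line-rate arithmetic. Here is a quick back-of-the-envelope sketch in Python (the 48-port configuration is an assumption for illustration, not Arastra's published spec):

# Back-of-the-envelope check of 10 GbE packet rates.
# On the wire, every Ethernet frame carries 20 extra bytes
# (7-byte preamble + 1-byte start delimiter + 12-byte inter-frame gap)
# beyond the frame itself (64 bytes minimum, including the 4-byte FCS).

LINE_RATE_BPS = 10_000_000_000  # 10 Gbps
WIRE_OVERHEAD_BYTES = 20        # preamble + SFD + inter-frame gap

def max_pps(frame_bytes):
    """Maximum packets per second at line rate for a given frame size."""
    return LINE_RATE_BPS / ((frame_bytes + WIRE_OVERHEAD_BYTES) * 8)

print(f"{max_pps(64) / 1e6:.2f} Mpps per port")   # ~14.88 Mpps

# Forwarding 48 such ports at line rate takes about 48 * 14.88,
# or roughly 714 Mpps -- the same order as the 720 Mpps quoted
# for the DX7100 (the 48-port count is our assumption).
print(f"{48 * max_pps(64) / 1e6:.0f} Mpps aggregate")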

With the promise of low latency at a better price point, companies like Fulcrum are pushing 10 Gigabit Ethernet as a real alternative to InfiniBand. And as virtualization technologies have placed greater demands on networking in the x86 space, 10 GbE has become an attractive option as well.

"You need a much bigger, faster pipe to transmit data with virtualization," said Phillippe Levy, VP of marketing at Neterion Technologies, a 10 GbE adapter provider. "There are many bottlenecks that prevent VMs [virtual machines] from being broadly deployed, which is where 10 GbE comes in," he said.

10 GbE vs. InfiniBand: A real rival?
In terms of network performance, 10 Gigabit Ethernet can deliver the Mpps-level throughput that high-performance clustered-computing applications require, according to Fulcrum, rivaling specialty fabrics such as InfiniBand and Fibre Channel.

"General assumptions are that you need proprietary fabrics to get the desired performance, but a new generation of interconnects have changed that," said Mike Zeile, Fulcrum Microsystems vice president of marketing. "With today's [10 GbE] technology, if hosts are the same, the Ethernet has exactly the same latency as InfiniBand, taking the latency issue off the table."

When used in a Clos network, that density and low latency let the network scale to 3,456 nonblocking nodes in three tiers of switching, and to far more if additional tiers or managed underprovisioning are introduced, Zeile said.
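The 3,456 figure follows from standard Clos arithmetic: a three-tier nonblocking Clos (fat-tree) built entirely from k-port switches supports k^3/4 hosts, and 24-port building blocks give 24^3/4 = 3,456. A minimal sketch of that calculation in Python (the 24-port switch size is inferred from the math, not stated by Fulcrum here):

# Nonblocking host count for a three-tier folded Clos (fat-tree)
# built from k-port switches: (k/2) hosts per edge switch,
# (k/2) edge switches per pod, k pods in total = k**3 / 4.

def clos_hosts(k):
    """Hosts supported by a three-tier nonblocking Clos of k-port switches."""
    return k ** 3 // 4

print(clos_hosts(24))  # 3456, the figure Zeile cites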

A Sandia National Laboratories study comparing the two interconnects on a 128-node cluster running identical applications found performance degradation with InfiniBand but none with 10 GbE.

Joe Skorupa, an analyst at Stamford, Conn.-based research firm Gartner Inc., vouched for Fulcrum's FM4000, saying it is "a really good part."

"It is very low latency," he said. "It allows 10 GbE, if you have the end-to-end-compatible systems, to compete effectively with InfiniBand," said Skorupa. "It won't beat InfiniBand everywhere; there are 20-Gigabit InfiniBand offerings out there now that will beat out 10 GbE, for instance. But there are cases where the differences are so small, 10 GbE is good enough, especially since it is aggressively priced."

In addition to these performance gains, 10 GbE's pricing is indeed competitive. Prices for 10 GbE switches, adapters and related infrastructure have decreased substantially because of standardization and the availability of low-cost silicon, Levy said.

According to Dell'Oro Group, which researches networking technology, the average selling price for a 10 GbE switch port in the second quarter of 2007 was $2,700, down from $6,100 in the same quarter of 2005.

Fulcrum's flagship FM4224 device is priced at less than $25 per 10 Gbit port in quantities of 1,000. The chips are currently sampling, with production shipments expected in the first quarter of 2008.

Arastra's DX7100, meanwhile, carries a $400-per-port list price, making it competitive with InfiniBand. The switch is currently in beta trials, and production quantities will be available in the first quarter of 2008.

By comparison, when IBM introduced 10 GbE connectivity for its BladeCenter servers in January, the 20-port 10 GbE switch from Blade Networking and the 10 GbE expansion card from NetXen cost almost $10,000, or about $500 per port.

As for InfiniBand, Voltaire offers specials like its Grid Director ISR9288 10 Gbps InfiniBand switch, 4X PCI-Express host channel adapter (HCA) cards, and cables for as low as $495 per port.
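Putting the article's price points side by side makes the comparison easier; the short Python sketch below simply tabulates the figures quoted in this story (note that the Fulcrum number is for the bare switch chip, not a finished switch, so it is not directly comparable):

# Per-port price points as quoted in this story.
# The IBM figure is the quoted ~$10,000 for a 20-port switch.

price_per_port = {
    "Fulcrum FM4224 (chip only, qty 1,000)": 25,
    "Arastra DX7100 (list)": 400,
    "Voltaire ISR9288 InfiniBand bundle": 495,
    "IBM BladeCenter 10 GbE (January)": 10_000 / 20,
}

for product, usd in sorted(price_per_port.items(), key=lambda kv: kv[1]):
    print(f"{product:40s} ${usd:,.0f} per port")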

"For a long time, 10 GbE was not deployed due to cost, but new technologies from companies like Fulcrum have allowed us to get down to near 1-GbE pricing and push it into the mainstream," Karam said.

Enhancing virtualization with 10 GbE
Despite what 10 GbE vendors say, "InfiniBand won't be abandoned by any means, but 10 GbE is going to be largely adopted," said Gartner's Skorupa. "With server consolidation and virtualization, the need for 10 GbE has increased," he said.

In fact, Arastra's Karam predicts that servers will eventually come standard with 10 GbE interfaces on the motherboard, putting cost-effective, low-latency 10 GbE switches in high demand.

Companies like Sun Microsystems Inc. are now using 10 GbE to enhance virtualization. Sun, which introduced multithreaded 10 GbE networking technology earlier this year, is innovating to push the technology into data centers.

"We expect 10 Gig Ethernet technology to explode as people adopt these new processors, for low latency and more bandwidth," said Sandeep Agrawal, group marketing manager for networking at Sun.

Sun is working to improve 10 GbE networking further by reducing bottlenecks that undermine the speed and efficiency of new multicore processors, Agrawal said. The company wants to create virtual ports that applications would recognize as normal physical ports. The technology will be available from Sun in supported operating systems "very soon," but Agrawal could not be more specific.

"The next bottleneck showing up is the OSes across the industry that are capable of taking advantage of the additional granularity in 10 gigabit NICs [network interface cards], a problem we hope to solve."

Let us know what you think about the story; email Bridget Botelho, News Writer.

Also, check out our news blog at serverspecs.blogs.techtarget.com.

