40 GbE hardware is here, but enterprises aren’t adopting it. What needs to happen to push the networking standard into prime time?
Believe it or not, the 40 Gigabit Ethernet era is already upon us. The standard has long since been ratified, and products are shipping. But for the time being, 40 Gigabit Ethernet is having trouble moving out of first gear.
As data centers virtualize more of their servers and storage, the need for speedy network connections increases. But something happened on the way from 10 Gigabit Ethernet to 40 GbE: IT departments are sticking with the status quo and are taking their time upgrading their connections.
Several factors account for the delay: existing wiring infrastructure, uncertainty about where these faster Ethernet switches belong on the network, the slow adoption of 10 GbE (which has mostly been confined to servers) and the preponderance of copper gigabit network connections.
The standards for 40 GbE have been around for more than a year, and a number of routers, switches, and network cards already operate at this speed. Vendors such as Cisco, Dell’s Force10 division, Mellanox, Hewlett-Packard, Extreme Networks, Gnodal and Brocade offer such hardware.
The price per 40 GbE port is typically $2,500—about 500 times the price per GbE port. For example, the Gnodal GS0072, a 2U-high, 72-port, 40 GbE switch, sells for $180,000.
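The per-port figure follows directly from list prices like Gnodal's. A quick back-of-the-envelope check, using only the numbers cited in this article (not current quotes):

```python
# Sanity check of the per-port pricing cited above.
# Figures come from the article, not from current vendor price lists.
switch_price = 180_000   # Gnodal GS0072, 72-port 40 GbE switch
ports = 72

price_per_40gbe_port = switch_price / ports
print(price_per_40gbe_port)       # 2500.0 dollars per 40 GbE port

# At roughly 500 times the per-port price of plain gigabit Ethernet,
# that implies commodity GbE ports on the order of $5 each.
implied_gbe_port_price = price_per_40gbe_port / 500
print(implied_gbe_port_price)     # 5.0
```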
Analyst firm Gartner estimates that only one-fifth of all network interfaces run at 10 gigabits per second, and only around 1% run at 40 GbE. From such a small base, 40 GbE shipments could double within a few years.
Goodbye, Cat 6 Cabling
Before that happens, though, there is the question of wiring up all those 40 GbE ports. This is the biggest issue for 40 GbE networks. Previous versions of Ethernet could use standard Category 6 (Cat 6) copper wiring and RJ45 connectors, which have been around for decades and are readily deployed. But not so for 40 GbE.
“I don’t know any enterprise that is running 40 GbE now,” says Mike Fratto, an analyst at Current Analysis. “What is holding most people back is that new cabling is required, and there is no easy way to upgrade it in the field, either.”
40 GbE runs over Quad Small Form-factor Pluggable, or QSFP, cabling: a high-density fiber connector carrying 12 strands of fiber. Unlike standard two-strand fiber connections, it isn’t “field terminated,” meaning an electrician can’t attach a QSFP connector on site. Data center managers need to determine their cabling lengths in advance and preorder custom cables manufactured with the connectors already attached. (See “What does a 40 GbE cable look like?”)
“There is no way you can meet the [electrical and mechanical] tolerances with manually splicing 12 individual pieces of fiber,” said Kevin Cooke, the manager of solutions architecture at Teracai Corp., a value-added reseller (VAR) in Syracuse, N.Y., that has installed several 40 GbE networks. “This fundamentally changes the way these 40 GbE projects are managed. You can’t buy a bulk spool of glass and cut and terminate as you need it. This means you are going to pay more for premade cables, and you will have to measure these lengths more carefully, too.”
Cabling hurdles notwithstanding, implementing the latest and greatest high-speed networks can wreak havoc on networking teams and processes.
“The long-term future is in higher-speed Ethernet,” said Mike Pusateri, an independent digital media executive and a former technology executive at The Walt Disney Co. “But it isn’t a panacea, and you just can’t throw more or faster hardware at your network to make it run faster.”
At Disney, Pusateri wrestled with implementing faster networks to move gigantic 250 GB files around. When Disney first put in 10 GbE, the network actually ran slower. IT staffers found numerous network bottlenecks that needed resolution.
“You bump into things like immature network interface card drivers that can hang up your entire system,” said Pusateri. “It is like building an eight-lane superhighway with dirt off ramps and interchanges. Unless you upgrade everything, it is totally useless. You have to look at the entire system and understand all the relationships.”
Server Virtualization, Storage Lead the Way
Despite these challenges, one place where 40 GbE will find a home is in data centers with very high-density virtualized servers. Think telecom service providers or cloud-based hosting vendors that are installing blade servers. These organizations use gear like Cisco’s UCS that runs dozens of virtual machines (VMs) on one box.
40 GbE Port Forecast
2011 - 7,700
2012 - 225,000
2013 - 569,100
2016 - 5,273,200
Forecasted number of higher-speed 40 GbE ports. Source: Dell’Oro Ethernet Switching report, August 2012
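Taken at face value, the Dell’Oro forecast implies steep growth multiples from year to year, which can be read directly off the figures above:

```python
# Growth multiples implied by the Dell'Oro 40 GbE port forecast cited above.
# Port counts are taken verbatim from the article's table.
forecast = {2011: 7_700, 2012: 225_000, 2013: 569_100, 2016: 5_273_200}

print(round(forecast[2012] / forecast[2011], 1))  # ~29x growth, 2011 to 2012
print(round(forecast[2013] / forecast[2012], 1))  # ~2.5x growth, 2012 to 2013
print(round(forecast[2016] / forecast[2013], 1))  # ~9.3x growth, 2013 to 2016
```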
“Virtualization was the prime driver to 10 GbE, and it will be the prime driver for 40 GbE in the future,” said Eric Hanselman, an analyst at The 451 Group. “Once you start getting higher densities of virtual machines per server, you need to have the network capacity to match it.”
Indeed, the sudden uptick in 10 GbE-connected servers may take some data center networks by surprise. “All the next generation servers have options for built-in 10 GbE networks now,” said Hanselman. “This has changed the pricing dynamic in servers, and as more of these come into corporations, the networking environments may not be ready for them. They may end up upgrading their server technology faster than their network core infrastructure.”
Another trend that is helping push 10 and 40 GbE forward is that storage-area networks are already running at these speeds with either InfiniBand or Fibre Channel over Ethernet. Numerous high-performance computing installations have adopted InfiniBand, and there are efforts to improve its performance and push it faster.
In addition, Internet hosting providers such as ProfitBricks.com have all-InfiniBand back ends for ultra-low-latency networks. As storage vendors come up with lower-latency connections, expect them to compete with 40 GbE.
It’s also possible that some data center managers may skip 40 GbE altogether and jump straight to 100 GbE.
“40 GbE is something I’ve been considering for next year for our in-house SAN, although if I were investing now, I’d probably go straight for 100 GbE,” said Tony Maro, CIO of Evrichart, a medical records management company in White Sulphur Springs, W.Va.
The Top-of-Rack Uplink
One way to move into higher-speed networks is to use so-called top-of-rack switches, which aggregate just the servers in the cabinet below and are connected short distances to other top-of-rack switches.
Today, a typical configuration consists of gigabit connections to individual servers, and then 10 GbE to the core from each rack. These are being upgraded now to 10 GbE server connections and 40 GbE to the core.
“We will see more 40 GbE uplinks with true 40 GbE optics begin to ramp more aggressively in the data center,” wrote Dell’Oro Group in an August report on Ethernet switches.
The top-of-rack approach gets around having to rewire the entire data center and makes it easy to use precut QSFP optical cables. Teracai chose this method for one of its government clients.
“It was more cost-effective to do 40 GbE uplinks between switches than to use multiple 10 GbE uplinks,” said Cooke. They replaced 16 10 GbE uplinks with four 40 GbE links and saved $750,000 in connectors with no significant change in networking performance or throughput.
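The consolidation Cooke describes trades many slower links for a few faster ones at the same aggregate bandwidth. A minimal sketch of that arithmetic, using the link counts from the example above (the $750,000 figure is the client’s reported connector savings; per-optic prices vary by vendor and aren’t given here):

```python
# Uplink consolidation from the Teracai example above: sixteen 10 GbE
# uplinks replaced by four 40 GbE links.
old_links, old_speed_gbps = 16, 10
new_links, new_speed_gbps = 4, 40

# Aggregate bandwidth is unchanged; the savings come from needing
# far fewer optics, connectors and cable runs.
assert old_links * old_speed_gbps == new_links * new_speed_gbps  # 160 Gbit/s

print(old_links - new_links)  # 12 fewer physical links to buy and manage
```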
As more 40 GbE equipment enters the market, expect this cost advantage to widen, making it even more economical to aggregate uplinks. Many analysts forecast that 40 GbE prices will soon drop well below four times the cost of 10 GbE, into the range of hundreds of dollars per port.
When that happens, look for 40 and 100 GbE to take off. Until then, high-speed Ethernet will continue to make small, incremental inroads into the most demanding environments.
“We are seeing the same basic pattern as with Gigabit Ethernet,” said Current Analysis’ Fratto. “The fastest networks appear in the biggest peering Internet providers and with heavier financial trading applications and don’t go mainstream until they are more commonly found on commodity servers’ motherboards.”
What Does a 40 GbE Cable Look Like?
If you think 12 strands of fiber in a 40 GbE connector is a lot, consider how much fiber 100 GbE network cables will need. High-speed networks represent a big jump in cabling complexity, which is one barrier to adoption and one reason so many IT departments have stuck with copper connections for so long. It also means that current investments in fiber cabling probably won’t cut it for the next generation of high-speed networks.
Dell’Oro states in its report, “We expect that customers will gravitate to 40 GbE, not because they need all the bandwidth, but because two 10 GbE ports won’t be sufficient.”
However, these numbers deserve some scrutiny. The report counts every QSFP port as a single 40 GbE port, even when a splitter cable breaks it out into four 10 GbE ports, which overstates actual use of the faster technology.
About the author:
David Strom has 25 years of experience as an expert in network and Internet technologies and has written and spoken extensively on topics including VOIP, email, cloud computing and wireless.
This article originally appeared in the December/January issue of Modern Infrastructure.