Virtual Data Center

Choosing the best storage network for virtualization



Tactics for Managing iSCSI and Fibre Channel over Ethernet

Fibre Channel over Ethernet and iSCSI are prompting data center managers to turn to copper Ethernet cabling for high-speed server-to-storage networking.

Fibre Channel cabling has come … and gone. Copper, the long-time friend of IT professionals, is the new master of the data center.

In use since almost the very beginning, copper Ethernet cabling is now finding a second use in modern data centers. Technologies such as iSCSI and the bleeding-edge Fibre Channel over Ethernet (FCoE) now make it possible to use copper Ethernet cabling for high-speed server-to-storage networking.

This is good news for data centers. If you’ve been suffering from the management burdens of Fibre Channel cabling, now is the time to get excited. The IT industry is swiftly developing new tactics that elevate copper Ethernet cabling into a more desirable solution. Better yet, you’ll see those benefits no matter what kind of storage area network (SAN) you have in place or are thinking about buying.

Copper’s storage story is swiftly changing in today’s IT environment. If you’re confused about today’s best practices in deploying and managing it, consider the following tactics:

Select a SAN platform. Copper’s ascendancy from a second-rate solution for low-performance storage to the medium of choice is a story that has unfolded quickly. It was not that long ago that iSCSI was a laughable protocol, relegated to workgroup-level storage devices and organizations that couldn’t afford “real” SANs. Today, with the industry embracing aggregation protocols like Microsoft MultiPath IO (MPIO) and multiple connections per session (MCS), as well as the introduction of 10 Gigabit Ethernet (10 GbE), iSCSI’s vendors now assert that their platforms are at a minimum on par with Fibre Channel.

That’s not to say that Fibre Channel SAN platforms are going away any time soon. The classic story for the two mediums equates to a trade-off between price and performance. In that story, iSCSI represents the low-cost leader.

Typically, you can buy into an iSCSI SAN at a price point that is lower than an equivalent Fibre Channel SAN. On the other hand, Fibre Channel tends to offer greater performance but at a greater cost.

The argument for Fibre Channel’s increased performance tends to rest on iSCSI’s need to wrap storage commands inside TCP/IP. That added protocol overhead reduces iSCSI’s effective performance. Some iSCSI SANs, however, can actually be more expensive than their Fibre Channel equivalents. At the same time, some Fibre Channel SANs can deliver lower performance.
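To see where that encapsulation tax comes from, here is a back-of-the-envelope sketch in Python. The header sizes are typical values (Ethernet II plus frame check sequence, IPv4 and TCP without options, and the 48-byte iSCSI Basic Header Segment); the function name is illustrative, not from any library.

```python
# Rough per-frame efficiency of iSCSI on a standard Ethernet link.
# Header sizes are typical values; TCP/IP options and offload change the math.

ETH_OVERHEAD = 14 + 4   # Ethernet II header + frame check sequence (bytes)
IP_HEADER = 20          # IPv4, no options
TCP_HEADER = 20         # TCP, no options
ISCSI_BHS = 48          # iSCSI Basic Header Segment

def iscsi_payload_efficiency(mtu: int = 1500) -> float:
    """Fraction of each frame that carries SCSI data, assuming one
    iSCSI PDU header per frame (a worst-case simplification)."""
    payload = mtu - IP_HEADER - TCP_HEADER - ISCSI_BHS
    frame = mtu + ETH_OVERHEAD
    return payload / frame

print(round(iscsi_payload_efficiency(), 3))      # 0.93 at a 1500-byte MTU
print(round(iscsi_payload_efficiency(9000), 3))  # 0.988 with jumbo frames
```

The point is directional, not exact: jumbo frames and TCP offload engines narrow the encapsulation overhead but don’t eliminate it.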

But don’t standardize on one SAN platform. The realities of today’s business investments make the SAN selection decision less important. A January 2009 Forrester survey asserted that virtualization does not appear to be a driver in a SAN purchase. Eighty-nine percent of interviewed clients did not incorporate a SAN as a result of implementing a virtual server environment.

In short, the SAN you use for virtualization—Fibre Channel, iSCSI or FCoE—is likely a decision that’s already been made for you.

Figure 1: A QLogic 10 FCoE CNA is seen as a network adapter in vSphere.

That important point brings forward the notion that SAN standardization may soon be a thing of the past. Both iSCSI and FCoE encapsulate SCSI storage commands over copper Ethernet cabling. This encapsulation enables data centers to pervasively route storage traffic anywhere with a network connection.
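For illustration, the two encapsulation stacks can be sketched as simple layer lists, outermost first. These constants are hypothetical shorthand, not wire formats; the distinction they capture is that iSCSI carries an IP layer and can therefore cross IP routers, while FCoE maps Fibre Channel frames directly onto the Ethernet segment.

```python
# Layer stacks, outermost first. Illustrative lists, not wire formats.
# iSCSI's TCP/IP layers let it cross IP routers; FCoE maps Fibre
# Channel frames directly onto (lossless) Ethernet.

ISCSI_STACK = ["Ethernet", "IP", "TCP", "iSCSI", "SCSI"]
FCOE_STACK = ["Ethernet", "FCoE", "FC", "SCSI"]

def is_ip_routable(stack):
    """A stack can cross an IP router only if it carries an IP layer."""
    return "IP" in stack

print(is_ip_routable(ISCSI_STACK))  # True
print(is_ip_routable(FCOE_STACK))  # False
```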

Monitoring that storage is, of course, still required. But as long as your virtual servers have the correct drivers installed, available storage appears over a network connection rather than over dedicated cabling.

Figure 1 shows how a QLogic 10 FCoE Converged Network Adapter is presented as a network adapter inside VMware vSphere.

Begin exploring Fibre Channel over Ethernet. As a storage protocol, iSCSI is becoming a mature technology, but Fibre Channel over Ethernet is today’s emerging protocol. FCoE drivers are generally not available in the box in today’s virtualization platforms. They require separate downloads.

Many first-generation cards and drivers don’t deliver a measurable performance boost that makes migrating off existing hardware compelling. Further, many early-generation cards and drivers suffer installation idiosyncrasies, which limits a company’s interest in making the jump.

But none of these early issues should drive your data center away from FCoE. Rather, they should highlight the fact that smart organizations are beginning to explore the FCoE medium now in preparation for its effectiveness in years to come.

Why? For a simple reason: If you use Fibre Channel SANs today, the most cost-effective approach in moving to copper cabling is in keeping that Fibre Channel SAN investment in place—or at least parts of it.

Remember that a Fibre Channel SAN is part disks and part connecting medium. Fibre Channel over Ethernet seeks to replace Fibre Channel’s cables with easier-to-manage copper cables. Swapping out your Fibre Channel switching equipment for a copper alternative might be a trivial activity in the future.

So, plan now because most data centers want to eliminate the duplicate cabling—copper plus Fibre Channel—that currently runs underneath their data center floors.

Migrate to 10 GbE as soon as possible. The velocity of both SAN approaches—iSCSI and FCoE—points toward a future that runs atop 10 Gigabit Ethernet (10 GbE). Yesteryear’s IT workloads rarely needed more than a single gigabit of network connectivity because one physical server rarely ran more than one IT workload. Except with certain high-performance applications, network throughput was easily handled by a single gigabit link.

Yet virtualization’s aggregation of multiple workloads onto each server also consolidates network traffic needs. Add to that network traffic the much greater storage traffic, and you can easily see that single gigabit becoming a bottleneck.
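A quick tally shows how fast that bottleneck appears. Every number below is invented for illustration: ten consolidated VMs with modest average network loads, plus the storage traffic that now shares the same host.

```python
# Hypothetical tally: ten consolidated VMs plus storage traffic on one host.
# All figures are invented for illustration.

vm_network_mbps = [120, 80, 200, 60, 150, 90, 110, 70, 130, 100]
storage_mbps = 800        # consolidated iSCSI/FCoE storage traffic

total_mbps = sum(vm_network_mbps) + storage_mbps
print(total_mbps)         # 1910 -- almost twice a 1 GbE link
print(total_mbps / 1000)  # 1.91 fully saturated gigabit links
print(total_mbps / 10000) # 0.191 of a single 10 GbE link
```

Even modest per-VM averages overwhelm a single gigabit link once storage rides the same wire, while the same load barely dents a 10 GbE link.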

Most data centers today get around the gigabit limit through network card aggregation. iSCSI has long leaned on existing Ethernet protocols for link and port aggregation. This was necessary because of iSCSI’s introduction during a time period when a single gigabit was the norm. Although today’s FCoE cards run with a 10 GbE throughput, aggregating cards is still necessary for high availability.
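A round-robin path selector is the simplest way to picture what this card aggregation does. The sketch below is a toy with made-up port names; real MPIO policies also handle path failure and load-aware selection such as least queue depth.

```python
import itertools

class RoundRobinPaths:
    """Toy MPIO-style selector: spread I/O evenly across available ports.
    Port names are made up; real MPIO also handles failover and
    policies such as least-queue-depth."""

    def __init__(self, ports):
        self._cycle = itertools.cycle(ports)

    def next_port(self):
        return next(self._cycle)

paths = RoundRobinPaths(["nic0", "nic1"])
print([paths.next_port() for _ in range(4)])  # ['nic0', 'nic1', 'nic0', 'nic1']
```

With two ports, successive I/O requests alternate between them, roughly doubling usable bandwidth and surviving the loss of one link.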

Now hitting its prime, 10 GbE continues to mature, with commoditization steadily driving down its cost. Because both iSCSI and FCoE work best—or, in the case of FCoE, work at all—atop 10 GbE, your data center should begin exploring the necessary infrastructure enhancements now.

Greg Shields is an independent author, instructor, Microsoft MVP and IT consultant based in Denver. He is a co-founder of Concentrated Technology LLC and has nearly 15 years of experience in IT architecture and enterprise administration. Shields specializes in Microsoft administration, systems management and monitoring, and virtualization. He is the author of several books, including Windows Server 2008: What's New/What's Changed, available from Sapien Press.

Article 3 of 3
