Fibre Channel over Ethernet (FCoE) technology allows Fibre Channel storage data to be encapsulated for transfer
across a conventional Ethernet LAN, eliminating the expense and management burden of a separate storage network in the data center. With FCoE gaining acceptance, we spent some time discussing the technology with Dennis Martin, president of Demartek LLC, a storage industry analyst firm located in Arvada, Colo.
Q. What are the principal hardware requirements to facilitate FCoE deployment (network interface cards, switches, etc.)? Are there any hardware recommendations/suggestions that would enhance FCoE deployment?
Dennis Martin: In general, FCoE requires switches that have data center bridging (DCB), which provides the extensions to traditional Ethernet that make it suitable for transporting storage traffic losslessly. DCB capability is available in 10 GbE switches from a few – but not all – Ethernet switch vendors. Adapters that work with FCoE are known as converged network adapters (CNAs). CNAs are available from traditional Ethernet and Fibre Channel host bus adapter (HBA) vendors and carry Ethernet and Fibre Channel traffic simultaneously over the same wire. These CNAs run at 10 Gbps for both Ethernet and Fibre Channel.
Q. What are the principal software requirements to facilitate FCoE deployment (operating systems, switch software versions, drivers and so on)? How much of the software stack needs to be "FCoE ready?"
Martin: We've been running FCoE in the lab since 2008, primarily on Windows systems. FCoE is also supported on recent versions of Linux, Solaris and other operating systems. Each CNA vendor provides drivers for the appropriate environments. Many of the CNA vendors now use the same set of drivers for native Fibre Channel and FCoE. FCoE is also supported in VMware. There are some efforts underway to make FCoE as much a part of some operating systems as iSCSI is today, but it remains to be seen how widely this will be adopted.
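On Linux, for instance, software FCoE initiator support typically comes from the open-fcoe tool set (fcoeadm, fcoemon) together with lldpad for DCB negotiation. The following is a minimal configuration sketch only; the interface name eth2 is an assumption, and the exact package, file and service names vary by distribution, so treat this as illustrative rather than a definitive procedure.

```shell
# Assumes the open-fcoe and lldpad packages are installed and that
# eth2 is a DCB-capable 10 GbE interface (name is illustrative).

# Create a per-interface config from the sample shipped with open-fcoe.
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2

# In /etc/fcoe/cfg-eth2, enable FCoE and require DCB on this interface:
#   FCOE_ENABLE="yes"
#   DCB_REQUIRED="yes"

# Turn on DCB, priority flow control and the FCoE application TLV
# for the interface (dcbtool is part of the lldpad package).
dcbtool sc eth2 dcb on
dcbtool sc eth2 pfc e:1
dcbtool sc eth2 app:fcoe e:1

# Start the daemons and verify that the FCoE interface came up.
systemctl start lldpad fcoe
fcoeadm -i          # lists FCoE interfaces and their link state
```

Because the switch side must also advertise DCB/PFC for the link to become lossless, the `fcoeadm -i` output is the quickest check that both ends negotiated correctly.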
Q. Are there any particular storage subsystem considerations needed to support FCoE? Are there any features of the Fibre Channel subsystem that readers should look for?
Martin: FCoE can be supported natively in storage subsystems, and some vendors have announced support for it. One storage vendor, NetApp, has had native FCoE support for some time. Others have announced support more recently. FCoE fabrics must interoperate with native Fibre Channel fabrics and FCoE must support all the Fibre Channel features. We have tested servers with FCoE CNAs connected to DCB/FCoE switches that also have native Fibre Channel ports connected to native Fibre Channel storage systems, and these worked as expected. At the storage system interface level, FCoE is Fibre Channel operating at 10 Gbps. The only difference is that it is connected to a DCB/FCoE switch, rather than a native Fibre Channel switch.
Q. What kind of management tools are appropriate for FCoE storage? How well is FCoE supported by third-party data center management tools, or is it better to focus on management tools that accompany the storage subsystem?
Martin: DCB/FCoE switches have their own interfaces for zoning, but depending on the vendor, these interfaces are similar to their respective Fibre Channel interfaces. The HBA/CNA vendors use the same management interfaces that they used previously for their adapters. Storage vendors that support FCoE make it look just like Fibre Channel, for the most part.
Although we haven't tested many third-party storage management software applications for FCoE compatibility, FCoE should look just like Fibre Channel to these management tools. The primary difference will be that FCoE storage will be connected to different switches than native Fibre Channel storage.
Q. What other best practices can you suggest for FCoE deployment?
Martin: FCoE is what I consider a “slow burn technology” – it should be considered for those planning new data centers or new server and storage build-outs. The biggest issue with respect to best practices for FCoE deployment is to get the Ethernet networking people to understand a little about storage networking, and the storage people to understand a little about Ethernet networking, as these have been two different disciplines until now. Things such as cabling, which changes even more slowly than storage systems, also need to be considered. For example, OM3 and OM4 cabling are suitable for FCoE and 10 GbE, as well as faster Fibre Channel speeds.
We provide some good reference information for all the storage interfaces, including FCoE, on the Demartek Storage Interface Comparison page at www.demartek.com/SNIC or by searching for "Storage Interface Comparison" in any major Internet search engine. We also have the Demartek FCoE Zone at www.demartek.com/FCoE, where readers can find some test results for FCoE products.