Get more from your data center by following iSCSI best practices

Following these iSCSI best practices can enhance the performance and reliability of iSCSI deployments in enterprise data centers.

As technology advances, so do the demands on your data center. A bottleneck in the storage network hinders the effective use of server virtualization and hampers the ability to centralize virtual machine (VM) images, snapshots, virtual desktop instances and other content.

Today, Ethernet-based storage networks, such as iSCSI, have emerged as reliable and economical alternatives to traditional Fibre Channel networks. But iSCSI can still pose deployment problems and performance bottlenecks for unprepared IT organizations. Let’s run down a series of iSCSI best practices that can help businesses adopt and deploy iSCSI technology most effectively from the start.

Update the network to gigabit Ethernet or faster. The days of 10/100 Mbps Ethernet are long behind us, and almost all current servers, switches and other data center infrastructure support 1 Gigabit Ethernet (GbE), which should be the minimum network bandwidth requirement for iSCSI. Faster 10 GbE is preferable because it eases potential bandwidth contention among storage, application and user traffic on the same LAN. Review the network architecture to locate and remediate slower servers, and identify opportunities to deploy 10 GbE in busy backbone network segments.

When gigabit Ethernet is employed, it may take multiple server-class network interface cards (NICs) to provide the bandwidth needed for demanding applications and multiple virtualized workloads. When 10 GbE is available for iSCSI deployment, performance may be improved by optimizing disk striping, formatting, provisioning and other tasks that focus on the storage array rather than the network proper.
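As a rough back-of-the-envelope check, the NIC sizing described above can be sketched in a few lines of Python. The usable-throughput figure per GbE link is an assumption (a conservative 100 MB/s after protocol overhead), not a measured value:

```python
# Rough estimate of how many 1 GbE NICs a host needs for a given
# aggregate storage throughput. The per-link figure is an assumption:
# a 1 GbE link typically moves 110-118 MB/s of payload after protocol
# overhead, so 100 MB/s is used as a conservative planning number.
import math

USABLE_MBPS_PER_GBE_NIC = 100  # assumed usable MB/s per 1 GbE link

def nics_needed(required_mb_per_s: float) -> int:
    """Return the number of 1 GbE NICs needed for the workload."""
    return math.ceil(required_mb_per_s / USABLE_MBPS_PER_GBE_NIC)

# Example: six virtualized workloads averaging 60 MB/s of storage I/O each.
print(nics_needed(6 * 60))  # 360 MB/s -> 4 NICs
```

A single 10 GbE link would absorb that same 360 MB/s with plenty of headroom, which is why the tuning focus shifts to the storage array once 10 GbE is in place.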

Update network cabling to support the LAN. Copper Ethernet cables are the simplest, most popular and least expensive form of network cabling. However, GbE and 10 GbE can put serious demands on copper cables, so it’s important to follow up any network architecture review with a cabling review.

Technically, GbE can run over Category 5e (Cat5e) or Category 6 (Cat6) copper cabling, while 10 GbE requires Category 6a (Cat6a) or Category 7 (Cat7) copper for full-length runs – Cat6 can carry 10 GbE only up to 55 meters. Lower cabling categories may not support the top speed of faster Ethernet, so plan cabling updates to take advantage of top LAN speeds, especially in network backbone segments.
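The distance limits above can be captured as a simple lookup. This is a rule-of-thumb sketch only – the categories and limits follow the figures in the text, and a real cable plant should be verified with a certification tester:

```python
# Simplified rule-of-thumb table for copper cabling vs. Ethernet speed.
# (category, speed) -> maximum run length in meters; a missing entry
# means the combination is not rated at all.
MAX_RUN_M = {
    ("cat5e", "1gbe"): 100,
    ("cat6",  "1gbe"): 100,
    ("cat6",  "10gbe"): 55,   # Cat6 carries 10 GbE only on short runs
    ("cat6a", "10gbe"): 100,
    ("cat7",  "10gbe"): 100,
}

def run_ok(category: str, speed: str, length_m: int) -> bool:
    """True if the cable category supports the speed at this run length."""
    limit = MAX_RUN_M.get((category, speed))
    return limit is not None and length_m <= limit

print(run_ok("cat6", "10gbe", 40))   # True: within the 55-meter limit
print(run_ok("cat6", "10gbe", 90))   # False: too long for Cat6 at 10 GbE
print(run_ok("cat5e", "10gbe", 10))  # False: Cat5e is not rated for 10 GbE
```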

Use a storage fabric architecture for resilience. Consider the need for storage network resilience when evaluating the network architecture for iSCSI. If resilience is needed, consider a fabric-type network design that interconnects redundant NICs, switches and other devices within the LAN. The goal is to eliminate single points of failure in the network that could cut off communication between storage initiators and targets – a catastrophic event for almost any enterprise application. Nonessential servers may not benefit from such resilience, but the added cost and complexity of a resilient network may be well worth the effort for mission-critical servers and workloads that demand reliable storage access.

Consider the network adapters for iSCSI. Traditional network adapters can impose significant processor overhead. This might not seem like a major concern for today’s multicore processors, but there are several technologies available to ease processor overhead and vastly improve iSCSI performance, especially when mixing storage and non-storage traffic on the same LAN. The best approach is to adopt enterprise-class "offload-capable" network adapters that provide TCP/IP offload or iSCSI offload capabilities.

TCP/IP offload capabilities are not new – in essence, the “offload” implements the TCP/IP stack in the network adapter’s hardware, relieving the processor of those tasks. Many modern network adapters implement Microsoft’s more recent TCP Chimney offload architecture, available in all versions of Windows Server 2008, which handles both IPv4 and IPv6 connections. However, TCP Chimney may not be compatible with Hyper-V. TCP/IP offload-capable NICs are available for GbE and 10 GbE, and will help accelerate all types of network communication.

Similarly, network devices with iSCSI offload capabilities include their own iSCSI initiator hardware on the adapter that handles iSCSI traffic specifically. Host bus adapters with iSCSI offload are available for GbE and 10 GbE LANs.

Enable jumbo frames for iSCSI. Use network devices (adapters, switches, routers, storage targets and so on) that support jumbo frames. A normal Ethernet frame carries a payload of up to 1,500 bytes in addition to the frame’s own overhead – addressing and error-checking information that lets systems deliver frames, detect damage and, at higher protocol layers, reorder data and request resends. As a consequence, a great many individual frames (and a substantial amount of overhead) may be needed to transfer a file or other data across the network.

Jumbo frames allow a much larger data payload in each Ethernet frame – typical jumbo frames carry 4,000, 9,000 or even 14,000 bytes of data per frame. The ratio of overhead to data is therefore much smaller, making the data exchange more efficient. However, each physical and virtual element in the network must support the same jumbo frame size. If not, noncompliant components will need to be upgraded (or the frame size reduced) to achieve end-to-end compatibility.
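A quick calculation shows why jumbo frames help. The per-frame figures below are the usual on-the-wire Ethernet numbers (14-byte header, 4-byte frame check sequence, 8-byte preamble and 12-byte inter-frame gap); treat the example as an illustration rather than a precise model of iSCSI traffic:

```python
# Illustration of why jumbo frames reduce overhead. Each payload rides
# with a fixed amount of on-the-wire overhead: 14-byte header + 4-byte
# FCS, plus the 8-byte preamble and 12-byte inter-frame gap.
import math

PER_FRAME_OVERHEAD = 14 + 4 + 8 + 12  # 38 bytes alongside each payload

def frames_and_efficiency(total_bytes: int, payload: int):
    """Frames needed to move total_bytes, and the payload efficiency."""
    frames = math.ceil(total_bytes / payload)
    efficiency = payload / (payload + PER_FRAME_OVERHEAD)
    return frames, efficiency

# Moving a 1 GiB file with standard vs. 9,000-byte jumbo frames:
one_gib = 1024 ** 3
for size in (1500, 9000):
    frames, eff = frames_and_efficiency(one_gib, size)
    print(f"{size}-byte payload: {frames} frames, {eff:.1%} efficient")
```

The jumbo-frame transfer needs roughly one-sixth as many frames, and each frame spends a smaller fraction of its bits on overhead.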

Adopt Receive-Side Scaling technology. You’ve already seen that TCP/IP can place a burden on a processor, but an additional problem is that TCP/IP processes all incoming traffic on a single processor core – it does not spread the workload across multiple cores. This is a holdover from the legacy design of the TCP/IP stack, which dates from the days of single-core processors. Receive-Side Scaling (RSS) capability in the network adapter overcomes this limitation by distributing incoming network frames across multiple processor cores. It is not a critical technology (especially if offload-type adapters are used), but it is strongly encouraged as a best practice for iSCSI.
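The RSS idea can be illustrated with a toy hash. Real RSS uses a Toeplitz hash computed in NIC hardware against an indirection table; the sketch below substitutes an ordinary software hash purely to show the distribution principle:

```python
# Simplified sketch of the RSS principle: hash each flow's 4-tuple so
# that every frame of one TCP connection lands on the same core, while
# different connections spread across cores. Real RSS uses a Toeplitz
# hash in NIC hardware; this stand-in only demonstrates the idea.
import hashlib

NUM_CORES = 4

def core_for_flow(src_ip: str, src_port: int,
                  dst_ip: str, dst_port: int) -> int:
    """Map a flow's 4-tuple to a processor core index."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return hashlib.sha256(key).digest()[0] % NUM_CORES

# Frames of the same connection always hit the same core...
a = core_for_flow("10.0.0.5", 50123, "10.0.0.20", 3260)
assert a == core_for_flow("10.0.0.5", 50123, "10.0.0.20", 3260)

# ...while many connections spread out across the available cores.
cores = {core_for_flow("10.0.0.5", p, "10.0.0.20", 3260)
         for p in range(50000, 50100)}
print(sorted(cores))
```

The per-flow consistency matters: delivering one connection's frames to different cores would reintroduce the reordering work that RSS is meant to avoid.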

Segregate the storage and LAN traffic. Although iSCSI allows storage and regular LAN traffic to share the same physical network, some organizations may still opt to separate the storage and non-storage traffic using virtual LANs (VLANs) or separate physical networks. This is particularly important in GbE LANs, but may not be essential in all 10 GbE networks.

For example, a VLAN allows a single physical network to be segregated into two or more logical networks. This is an ideal means of separating storage traffic from non-storage traffic, ensuring that storage traffic is only available to the server and storage subsystems.
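As a small illustration of that separation, the check below uses Python's standard ipaddress module to confirm that a host sits on an assumed storage subnet. The 192.168.50.0/24 subnet and the addresses are made-up examples, not values from any particular deployment:

```python
# Sketch of keeping iSCSI traffic on its own segment: given an assumed
# storage-VLAN subnet, verify that an address belongs to it before
# treating the host as part of the storage network.
import ipaddress

STORAGE_SUBNET = ipaddress.ip_network("192.168.50.0/24")  # assumed storage VLAN

def on_storage_vlan(address: str) -> bool:
    """True if the host address belongs to the storage-VLAN subnet."""
    return ipaddress.ip_address(address) in STORAGE_SUBNET

print(on_storage_vlan("192.168.50.12"))  # True: address on the storage VLAN
print(on_storage_vlan("192.168.10.7"))   # False: general-LAN address
```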

It is also possible to create a separate physical LAN that is dedicated to storage traffic. This would include separate network adapters, cabling, switches and so on. This is the costliest option because the physical LAN elements are duplicated, but it also supplies full network bandwidth to storage. It also provides the best security because there is no chance for storage and non-storage data to mix on the same wires.
