Ethernet is the predominant network technology for data centers and business local area networks (LANs), but it often struggles to support modern workloads like storage data, real-time voice and video. Ethernet was designed for a world of simple file transfers and small packets of data that rely on contention to access the wire. Even with vastly expanded bandwidth, Ethernet can be inefficient for time-sensitive traffic that is averse to packet loss.
Network interface cards (NICs), sometimes called network interface controllers, are evolving to include more features and intelligence that will boost network performance, including jumbo frames and offload capabilities, packet tagging, buffer and spacing tweaks, and more. Some of these NIC features have caveats for use in the data center, however.
Efficient CPU use: Jumbo frames vs. offload capabilities
If server performance is lagging, network-intensive workloads could be the cause. A standard Ethernet frame carries at most 1,500 bytes of payload (about 1,518 bytes with headers and the frame check sequence); most files are broken into hundreds, thousands or even millions of frames. These small frames, individually transferred across the network, use the wire efficiently and share the network with a multitude of nodes, but sending and receiving each frame requires CPU overhead.
Most NICs support jumbo frames, which means handling packets, or frames, of up to 9,000 bytes. Jumbo frames contain more data in each packet, so fewer packets are needed to convey data across the network. Throughput improves with less overhead -- packet headers and other packet content -- and CPU overhead shrinks.
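The savings are easy to estimate. As a rough sketch (the 1,500-byte standard MTU, 9,000-byte jumbo MTU and 18 bytes of Ethernet header plus frame check sequence per frame are illustrative round numbers, not exact wire-level figures):

```python
# Estimate frame counts and header overhead for one transfer,
# comparing a standard 1,500-byte MTU with a 9,000-byte jumbo MTU.
import math

ETH_OVERHEAD = 18  # illustrative bytes of Ethernet header + FCS per frame

def frames_and_overhead(payload_bytes: int, mtu: int) -> tuple[int, int]:
    """Return (frame count, total header/FCS bytes) for a transfer."""
    frames = math.ceil(payload_bytes / mtu)
    return frames, frames * ETH_OVERHEAD

transfer = 1_000_000_000  # a 1 GB file
std_frames, std_overhead = frames_and_overhead(transfer, 1500)
jumbo_frames, jumbo_overhead = frames_and_overhead(transfer, 9000)

print(f"standard: {std_frames:,} frames, {std_overhead:,} overhead bytes")
print(f"jumbo:    {jumbo_frames:,} frames, {jumbo_overhead:,} overhead bytes")
```

Jumbo frames cut the frame count, and with it the per-frame header bytes and per-frame CPU work, by roughly a factor of six in this sketch.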
Jumbo frames do have disadvantages. The administrator must configure every node on the network to support jumbo frames for nodes to communicate properly. Jumbo frames are not part of the IEEE standard, so different NICs configure jumbo frame sizes differently; it may take some experimentation to properly configure every node for jumbo frame operation. In addition, larger packets can add latency for some workloads because other nodes wait longer to access the wire, and dropped or corrupted packets will take longer to request and resend.
IT professionals may forgo jumbo frames in favor of NICs with large send offload (LSO) and large receive offload (LRO) capabilities. LSO and LRO allow the CPU to transfer much larger quantities of data to (outbound) or from (inbound) the NIC with far less processing, essentially providing the same CPU performance benefit as jumbo frames.
Traffic at capacity: Adjustable interframe spacing vs. Ethernet upgrades
Ethernet waits a set time between each packet send, which is called interframe spacing. This gives other network nodes an opportunity to grab the wire and send a packet. Interframe spacing equals the time it takes to transmit 96 bits on the wire. For example, 1 gigabit Ethernet uses a standard interframe spacing of 0.096 microseconds, and 10 gigabit Ethernet uses one-tenth that gap, or 0.0096 microseconds.
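The 96-bit rule makes the gap easy to compute for any line rate; a quick sketch:

```python
# Interframe gap = time to transmit 96 bits at the link's line rate.
IFG_BITS = 96

def interframe_gap_us(link_bps: float) -> float:
    """Interframe spacing in microseconds for a given line rate."""
    return IFG_BITS / link_bps * 1e6

for name, bps in [("1 GbE", 1e9), ("10 GbE", 10e9), ("100 GbE", 100e9)]:
    print(f"{name}: {interframe_gap_us(bps):.5f} microseconds")
```

Because the gap is a fixed number of bit times, it shrinks in direct proportion as the link speed grows.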
This fixed spacing between transmission attempts isn't always efficient and can degrade network performance under heavy traffic conditions. NICs that support adaptive interframe spacing can dynamically adjust the interframe spacing based on network traffic, potentially boosting network performance. Adjusting the interframe spacing offers little benefit to network performance unless you're approaching the network's full bandwidth.
Comprehensive network performance benchmarking can reveal utilization patterns. If the Ethernet link frequently reaches capacity, upgrading to a faster Ethernet link or implementing NIC teaming will provide a longer-term fix than adjusting the interframe spacing.
Interrupt throttling for CPU performance
When packets move along the network, NICs generate interrupts for the CPU. At faster Ethernet speeds, the CPU interrupt rate increases, and the CPU must give more attention to network drivers and other software that handles the packets. If traffic levels spike and drop, CPU performance can become erratic. NICs that support interrupt throttling artificially reduce the CPU interrupt rate, freeing the CPU from unlimited NIC interrupts and potentially boosting CPU performance.
More throttling isn't necessarily better. You can slow a CPU's responsiveness with high interrupt throttling; it will take longer for the CPU to get around to handling all of the interrupts being generated. With a high rate of small packets coming in at close to real-time conditions, throttling degrades performance rather than enhancing it. Test network and CPU performance in various throttling modes until you can establish adequate system responsiveness while smoothing out interrupt demands on the CPU.
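A simple back-of-the-envelope model shows why throttling helps under bursty load but can hurt latency-sensitive traffic. The packet rate and coalescing interval below are illustrative, not vendor defaults:

```python
# Without coalescing, the NIC raises one interrupt per received packet.
# With coalescing, it raises at most one interrupt per coalescing window,
# batching every packet that arrives within that window.

def interrupts_per_second(packets_per_second: float,
                          coalesce_usecs: float) -> float:
    """Upper bound on the CPU interrupt rate with a fixed coalescing window."""
    if coalesce_usecs <= 0:
        return packets_per_second      # one interrupt per packet
    max_rate = 1e6 / coalesce_usecs    # one interrupt per window
    return min(packets_per_second, max_rate)

pps = 1_000_000  # one million small packets per second
print(interrupts_per_second(pps, 0))   # no throttling
print(interrupts_per_second(pps, 50))  # 50-microsecond window
```

In this model a 50-microsecond window drops the interrupt load from one million to 20,000 interrupts per second, but each batched packet can now wait up to 50 microseconds before the CPU sees it, which is the latency cost the paragraph above warns about.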
Alternatively, consider NICs with TCP/IP offload capabilities. These can handle many of the CPU-intensive tasks onboard, also reducing interrupt demands on the CPU.
Prioritizing time-sensitive data types: Enable packet tagging
Time-sensitive data types such as Voice over IP (VoIP) and video should be treated as higher-priority traffic than simple file transfers, but by default the network treats every packet on the wire equally. NICs that support packet tagging can mark packets for priority handling. Tagged packets can then be sorted into a traffic queue set up by the operating system (such as Windows Server 2012), pushing higher-priority VoIP and video packets to the front of the line, ahead of lower-priority packets. Packet tagging is instrumental to quality of service (QoS) strategies and is an essential part of many virtual LAN (VLAN) deployments.
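Applications can also request priority treatment themselves. As one sketch, on Linux a Python program can mark its outbound UDP packets with the Expedited Forwarding DSCP class commonly used for VoIP by setting the IP type-of-service byte; switches and routers along the path must still be configured to honor the marking:

```python
import socket

# DSCP EF (Expedited Forwarding, value 46) shifted into the upper six
# bits of the IP TOS byte -- the marking commonly used for VoIP traffic.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Datagrams sent on this socket now carry the EF marking, so
# QoS-aware switches can queue them ahead of best-effort traffic.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

The marking only expresses a request; without QoS policies on the intervening network gear, tagged packets are forwarded like any others.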
Only apply changes to NICs if network performance falls below defined benchmarks, and always roll out changes after controlled testing with server and NIC benchmarking. These recommended NIC adjustments won't make the dramatic improvements that a network overhaul accomplishes, but they also aren't limited by budget or logistical concerns. Evaluate network performance changes over time and look for any unintended consequences, such as a boost to one workload but a detriment to others.