
Using TCP stack, segmentation, receive side scaling or checksum offload

Most servers need multiple network adapter ports to deliver adequate connectivity and throughput. Now, a variety of offloading technologies are taking network processing off the CPU's to-do list.

With traditional networking, the processor handles tasks such as assembling and disassembling IP packets and calculating and inspecting checksum values. More network adapters mean more demands on the processor. Offloading works with a suitable server operating system, such as Windows Server 2008 R2, to move these tasks to the network adapter instead.

The principal offload technologies cover checksums, segmentation, the Transmission Control Protocol (TCP) stack and receive side scaling (RSS). Once administrators understand the offload options available today, they can select network adapters that meet their IT shop's needs.

Checksum offload


Checksum algorithms scan TCP and User Datagram Protocol (UDP) packet data to catch errors. Checksums travel with the packets and are validated at the receiving network adapter.

Active nodes exchange millions -- even billions -- of packets each day. If the server processor can offload calculating and comparing the checksums to the network adapter, it can improve system performance.
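
As a rough illustration of the work being offloaded, the Python sketch below computes the standard ones'-complement Internet checksum over a buffer. It is simplified (real TCP and UDP checksums also cover a pseudo-header containing the source and destination IP addresses), but it shows the per-word arithmetic the processor would otherwise repeat for every packet.

def internet_checksum(data: bytes) -> int:
    # Sum the data as 16-bit words, folding any carry back into the low bits,
    # then return the ones' complement of the total.
    if len(data) % 2:
        data += b"\x00"                        # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

print(hex(internet_checksum(b"example payload")))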

Select a network adapter that can offload checksum calculations for IPv4 and IPv6 sends and receives. Network adapters used for more secure communication may also be able to offload encrypted checksum calculation and validation from the server processor.

Segmentation offload

Data moves over the network in segmented, 1,448-byte packets, complete with TCP, IP and data link layer headers. The processor traditionally segments the data and prepares the packets. For example, moving a 65,536-byte chunk of data would require the processor to create and send at least 46 packets (65,536 divided by 1,448 is about 45.3, which rounds up to 46).
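
As a simple illustration of that per-segment work, the Python sketch below slices a 64 KB buffer into 1,448-byte payloads; each slice would still need TCP, IP and data link headers before it could go on the wire.

# Illustrative sketch of the segmentation work the host CPU performs when the
# network adapter does not offload it: a 65,536-byte buffer is cut into
# 1,448-byte payloads, each of which still needs headers added.
MSS = 1448                      # typical TCP payload size per packet
data = bytes(65536)             # a 64 KB chunk of application data

segments = [data[i:i + MSS] for i in range(0, len(data), MSS)]
print(len(segments))            # 46 segments, matching the example above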

If the network adapter performs segmentation offload, the processor can hand the entire piece of data to the network adapter. This is often referred to as TCP segmentation offload (TSO) or large send offload (LSO). Receiving network adapters reverse this process and extract the data payload without any direct intervention from the processor; a complete piece of data gets handed off to the server in a step called large receive offload (LRO).

TCP offload

There is a strong argument in favor of TCP offload, or moving the entire TCP stack to hardware, out of the operating system where the CPU does all the work. In practical terms, this involves taking layer 3 (network/IP) and layer 4 (transport/TCP) down to the network adapter, which can perform a multitude of data organization and movement tasks via its TCP offload engine (TOE).

Invest in network adapters with full TOE capabilities when you can justify the cost in the number of CPU cycles freed up by the offload. A traditional network adapter running at gigabit Ethernet speeds can demand more than 70% of a CPU's processing capacity, straining applications on the server. Bandwidth-hogging network-based storage like iSCSI is a prime motivator for TOE adapters.
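
To put a rough number on those CPU cycles, the Python sketch below applies the often-quoted rule of thumb of roughly 1 Hz of CPU for every 1 bit/s of TCP traffic processed in software. The clock speeds are assumptions chosen purely for illustration, not measurements.

# Back-of-envelope sketch of the CPU cost a TOE can remove, using the rough
# rule of thumb of about one CPU cycle per bit of TCP traffic handled in
# software. The figures below are illustrative assumptions.
link_rate_bps = 1_000_000_000                  # gigabit Ethernet at line rate

for cpu_hz in (1_400_000_000, 3_000_000_000):  # an older core vs. a newer one
    utilization = link_rate_bps / cpu_hz       # ~1 cycle per bit moved
    print(f"{cpu_hz / 1e9:.1f} GHz core: ~{utilization:.0%} busy on networking")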

When full TOE capability is not needed, choose network adapters that enable a subset of TOE activities, such as checksum offload, TSO or LRO.

Receive side scaling

It takes time to reassemble the data extracted from individual packets -- especially if a processor is handling packets from multiple network ports and applications. RSS spreads the incoming packets across several processor cores (typically physical cores rather than hyperthreaded logical processors), so that the same core always handles packets from the same TCP connection. When one core always works on the same data stream, it's much easier and faster for the receiving server to reassemble incoming data.
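
The idea behind RSS can be sketched in a few lines of Python: hash each connection's address and port tuple and use the result to pick a processor, so every packet on that connection lands in the same place. Real adapters use a Toeplitz hash and an indirection table; the generic CRC hash below is only a stand-in for illustration.

import zlib

NUM_PROCESSORS = 4   # processors available to receive side scaling

def rss_queue(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    # Hash the connection 4-tuple; the same connection always maps to the
    # same processor, so one core handles the whole data stream.
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_PROCESSORS

print(rss_queue("10.0.0.5", 49152, "10.0.0.10", 443))   # always the same index
print(rss_queue("10.0.0.6", 50000, "10.0.0.10", 443))   # may land on another core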

In most cases, file servers, Web servers and database servers benefit from a full suite of offload features including checksum offloads, segmentation offloads, TOE and receive side scaling. Other server types can enable offload features more selectively. Segmentation offload won't do much for an email server handling short messages, for example, while a media server gains plenty from it because the bulk of the server's effort is spent moving large pieces of data anyway.

Gauge server performance before and after enabling each offload feature so that benefits can be quantified objectively.
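
One way to gather that evidence is to capture CPU and network counters around an identical test workload with each offload setting. The Python sketch below assumes the third-party psutil package and a placeholder run_test_workload() function supplied by you, such as a large file copy to a network share.

# Minimal sketch for comparing runs with an offload feature on and off:
# capture CPU and network counters around the same test workload, then
# compare the results. Assumes the third-party psutil package and a
# run_test_workload() callable that you provide.
import psutil

def measure(run_test_workload):
    net_before = psutil.net_io_counters()
    psutil.cpu_percent(interval=None)          # reset the CPU counter
    run_test_workload()                        # e.g., a large file copy
    cpu = psutil.cpu_percent(interval=None)    # average CPU % during the run
    net_after = psutil.net_io_counters()
    moved = (net_after.bytes_recv - net_before.bytes_recv +
             net_after.bytes_sent - net_before.bytes_sent)
    print(f"CPU: {cpu:.1f}%  data moved: {moved / 1e6:.1f} MB")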

This was first published in November 2013
