Cut-through versus store-and-forward in Ethernet switch architecture

Switches in the data center can push frames using either store-and-forward or cut-through. Is one Ethernet switch architecture better than the other?

There are two distinct Ethernet switch architectures: cut-through and store-and-forward.

Data is sent between Ethernet-enabled devices as a series of individual messages, or frames. Frames contain a header, a data payload and an error checksum. The switch architecture determines how a frame transits an Ethernet switch. A cut-through device begins forwarding a frame after examining only the first part of its header. In contrast, a store-and-forward switch buffers the entire frame before making a forwarding decision. This buffering before transmission adds latency that grows with frame size and accumulates at every hop.
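The latency difference can be sketched with simple arithmetic: a store-and-forward hop must wait for the whole frame to arrive before retransmitting, while a cut-through hop only needs enough of the header to pick an output port. The 14-byte lookup depth below is an illustrative assumption (the destination MAC sits in the first 14 bytes of an Ethernet header); real switches vary.

```python
def serialization_delay_us(frame_bytes: int, link_gbps: float) -> float:
    """Time to clock frame_bytes onto (or off) a link, in microseconds."""
    return frame_bytes * 8 / (link_gbps * 1e3)

# Store-and-forward: each hop adds one full frame's serialization delay.
# Cut-through: forwarding can begin once the header has arrived
# (assume the 14-byte Ethernet header suffices for the port lookup).
frame = 1500   # bytes
link = 10.0    # Gb/s
sf_per_hop = serialization_delay_us(frame, link)
ct_per_hop = serialization_delay_us(14, link)
print(f"store-and-forward adds {sf_per_hop:.3f} us per hop")   # 1.200 us
print(f"cut-through adds       {ct_per_hop:.4f} us per hop")   # 0.0112 us
```

Across a multi-tier data center fabric of several hops, that per-hop penalty is why cut-through designs are favored for latency-sensitive traffic.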

Cut-through designs typically deliver lower latency, but there are drawbacks. The biggest issue is that cut-through switches will forward corrupted frames since they don't wait to see if the checksum at the end of each frame is valid. In contrast, a store-and-forward switch, having read the entire frame, can discard corrupted data, preventing it from entering the network and using resources unnecessarily. In large networks, forwarding such frames can be a significant problem, particularly across wide multicast or broadcast domains, where the corrupt data propagates over many segments of a network.
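The checksum trade-off follows directly from frame layout: the frame check sequence (FCS) is a CRC-32 at the *end* of the frame, so a cut-through switch has already transmitted most of the frame by the time it can verify it. A minimal sketch of the store-and-forward check, using `zlib.crc32` as a stand-in (Ethernet's FCS uses the same CRC-32 polynomial, though real FCS computation involves specific bit ordering over the full header and payload):

```python
import zlib

def fcs(frame: bytes) -> int:
    """Simplified frame check sequence: CRC-32 over the frame bytes."""
    return zlib.crc32(frame) & 0xFFFFFFFF

def store_and_forward_ok(frame: bytes, received_fcs: int) -> bool:
    """A store-and-forward switch holds the whole frame, so it can
    verify the trailing checksum and drop the frame on a mismatch."""
    return fcs(frame) == received_fcs

good_frame = b"example payload"
good_fcs = fcs(good_frame)
corrupted = b"examp1e payload"   # one corrupted byte in transit

print(store_and_forward_ok(good_frame, good_fcs))   # True  -> forward
print(store_and_forward_ok(corrupted, good_fcs))    # False -> discard
```

A cut-through switch running the same check could at best flag the error after the fact; the corrupted bytes are already on the wire to the next hop.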

Cut-through Ethernet switch architectures can also limit routing and arbitration decisions in switches between source and destination endpoints, because the frame's contents cannot be inspected to inform those decisions: by the time the payload arrives, the switch has already begun forwarding.

Switch makers can mix the two techniques to get sound routing and arbitration decisions while avoiding the forwarding of corrupted frames -- for example, using store-and-forward at ingress while operating cut-through within larger multi-chassis fabrics to improve performance.

The networking community has defined standard methods (RFC 1242 and RFC 2544) for measuring unicast latency for both store-and-forward and cut-through devices at a rate where no packet loss is observed.

Latency at small packet sizes (64 bytes to 300 bytes) is a key metric for transactional applications, such as UDP- or TCP-based high-frequency trading, or scientific and engineering codes using parallel programming paradigms such as the Message Passing Interface. At larger packet sizes, throughput rather than latency is generally the more relevant performance indicator.
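The small-packet regime is where per-frame overhead dominates, which a rough wire-time calculation makes concrete. The sketch below adds the fixed 20 bytes of preamble and inter-frame gap that every Ethernet frame carries on the wire, so small frames pay proportionally more overhead and arrive at a much higher packet rate:

```python
def wire_time_us(frame_bytes: int, link_gbps: float) -> float:
    """On-the-wire time per frame, including the 8-byte preamble
    and 12-byte inter-frame gap that accompany every Ethernet frame."""
    return (frame_bytes + 20) * 8 / (link_gbps * 1e3)

for size in (64, 300, 1500):
    t = wire_time_us(size, 10.0)
    rate_mpps = 1 / t   # frames per microsecond == millions of frames/s
    print(f"{size:>5} B: {t:.4f} us on the wire, {rate_mpps:.2f} Mpps")
```

At 64 bytes a 10 Gb/s link carries roughly 14.88 million frames per second, so per-frame switch latency is the bottleneck; at 1,500 bytes the link spends most of its time moving payload, and sustained throughput matters more.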

ABOUT THE AUTHOR:
Bob Fernander is the CEO of Gnodal, a supplier of high-speed switches for data centers. Fernander is focused on expanding the industry's use of built-in congestion-avoidance capabilities for large data sets, high-computational applications and massive storage demands prevalent in high-performance computing, cloud and "big data" environments, particularly among high-frequency traders and exchanges.

This was first published in December 2012
