Kick application latency off your network

Slow, fussy application? Run through this list of application latency and bandwidth fixes before spending the money to augment network capacity.

Bandwidth and latency complications perennially slow application performance over a network.

Application delivery to endpoint client systems requires LAN connectivity at the least, and increasingly relies on WAN connections between distributed enterprise data centers.

As applications exchange greater quantities of data and more systems compete for network access, some applications get starved for bandwidth. Servers routinely use Gigabit Ethernet network ports, and LAN architectures include 10 GigE backbones to carry more data from a large number of systems. If that isn't enough, even faster 40 GigE and 100 GigE technologies are available for LAN connectivity.

Long physical distances between sending and receiving systems, as well as network complexity, cause application latency. For example, latency is negligible when data moves between two servers in the same rack through a single switch, but round trips between servers on opposite sides of the world, across a dozen router hops, can take hundreds of milliseconds. The effect on application performance is additive: countless packets are exchanged over the connection, and dropped packets must be resent, so small per-packet delays accumulate into seconds of sluggishness.
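To see why, consider the bandwidth-delay math for a single TCP flow: no matter how fat the pipe, the flow can move at most one window of data per round trip. The short Python sketch below uses illustrative, assumed numbers (a 64 KB window, 0.5 ms and 250 ms round trips) rather than measurements, but it shows how the same application can feel instant in the rack and sluggish across the globe.

    # Rough estimate of how round-trip latency limits a single TCP flow.
    # The window size and RTT values are illustrative assumptions, not measurements.

    def effective_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
        """A TCP flow can move at most one window of data per round trip."""
        return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

    # Same-rack exchange: ~0.5 ms RTT, 64 KB window
    print(effective_throughput_mbps(65_535, 0.5))   # ~1049 Mbps -- the link speed is the limit
    # Intercontinental exchange: ~250 ms RTT, 64 KB window
    print(effective_throughput_mbps(65_535, 250))   # ~2 Mbps -- latency is the limit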

Fixing application performance on the LAN

When local network performance problems arise with a new or recently updated application, investigate the application's configuration, system compatibility and software status; also review its installation and setup documentation. For example, if the application supports bandwidth throttling, check that bandwidth was not inadvertently throttled to the point of preventing communication.

Hardware compatibility influences LAN efficiency. If an application relies on jumbo frames for low latency, for example, check that a network interface card (NIC) adapter and driver that support jumbo frames are installed. In some cases, an update or patch reverses poor performance.
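As a quick sanity check, confirm the MTU actually in force on the server. On Linux, the configured MTU is exposed under /sys/class/net; the sketch below assumes an interface named eth0 and treats an MTU of 9000 or more as evidence that jumbo frames are enabled.

    # Quick jumbo-frame sanity check on Linux: the interface MTU must be raised
    # (commonly to 9000) or traffic silently falls back to standard 1500-byte frames.
    # The interface name "eth0" is an assumption -- substitute your own.
    from pathlib import Path

    def interface_mtu(iface: str = "eth0") -> int:
        return int(Path(f"/sys/class/net/{iface}/mtu").read_text().strip())

    mtu = interface_mtu("eth0")
    print(f"MTU is {mtu}: {'jumbo frames enabled' if mtu >= 9000 else 'standard frames only'}")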

With an application installed, patched and configured properly on a hardware-compatible system but still underperforming, consider other options. The problem is often an overconsolidated server, where too many applications vie for network access without enough NIC ports. Adding NIC ports provides server connectivity to additional workloads. NIC teaming delivers bandwidth aggregation to critical applications. Workload balancing moves demanding applications to underworked servers where less bandwidth contention boosts performance.

With NIC teaming, an application spans multiple NIC ports and aggregates the bandwidth across them. For example, an application can garner up to 2 Gbps of bandwidth over two teamed GigE NIC ports.
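On Linux, NIC teaming is typically implemented with the bonding driver, which reports the team's mode, member links and negotiated speeds under /proc/net/bonding. The sketch below assumes a team named bond0 and simply prints the lines an administrator usually checks first.

    # Minimal check of a Linux bonding (NIC teaming) interface.
    # Assumes the bonding driver is in use and the team is named "bond0".
    from pathlib import Path

    def bond_summary(bond: str = "bond0") -> None:
        status = Path(f"/proc/net/bonding/{bond}").read_text()
        for line in status.splitlines():
            if line.startswith(("Bonding Mode", "Slave Interface", "MII Status", "Speed")):
                print(line)

    bond_summary()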

Workload balancing, another option, migrates virtual machines between servers to optimize each host's application workload and bandwidth demands.
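The placement decision itself can be as simple as comparing network headroom across hosts. The toy example below uses hypothetical hosts and utilization figures to show the idea: move the bandwidth-hungry virtual machine to whichever host has the most unused NIC capacity.

    # Toy illustration of workload balancing by network headroom.
    # Host names and utilization figures are hypothetical.

    hosts = {
        "host-a": {"nic_capacity_mbps": 1000, "nic_used_mbps": 950},
        "host-b": {"nic_capacity_mbps": 1000, "nic_used_mbps": 300},
        "host-c": {"nic_capacity_mbps": 1000, "nic_used_mbps": 620},
    }

    def best_target(hosts: dict) -> str:
        # Pick the host with the most unused NIC bandwidth.
        return max(hosts, key=lambda h: hosts[h]["nic_capacity_mbps"] - hosts[h]["nic_used_mbps"])

    print("Migrate the bandwidth-hungry VM to", best_target(hosts))   # host-b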

Data centers can also replace a NIC port with one that is 10 GigE or greater, or add a separate NIC adapter and assign a troubled workload to the high-bandwidth NIC port. However, faster NICs are extremely expensive, require physical installation that can take a server offline, and usually impose collateral expenses in LAN switching infrastructure. For example, if you install a 10 GigE NIC on a server, you'll also need a switch with 10 GigE ports.

To diagnose LAN connectivity problems, compare current application performance levels to established benchmarks of the same application in its known-good state. If the application's performance has not substantially degraded, the trouble may be elsewhere, such as a switching problem.
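That comparison doesn't need elaborate tooling. The sketch below assumes you recorded a throughput baseline (for example, from iperf runs or the application's own metrics) while the application was healthy; the 20% degradation threshold is an arbitrary example, not a standard.

    baseline_mbps = 870.0   # recorded while the application was in a known-good state
    current_mbps = 610.0    # measured now, e.g. from iperf or application metrics

    degradation = (baseline_mbps - current_mbps) / baseline_mbps
    if degradation > 0.20:   # arbitrary example threshold
        print(f"Throughput is down {degradation:.0%} -- revisit the application, host and NIC first")
    else:
        print("Throughput is near baseline -- the trouble may be elsewhere, such as a switching problem")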

Improving WAN-based performance

Enterprises exercise detailed control over application performance within their LAN, but that control doesn't extend to the WAN, which comprises multiple service providers running high-end carrier backbone infrastructure. WAN carriers mitigate application latency by selecting shorter, more efficient routing paths, deploying low-latency switching and routing equipment and aggressively avoiding equipment downtime.

Increasing WAN bandwidth will enhance application performance, but it's a pricey and often unnecessary option.

In practice, application performance over the WAN is improved with technologies, collectively known as WAN accelerators, that shrink the data payload and make more efficient use of the available WAN bandwidth.

WAN acceleration products are often physical appliances like those in the Riverbed SteelHead family or the F5 BIG-IP Application Acceleration Manager. These dedicated appliances deploy at both ends of the WAN connection. Software-based versions of these tools also exist for virtualized servers, performing many of the tasks found in dedicated hardware.

Compression algorithms tailored to particular data types can dramatically improve application performance with no change in bandwidth. Compressed data can be exchanged using far less bandwidth than uncompressed data, in the same way that data compression increases storage capacity without additional disks.
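The effect is easy to demonstrate with a general-purpose compressor; purpose-built accelerators do better still on data types they understand. The sketch below compresses a deliberately repetitive sample payload with Python's built-in zlib to show how much less data would need to cross the WAN.

    # Illustration of why payload compression stretches WAN bandwidth:
    # repetitive, text-heavy data often shrinks dramatically before transmission.
    import zlib

    payload = b"timestamp,host,metric,value\n" * 10_000   # highly repetitive sample data
    compressed = zlib.compress(payload, 6)

    print(f"original:   {len(payload):>8} bytes")
    print(f"compressed: {len(compressed):>8} bytes "
          f"({len(compressed) / len(payload):.1%} of original)")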

If you depend on WAN connectivity, cache frequently used data locally. Microsoft Windows Server offers BranchCache, and various third-party tools can create remote caches as well. Caching frequently used data at each destination helps avoid bandwidth-hungry retransmissions: before transmitting a file, the sending side queries the destination cache; if the file is already cached, the destination simply pulls the data from its local cache, and if not, the file is sent. Advanced cache options can pin key files so that important data isn't flushed from the cache by newer content.
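The query-before-send exchange can be pictured as a content-hash lookup. The sketch below is only a model: the destination cache is a dictionary and the "protocol" is a function call, whereas BranchCache and WAN accelerators implement the same idea with their own wire protocols.

    # Model of the "ask before you send" cache check described above.
    import hashlib

    destination_cache = {}   # maps content hash -> bytes already held at the branch office

    def send_file(data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest in destination_cache:
            return f"cache hit for {digest[:12]} -- nothing sent over the WAN"
        destination_cache[digest] = data   # transmit once, then it is cached remotely
        return f"cache miss for {digest[:12]} -- {len(data)} bytes sent and cached"

    report = b"quarterly_report_v1" * 1000
    print(send_file(report))   # first send crosses the WAN
    print(send_file(report))   # second send is satisfied from the remote cache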

Performance is sometimes disrupted by frequent packet loss and resend events. Forward error correction adds redundant data to the stream so the receiving end can repair damaged or lost packets without requesting retransmission. Other approaches shrink the payload itself, stripping superfluous content from JavaScript or style sheets, or converting losslessly compressed images to lossy formats to reduce file sizes.
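A minimal illustration of forward error correction is a single XOR parity packet per group: the receiver can rebuild any one lost packet in the group from the survivors, with no retransmission. Real products use far more sophisticated coding; the sketch below only demonstrates the principle.

    # Minimal forward-error-correction sketch: one XOR parity packet lets the
    # receiver rebuild any single lost packet in the group without a resend.

    def xor_parity(packets: list[bytes]) -> bytes:
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, byte in enumerate(pkt):
                parity[i] ^= byte
        return bytes(parity)

    packets = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_parity(packets)

    # Packet 1 is lost in transit; recover it from the survivors plus the parity packet.
    recovered = xor_parity([packets[0], packets[2], parity])
    assert recovered == packets[1]
    print("recovered lost packet:", recovered)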

This was first published in May 2014
