Bust latency with monitoring practices and tools

Reduce latency across your data center's infrastructure by exploring its causes, along with the best practices and tools you can implement to improve speed and responsiveness.

Latency originates from two main sources: a data center's network and its storage system. To reduce latency in your data center, consider its potential causes, then evaluate the various ways to troubleshoot it.

You can implement a variety of tools to help manage latency. Consider using latency monitoring software -- such as EdgeX Foundry or traceroute -- to pinpoint bottlenecks and keep tabs on network speeds, or adopt more latency-resistant technologies, including NVMe drives, persistent memory and SD-WAN.

Storage latency issues

Latency is the primary measure of overall performance for storage systems. Low latency means faster transactions, which in turn reduces storage costs for your business.

Storage latency comes from four main sources: storage controllers, storage software stacks, internal interconnects and external interconnects. You can reduce latency from each of these sources by selecting a fast CPU for your storage controller server, adopting storage software that prioritizes efficiency and CPU offload, implementing remote direct memory access (RDMA) networking and using NVMe drives.
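
As a rough illustration of how to quantify that latency, the following Python sketch times individual 4 KiB random reads against a test file and reports median and 99th-percentile latency. The file path, block size and sample count are placeholders, and reads served from the page cache will understate real device latency; purpose-built benchmarking tools such as fio give more rigorous numbers.

import os
import random
import statistics
import time

TEST_FILE = "/tmp/latency_test.bin"   # placeholder path
BLOCK_SIZE = 4096                     # 4 KiB random reads
SAMPLES = 1000

# Create a 64 MiB test file if it does not already exist.
if not os.path.exists(TEST_FILE):
    with open(TEST_FILE, "wb") as f:
        f.write(os.urandom(64 * 1024 * 1024))

file_size = os.path.getsize(TEST_FILE)
fd = os.open(TEST_FILE, os.O_RDONLY)

latencies_us = []
for _ in range(SAMPLES):
    offset = random.randrange(0, file_size - BLOCK_SIZE)
    start = time.perf_counter()
    os.pread(fd, BLOCK_SIZE, offset)              # one 4 KiB read
    latencies_us.append((time.perf_counter() - start) * 1_000_000)

os.close(fd)
latencies_us.sort()
print(f"median: {statistics.median(latencies_us):.1f} us")
print(f"p99:    {latencies_us[int(len(latencies_us) * 0.99)]:.1f} us")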

Persistent memory can also optimize storage and cut down on storage latency. It connects directly to the memory bus and offers two operating modes -- one that treats it as volatile memory, and another that uses it as a high-performance storage tier.
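
As a generic illustration of the storage-tier mode -- not a vendor-specific API -- the sketch below memory-maps a file and accesses it byte by byte. On a DAX-mounted filesystem backed by persistent memory, such loads and stores reach the media directly; on an ordinary disk the code still runs but goes through the page cache. The mount point is an assumption.

import mmap
import os

PMEM_FILE = "/mnt/pmem0/cache.bin"   # assumed DAX mount point
SIZE = 16 * 1024 * 1024              # 16 MiB region

# Create or resize the backing file, then map it into the process.
fd = os.open(PMEM_FILE, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as region:
    region[0:11] = b"hello pmem\n"   # byte-addressable write
    region.flush()                   # persist the dirty range
    print(region[0:11])

os.close(fd)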

Improve network latency

Network latency is the time between a request for data and the delivery of that data, and it affects an entire infrastructure. High network latency can increase load times and even render certain applications unusable. Network latency usually stems from poor cabling, routing or switching errors, storage inefficiencies or certain security systems.

To improve network latency, start by measuring packet delay so you know how long your network takes to fulfill a request. Tools such as ping, traceroute and MTR can help you with this. Next, identify potential bottlenecks in your network. Depending on the source of your network latency, you can take steps such as upgrading routers or adding hardware that accelerates traffic. Finally, introducing nearby edge servers can also reduce networking strain and improve latency: edge servers shorten the distance that a request packet must travel, which improves your system's response time.
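
Where ICMP-based tools such as ping are blocked, a quick script can approximate round-trip delay by timing TCP handshakes instead. The sketch below is one such approximation; the target host, port and sample count are placeholders.

import socket
import statistics
import time

HOST = "example.com"   # placeholder target
PORT = 443
SAMPLES = 10

rtts_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=2):
        pass                                    # handshake only, then close
    rtts_ms.append((time.perf_counter() - start) * 1000)
    time.sleep(0.2)                             # pace the probes

print(f"min/median/max: {min(rtts_ms):.1f} / "
      f"{statistics.median(rtts_ms):.1f} / {max(rtts_ms):.1f} ms")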

Latency in the cloud

Cloud latency can create significant issues for both organizations and end users. Distance is often the main cause of cloud latency, but infrastructure such as traditional WAN links can also create problems.

Implementing SD-WAN instead of conventional WAN networking can reduce cloud latency. Most SD-WAN offerings feature increased reliability, end-to-end security, extensibility and management automation. SD-WAN can also improve remote connections, though it requires virtual endpoint appliances.

Latency at the edge

Edge computing moves data and calculations out of the data center to edge locations. To minimize decision-to-action latency, some cloud providers have even moved their cloud environments to the edge. This process cuts out public commercial internet traffic to enable faster and more efficient delivery of services to customers.

However, due to its remote nature, the edge can present its own latency problems. Software that monitors edge devices should measure latency in real time. Edge device monitoring services such as AWS IoT services, EdgeX Foundry and FNT Command all include latency monitoring tools or features.
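
None of those products is shown here, but as a generic illustration of real-time latency measurement on an edge device, the sketch below times a TCP handshake to an upstream gateway at a fixed interval and emits each result as a JSON line that a log shipper or monitoring agent could forward. The gateway address and probe interval are assumptions.

import json
import socket
import time

GATEWAY = ("192.0.2.10", 8883)   # assumed upstream gateway address
INTERVAL_S = 30                  # probe every 30 seconds

while True:
    start = time.perf_counter()
    try:
        with socket.create_connection(GATEWAY, timeout=5):
            rtt_ms = (time.perf_counter() - start) * 1000
        record = {"ts": time.time(), "gateway_rtt_ms": round(rtt_ms, 1)}
    except OSError as exc:
        record = {"ts": time.time(), "gateway_rtt_ms": None, "error": str(exc)}
    print(json.dumps(record), flush=True)   # stdout for a log shipper to pick up
    time.sleep(INTERVAL_S)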

Monitoring distributed systems

When monitoring latency in large, complex systems, first ensure that the monitoring itself won't increase latency. Synthetic monitoring and log monitoring tools can often do more harm than good when it comes to latency issues. Metrics- and event-based monitoring tools cause less strain by comparison but can still increase latency.

You can keep latency monitoring tools from negatively affecting latency by evaluating and adjusting the sequence of scripts the tools run. This lets you scale back the frequency of latency testing and prevents the monitoring itself from creating the issues it is meant to catch.
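
One low-overhead approach, sketched below, is to sample: record latency for only a small fraction of calls so the measurement itself stays cheap. The sample rate and the record_latency sink are assumptions -- wire the sink into whatever metrics system you already run.

import functools
import random
import time

SAMPLE_RATE = 0.01   # measure roughly 1 in 100 calls

def record_latency(name: str, elapsed_ms: float) -> None:
    # Placeholder sink; replace with your metrics client.
    print(f"{name}: {elapsed_ms:.2f} ms")

def sampled_latency(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if random.random() >= SAMPLE_RATE:
            return func(*args, **kwargs)       # unmeasured fast path
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            record_latency(func.__name__,
                           (time.perf_counter() - start) * 1000)
    return wrapper

@sampled_latency
def handle_request(payload):
    time.sleep(0.005)          # stand-in for real work
    return payload

for i in range(500):
    handle_request(i)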
