Previous articles have introduced some of the techniques related to application acceleration, such as data reduction, compression, Quality of Service (QoS), latency mitigation and loss mitigation. This tip focuses on application acceleration in terms of data reduction.
A host of vendors (e.g., F5, Juniper, Cisco, Stratacache and Radware) are touting application acceleration services. Many of these services fall into the categories of compression, traffic shaping, and QoS. Gartner defines the market in categories of "application delivery controllers" (shapers, compressors) and "WAN optimization controllers" (data reduction).
How do these differ? QoS provides a mechanism for prioritizing delay-sensitive, real-time applications so that they are not affected by latency, jitter and packet loss. QoS does not, however, reduce the amount of traffic transmitted over a link; it only ensures that critical traffic takes priority over noncritical traffic. This can lead to noncritical traffic being starved out.
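The strict-priority behavior described above, including the starvation risk, can be illustrated with a minimal sketch. The packet tuples and priority values here are hypothetical, not from any vendor implementation:

```python
import heapq

# Minimal sketch of strict-priority queuing. Lower number = more critical
# (e.g., 0 = VoIP, 1 = interactive web, 2 = bulk transfer). Note that
# priority queuing never reduces traffic volume -- it only reorders
# transmission, and bulk traffic waits as long as anything critical remains.

packets = [
    (2, "bulk-1"), (0, "voip-1"), (2, "bulk-2"), (0, "voip-2"), (1, "web-1"),
]

queue = []
for prio, name in packets:
    heapq.heappush(queue, (prio, name))

# Drain the queue: critical packets always dequeue ahead of noncritical ones.
transmit_order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(transmit_order)  # voip first, bulk last
```

If critical packets arrived continuously, the bulk entries would never reach the front of the queue, which is the starvation effect noted above.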
Application acceleration can provide a tremendous benefit in terms of data reduction. Compression helps, but you reach a point where there is only so much compression you can achieve. The key advance in application acceleration is caching static content locally and emulating the server locally. This provides a virtual server environment while still delivering rapid response times for application calls. Positioning static content locally dramatically reduces the amount of data that must be transported across the wide area network (WAN).
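The limit on compression gains is easy to demonstrate: once data has been compressed, running it through the compressor again yields essentially nothing. This sketch uses Python's standard zlib library with made-up repetitive payload data:

```python
import zlib

# Hypothetical repetitive client-server payload -- the kind of data that
# compresses well on the first pass.
data = b"GET /app/data HTTP/1.1\r\nHost: example.internal\r\n" * 200

once = zlib.compress(data, 9)    # first pass: large reduction
twice = zlib.compress(once, 9)   # second pass: almost no further gain

print(len(data), len(once), len(twice))
```

The first pass shrinks the repetitive payload dramatically; the second pass cannot, because the compressed stream has little remaining redundancy. That is the "only so much compression" ceiling, and it is why caching, not more compression, is the next step in data reduction.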
How does this data reduction work in application acceleration (or, more specifically, WAN optimization)? Key to an application acceleration solution is the ability to cache the static content locally. This requires memory and disk space -- an appliance or a card in a router -- to replace the actual server. In most cases, the client-server transaction is interpreted locally and sent to the appropriate cache location for static content. This static content can be pre-positioned and updated on a regular basis.
In the event that the information the client requests from the server is not static content, the cache engine establishes a session with the server to gather the requested information. This strategy can greatly reduce the amount of client-server data that must be transmitted over the WAN.
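The decision logic described in the last two paragraphs can be sketched in a few lines. Everything here is illustrative: the names (`local_cache`, `fetch_from_origin`, `handle_request`) and the in-memory dictionary standing in for the appliance's disk store are assumptions, not any vendor's API:

```python
# Sketch of a cache engine's request handling: static content is served
# from the local store; anything else opens a session to the origin server
# across the WAN.

local_cache = {"/logo.png": b"<png bytes>", "/app.js": b"<js bytes>"}
wan_requests = []  # tracks which requests actually crossed the WAN

def fetch_from_origin(path):
    wan_requests.append(path)       # this request consumes WAN bandwidth
    return b"dynamic:" + path.encode()

def handle_request(path):
    if path in local_cache:         # static: answered locally, no WAN trip
        return local_cache[path]
    return fetch_from_origin(path)  # dynamic: go to the real server

for p in ["/logo.png", "/app.js", "/api/orders", "/logo.png"]:
    handle_request(p)

print(wan_requests)  # only the dynamic request crossed the WAN
```

Of four client requests, only one traverses the WAN; the rest are satisfied from the pre-positioned local store, which is where the data reduction comes from.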
This is a high-level overview of how it works. The real intelligence is the ability of the central data store (cache engines) to emulate a file server with the Common Internet File System (CIFS) and the Network File System (NFS) file services. This allows the clients to "believe" they are actually communicating with a server.
Application acceleration through data reduction may not be the right solution for every environment. Storing static content locally requires some form of memory and storage, which usually translates into an appliance at each site. Depending on the number of sites and the traffic patterns unique to your organization, compression, QoS and packet/traffic shaping may be more cost effective. If, however, you have a large number of client-server transactions traversing your WAN and consuming its bandwidth, it may be more cost effective to move toward locally cached content. A complete analysis of current traffic patterns should be performed before evaluating the many vendors that play in this space.
About the author:
Robbie Harrell (CCIE#3873) is the National Practice Lead for Advanced Infrastructure Solutions for SBC Communications. He has more than 10 years of experience providing strategic, business and technical consulting services. Robbie lives in Atlanta and is a graduate of Clemson University. His background includes positions as a principal architect at International Network Services, Lucent, Frontway and Callisma.