A guide to application performance management
A comprehensive collection of articles, videos and more, hand-picked by our editors
Application performance management tools are used to help enterprises manage application activities within a cloud environment, including Platform as a Service (PaaS), Infrastructure as a Service (IaaS) and Software as a Service (SaaS). Tools can also identify the source of performance issues, and with advanced analytics, they can even flag some problems before they occur. Application performance management tools simplify the management and administration of complex IT environments by linking silos of applications and infrastructure together and presenting application activity across platforms, or within a given platform, in a unified view. This tip takes a closer look at these tools and illustrates the ways they can aid data center administrators.
There are several different dimensions to application performance. The data collected and the performance metrics assessed depend on the type of tool selected.
Transaction-oriented monitoring tools
These tools monitor and capture each transaction as it flows across all of the infrastructure tiers, typically using server-based collection agents. Transactions are threaded together topologically and followed as they traverse through distributed cloud architectures. There are numerous tools in this category, including Correlsense Inc.’s SharePath for the Data Center, OpTier CoreFirst, Nimsoft Unified Manager, Inetco Systems Ltd.’s INETCO Insight and Nastel Technologies Inc.’s TransactionWorks.
These tools are ideal for transaction-oriented hybrid cloud environments where transactions must be traced through multiple layers that may span outside the data center. However, these tools will not provide the in-depth application component diagnostics that are described below.
Administrators can quickly identify which infrastructure tier is the source of a bottleneck and deploy the appropriate resources, speeding problem resolution. Using this approach, cloud managers and administrators get an accurate view of transactional dependencies, transaction resource usage and transaction service levels, and can answer related questions such as:
- How long did a business transaction take to complete?
- How much time was spent in each layer or tier?
- Did the transaction fail?
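The three questions above can be sketched in code. This is a minimal illustration, not any vendor's data model: the `Span` class, tier names and timings below are all hypothetical stand-ins for the spans a server-based collection agent would capture and thread together.

```python
from dataclasses import dataclass

@dataclass
class Span:
    """One tier's portion of a business transaction (hypothetical model)."""
    tier: str        # e.g. "web", "app", "db"
    start_ms: float
    end_ms: float
    failed: bool = False

def summarize(trace):
    """Answer: how long did it take, where was the time spent, did it fail?"""
    total = max(s.end_ms for s in trace) - min(s.start_ms for s in trace)
    per_tier = {}
    for s in trace:
        per_tier[s.tier] = per_tier.get(s.tier, 0.0) + (s.end_ms - s.start_ms)
    failed = any(s.failed for s in trace)
    return total, per_tier, failed

# Hypothetical trace of one transaction flowing web -> app -> db
trace = [Span("web", 0, 120), Span("app", 10, 100), Span("db", 40, 90)]
total, per_tier, failed = summarize(trace)
# total elapsed: 120 ms; the db tier accounted for 50 ms; no failure
```

In a real tool the spans arrive from agents on each tier and are correlated by a transaction ID; the summary step is conceptually the same.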
Some tools in this category also provide user context by identifying the type of transaction (for example, a stock trade or an online retail purchase), who initiated it and when. These tools can answer questions such as:
- Did my order go through?
- Has my return been processed?
- Did my trade fail?
- What is the status of a healthcare claim?
‘Deep-dive’ application component monitoring tools
Deep-dive monitoring tools provide component-level application diagnostics, typically through bytecode instrumentation. The benefit of bytecode instrumentation lies in the depth and breadth of application-specific information it gathers, enabling application issues to be quickly pinpointed, diagnosed and resolved. The root cause of a performance issue is easily identified and, in some cases, traced to an individual line of code. Examples of "deep dive" application monitoring products include CA Technologies' Application Performance Management (Introscope) and IBM's Tivoli Composite Application Manager (ITCAM).
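Bytecode instrumentation itself happens inside a Java or .NET runtime, but the idea can be illustrated in a short Python sketch: a probe is wrapped around a method at load time, recording entry and exit without touching the method's source. Everything here is hypothetical; real products inject these probes at the bytecode level, not via decorators.

```python
import time
import functools

call_stats = {}  # method name -> (call count, cumulative seconds)

def instrument(fn):
    """Wrap a function with entry/exit timing probes (instrumentation analogy)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            count, total = call_stats.get(fn.__qualname__, (0, 0.0))
            call_stats[fn.__qualname__] = (count + 1, total + elapsed)
    return wrapper

@instrument
def place_order(order_id):  # hypothetical application method
    time.sleep(0.01)        # stand-in for real work
    return f"order {order_id} accepted"

place_order(42)
count, total = call_stats["place_order"]
# count is 1; total is roughly the 0.01 s the method spent working
```

The per-method counts and cumulative times accumulated this way are the raw material a deep-dive tool aggregates into component-level diagnostics.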
Deep-dive application performance management tools are best used for complex, mission-critical enterprise applications and are well suited to large enterprises with a sizable IT staff that has specialized application expertise and a range of analytics tools to process the collected data. The downside is that they can be complex to install, implement and use. These tools can gather so much data that correlating and analyzing the information becomes difficult, so small and midsized businesses may want to avoid them.
End user experience monitoring tools
These tools focus on customer experience by monitoring the response times of real users as they interact with Web-based applications, and should be employed by businesses that depend on user-driven Web-based transactions, such as e-commerce and online trading. IT issues can be linked to their effect on customer experience, helping administrators identify performance issues, predict customer behavior and assess the business effect of a particular issue.
These tools can answer these types of questions:
- What impact will an infrastructure change have on end user experience?
- Why isn’t my website meeting the needs of my customers?
- Why is a particular customer segment experiencing problems on my website?
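The segment question above comes down to summarizing raw real-user timings per segment and comparing them against a target. The sketch below assumes hypothetical response times in milliseconds and a made-up 500 ms service-level target; real tools collect these samples from browser beacons.

```python
def percentile(samples, pct):
    """Nearest-rank percentile; pct in (0, 100]."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[k]

by_segment = {  # hypothetical beacon data from a checkout page
    "mobile":  [180, 220, 250, 900, 310, 205, 1800],
    "desktop": [120, 130, 110, 140, 150, 125, 135],
}

for segment, times in by_segment.items():
    p95 = percentile(times, 95)
    slow = p95 > 500  # hypothetical service-level target of 500 ms
    # mobile's 95th percentile is dragged up by a few very slow loads,
    # while desktop stays comfortably under the target
```

Looking at a high percentile rather than the average is what surfaces a struggling segment: the mobile average looks acceptable, but its slowest users do not.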
Event processing/analytics tools
Application performance management tools, such as OpTier Business Events, Nastel AutoPilot and Netuitive, include a correlation/analytics engine that takes the raw data and event information gathered and turns it into useful information. Policies and thresholds are set that define when parameters go from “normal” to “abnormal.” These tools then apply these settings to collected performance data and display it on a dashboard.
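A threshold policy of the kind described can be sketched in a few lines. The metric names and warning/critical levels below are hypothetical placeholders, not defaults from any of the products named above.

```python
THRESHOLDS = {  # metric -> (warning level, critical level); hypothetical values
    "cpu_pct":          (70.0, 90.0),
    "response_time_ms": (300.0, 800.0),
}

def classify(metric, value):
    """Map a raw reading to the dashboard state the policy defines."""
    warn, crit = THRESHOLDS[metric]
    if value >= crit:
        return "critical"
    if value >= warn:
        return "warning"
    return "normal"

# classify("cpu_pct", 95.0) returns "critical";
# classify("response_time_ms", 400.0) returns "warning"
```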
Many tools use a topology view that highlights a failing component. The administrator can then drill down to obtain additional details about the problem (some will even have information to recommend a fix). Used in preproduction, the effects of a configuration change can be assessed prior to rollout. Applications can be simulated and tested before going live. In production, response times can be calculated and correlated to service-level agreement requirements, identifying any breaches. Data can be gathered for cost allocation and private cloud audits.
Some tools also include predictive analytics for proactive management, using dynamic thresholds that "learn." For example, these tools can establish a server's normal behavior from historical data, then send alerts when they detect behavior that deviates from it. Data is collected in real time from multiple sources and analyzed with algorithms and statistical regression techniques that model how these indicators interact under different circumstances (for example, time of day). Automatically generated models can track how transactions behave across multiple tiers and applications. The tools compare actual performance to expected performance, allowing them to identify anomalies before a problem occurs. The more historical data collected, the "smarter" the analytics become.
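The simplest form of a learned dynamic threshold derives the "normal" band from history instead of fixing it by hand. This is a deliberately minimal sketch using a mean-and-standard-deviation band over hypothetical hourly samples; commercial tools use far richer models that account for seasonality and cross-metric correlation.

```python
import statistics

def is_anomaly(history, value, n_sigma=3.0):
    """Flag value if it falls outside mean +/- n_sigma * stdev of history."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > n_sigma * stdev

# Hypothetical response times (ms) observed at this hour on past days
history = [210, 200, 215, 205, 198, 212, 207, 203]
is_anomaly(history, 209)   # within the learned band -> False
is_anomaly(history, 400)   # far outside it -> True
```

As more history accumulates, the band tightens around genuine normal behavior, which is the sense in which the analytics get "smarter" over time.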
Many of the application performance management tools described above include some level of analytics in their base product. Based on the type of data collected and correlated, those may be sufficient on their own. Large enterprises with complex, distributed hybrid cloud architectures may want a stand-alone product to complement existing monitoring tools.
An application performance management tool is a worthwhile investment for IT administrators running distributed cloud-based applications. Many options are available today to suit a range of budgets and business requirements. Comprehensive tools combine transaction, application and end user experience monitoring in a single unified dashboard view.
About the expert: Jane Clabby has been in the computer industry for 25 years. She worked at both Data General and EMC Corp. in a variety of positions, including product management, marketing research, business development and communications. In her five years at Clabby Analytics as a research analyst, she has covered storage, storage management, grid computing, cloud computing and application performance. Jane received her bachelor of arts from Williams College and a master’s in business administration from Boston University.