Server and desktop CPUs have historically taken the brunt of the blame for poor application performance. With the arrival of each new application, performance would slow and CPUs would be overtaxed, resulting in end-user complaints. Application developers blamed hardware for the performance bottleneck while hardware vendors said the culprit was bloated code. In other words, finger pointing reigned supreme in the days of single-core, single-CPU computing.
Fast-forward to the present day. Processor manufacturers have created CPUs that offer exponentially more processing power than just a few years ago. Today's servers and workstations are powered by high-performance, multi-core CPUs that support advanced capabilities, such as multithreading and parallel processing. Some software developers have taken advantage of those new features, namely the creators of operating systems and virtualization platforms, with virtualization scoring the biggest benefit from multiple cores and multiple CPUs.
However, most traditional applications, especially those developed in-house, are not coded to use multiple cores or multithreading, which means the latest CPUs will not deliver a performance boost for those applications. That situation has changed the whole application performance landscape, and the finger now points in one direction -- at the application developers who do not take advantage of today's advanced CPUs.
Whatever the cause, IT administrators and managers are now forced to decide how applications should be sourced, developed and deployed to take advantage of the latest CPU technologies. Luckily, the answers are not limited to re-developing applications -- new operating systems and virtualization may offer an alternative to re-coding. The goal here is to improve application performance by using the features of new processors while maintaining efficient resource allocation and avoiding bloated system requirements.
Identify your multithreading options
It all comes down to multithreading -- creating solutions that can launch multiple threads (different parts of the application that the processor can work on simultaneously), which are executed across multiple cores. The concept itself is not new. Before multiple cores became widely available, higher-end servers would feature multiple CPUs, which allowed developers to create symmetric multiprocessing (SMP)-aware applications. In reality, there is not much difference between SMP-optimized applications and multi-core applications. In other words, if an application was coded to support SMP, it will most likely work fine with a multi-core CPU. That is especially true of operating systems and virtualization products (server and desktop).
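The core idea can be sketched in a few lines of Java. This is an illustrative example (the class and method names are made up for this sketch, not taken from any product): one large computation is split into two halves, each run on its own thread, which the operating system can schedule on separate cores when they are available.

```java
// Minimal multithreading sketch: two threads perform independent work
// concurrently; on a multi-core CPU the OS can schedule each thread on
// its own core. Class and method names here are illustrative.
public class TwoThreads {
    // Sum the integers in [from, to) serially.
    static long sumRange(long from, long to) {
        long s = 0;
        for (long i = from; i < to; i++) s += i;
        return s;
    }

    // Split the sum of 0..n-1 into two halves, each on its own thread.
    static long parallelSum(long n) {
        final long[] partial = new long[2];
        Thread t1 = new Thread(() -> partial[0] = sumRange(0, n / 2));
        Thread t2 = new Thread(() -> partial[1] = sumRange(n / 2, n));
        t1.start(); t2.start();   // both threads may now run in parallel
        try {
            t1.join(); t2.join(); // wait for both halves to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return partial[0] + partial[1];
    }

    public static void main(String[] args) {
        // Same result as a serial loop, but the work ran on two threads.
        System.out.println(parallelSum(1_000_000)); // 499999500000
    }
}
```

The same program runs unchanged on a single-core machine; the threads simply time-share one core instead of running in parallel, which is why SMP-aware code carries over to multi-core CPUs so cleanly.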
For example, Windows 7 treats multi-core processors as if they were SMP processors. In other words, the OS doesn't differentiate between a single CPU with multiple cores and multiple CPUs with single cores, and the ability to run multiple threads remains unaffected by the particular processor technology used. Other software that runs well on both SMP and multi-core servers includes database servers (such as Oracle, MySQL and DB2), SAP applications and applications created with modern development environments.
Applications that are multithreaded and processor-agnostic can change how servers are purchased and provisioned. In the past, organizations looking to maximize server performance would be required to buy large, expensive multi-CPU servers. The introduction of multi-core CPUs has changed the formula. In many cases, large, expensive servers can be replaced by server blades that incorporate a single, multi-core CPU per blade. Simply put, a single blade can replace a four-way (four-CPU) legacy server. However, server selection is not just about CPU choice -- traditional, multi-CPU servers also incorporate fault-tolerant technologies, such as multiple power supplies, lights-out management and several other capabilities that may be critical for business continuity plans.
The real decision comes down to how to improve application performance. For some homegrown applications, that may require re-coding or re-development work. It can be an expensive endeavor, but the development environment will determine much of the effort and expense. For example, Microsoft's .NET development environment simplifies multithreading via APIs included in Parallel LINQ (PLINQ) and the Task Parallel Library. In the world of Java, multithreading falls under the term "concurrency." The Java platform was designed from the ground up to support concurrent programming, both in the Java programming language and in the Java class libraries. Since version 5.0, the platform has also included high-level concurrency APIs, found in the java.util.concurrent packages.
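As a concrete sketch of the java.util.concurrent APIs mentioned above, an ExecutorService can fan independent tasks out across a thread pool sized to the machine's core count. The workload here (summing squares) is purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentSum {
    // Sum the squares of 1..n by splitting the range into one chunk
    // per available core and submitting each chunk as a separate task.
    static long sumOfSquares(int n) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        try {
            List<Future<Long>> futures = new ArrayList<>();
            int chunk = Math.max(1, n / cores);
            for (int start = 1; start <= n; start += chunk) {
                final int lo = start;
                final int hi = Math.min(n, start + chunk - 1);
                // Each Callable is an independent unit of work that the
                // pool runs on one of its worker threads.
                futures.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = lo; i <= hi; i++) s += (long) i * i;
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : futures) total += f.get(); // blocks per task
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown(); // release the pool's worker threads
        }
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(100)); // 338350
    }
}
```

The point of the high-level API is visible here: the developer describes tasks and collects results, while thread creation, scheduling and reuse are handled by the executor.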
Considerations for multithreading applications
Nevertheless, it does take some effort -- and expertise -- to correctly enable multithreading in an application, along with the right tooling. Some concurrency APIs ship with the development platform itself (PLINQ in .NET, the java.util.concurrent packages in the JDK), other tools are third-party products, and some operating systems include utilities that can help with processor affinity and related settings. Tools are normally used to identify thread imbalances, monitor core usage and track performance. Some basic testing can be accomplished with the Task Manager included in most Windows versions, which monitors CPU utilization in real time and gives a programmer a quick feel for how an application is using threading. Beyond that basic analysis, programmers can choose tools such as Intel's suite of high-performance computing software tools, including standalone products such as the Intel VTune Performance Analyzer and the Intel Thread Checker.
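For quick in-process checks before reaching for a dedicated profiler, the JDK's own management API can report per-thread CPU time, which is often enough to spot a grossly imbalanced thread. The two workloads below are deliberately unequal for illustration; the class name is made up for this sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadBalanceCheck {
    static volatile long sink; // keeps the JIT from discarding the loops

    // Runs two threads with deliberately unequal workloads; each thread
    // records its own CPU time (via ThreadMXBean) just before exiting,
    // so the caller can compare how evenly work was spread.
    static long[] measure() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] cpuNanos = new long[2];
        int[] iterations = {1_000_000, 100_000_000}; // imbalanced on purpose
        Thread[] threads = new Thread[2];
        for (int t = 0; t < 2; t++) {
            final int idx = t;
            threads[t] = new Thread(() -> {
                long s = 0;
                for (int i = 0; i < iterations[idx]; i++) s += i;
                sink = s;
                // Sample CPU time from inside the thread while it is still
                // alive; querying a dead thread is not reliable.
                cpuNanos[idx] = mx.getCurrentThreadCpuTime();
            });
            threads[t].start();
        }
        for (Thread th : threads) {
            try { th.join(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return cpuNanos;
    }

    public static void main(String[] args) {
        long[] cpu = measure();
        System.out.printf("thread 0: %.1f ms, thread 1: %.1f ms%n",
                cpu[0] / 1e6, cpu[1] / 1e6);
    }
}
```

A large gap between the two numbers is exactly the kind of thread imbalance that tools such as VTune surface with far more detail, but a check like this costs nothing to run during development.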
However, before writing a single line of new code, developers should determine a few things. First, is there a commercial application that may replace the custom developed application? Moving to a canned application can save a considerable amount of development time -- especially if the existing in-house design team is already busy with other projects or unfamiliar with the target programming language. However, canned applications can still be extremely costly in terms of maintenance and customization over time.
Also weigh the impact of any performance gains on the user community. For example, an application may offer far greater performance by re-coding for a multi-core/multithreaded environment, but if that application is lightly utilized to begin with, the improvement might never be realized, and that negates any value of the investment. On the other hand, the investment may be entirely worthwhile (maybe even critical) if you expect the application to experience significantly higher utilization in the near future.
Developers will find that re-coding an application is not the only path to increased processor utilization -- especially if the application runs on a server in the data center. For hosted applications, it may be better to improve server performance and utilization -- often by upgrading the operating systems and introducing server virtualization. Both tactics can exercise multiple cores and run as multithreaded applications. With server virtualization, applications can be dedicated to a single server instance, allowing maximum CPU throughput to be available, even under a single thread, to an application. What's more, multiple server instances can be set up for load balancing, which will help an application share cores across the virtual servers. Newer operating systems can solve application performance problems by using "processor affinity," where an application can be assigned to a particular processor. It's a quick way to improve performance simply by divvying up applications across cores.
While re-coding applications may provide the biggest benefit across the board, the benefit may not be worth the cost and effort after all. It will take careful planning, testing and hardware purchasing decisions to drive a re-coding project. In most cases, developers may discover that alternate solutions, such as virtualization, may be a more economical path to pursue, at least until an application outlives its usefulness.
This was first published in June 2010