Software should just run right out of the box. Right?
Yes. And Red Hat Enterprise Linux (RHEL) does just that, according to two Red Hat engineering experts.
But performance tuning, which they described as part science, part art, can boost performance by anywhere from 10% to 40%, or by as much as a factor of 10 in exceptional cases, they said.
John Shakshober, a Red Hat senior consulting engineer, said that proprietary software companies don't give users the latitude to make adjustments. But Linux, in contrast, enables users to read the tech notes and work with the community to adjust their systems, solve problems and hone their own expertise, he said.
"Everybody wants to find ways to save money," Shakshober said. "They are always looking to get more performance. And by exposing users to the RHEL 'tunables,' we empower them to do their own tuning…and demonstrate their increased value to their companies."
Shakshober and Larry Woodman, a Red Hat consulting engineer, lead four-day performance tuning courses for advanced users throughout the year and offer a briefer version for intermediate users annually at the Red Hat Summit.
Performance tuning gets the most out of RHEL
"Linux is an IT workhorse for high volume/high throughput loads," Shakshober said. "Depending on the haul and latency, the Summit tutorials explain how to tune the knobs without writing code or building your own kernel."
The tunables make RHEL scalable and flexible, enabling users to adjust the algorithms to run workloads on multi-core and quad-core processors that weren't even imagined five years ago, Shakshober said. Memory, disk space, everything has had to scale in the interim, he said.
The briefer Summit talks provide specific information about tuning NUMA (Non-Uniform Memory Access) and multi-core processors, memory management, address spaces and maps, file systems and disk I/O, monitoring tools and other topics.
The biggest source of performance degradation is depleted memory, Woodman said. Therefore, many of the tunables address this problem by enabling users to prioritize the process of memory reclamation, he said. With larger systems, there are fewer options for swapping memory so users are more likely to add memory instead. But, that, in turn, can create bottlenecks by overtaxing bandwidth, he said.
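The reclamation behavior Woodman describes is exposed through kernel tunables under `/proc/sys/vm`. A minimal sketch, assuming a stock RHEL install; the values below are illustrative, not recommendations:

```shell
# Illustrative only: bias the kernel toward reclaiming page cache
# before swapping out anonymous memory (lower = less eager to swap).
sysctl vm.swappiness=10

# Keep a larger reserve of free pages so reclamation kicks in earlier
# (value in KB; a sensible size depends on total RAM and workload).
sysctl vm.min_free_kbytes=65536

# Make a setting persistent across reboots.
echo "vm.swappiness = 10" >> /etc/sysctl.conf
```

Both settings are real `vm` tunables, but the right values are workload-specific, which is exactly why the knobs are left exposed.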
A key step in optimizing performance is to run affiliated applications and memory on the same NUMA nodes, keeping application traffic local and minimizing network bandwidth congestion, Woodman said.
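One common way to keep an application and its memory on the same NUMA node is the `numactl` utility shipped with RHEL; the node number and application name here are placeholders:

```shell
# Show the system's NUMA topology: nodes, their CPUs, and memory sizes.
numactl --hardware

# Run a hypothetical application with both its CPU scheduling and its
# memory allocations restricted to node 0, keeping traffic node-local.
numactl --cpunodebind=0 --membind=0 ./my_app
```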
Tuning falls into two categories: capacity tuning, which adjusts the resources to the requirements of a particular application, and performance tuning, which optimizes the system for speed, throughput or latency. The goal is to tweak the system so that the majority of all resources — CPUs, memory, disk space and network bandwidth — are used without being overextended, they said.
"We're trying to make RHEL easier to use out of the box, more automatic," Shakshober added. "But we'll never take the [adjustment] knobs away from the user."
Tweaking the OS
For example, the ktune service sets parameters for RHEL out of the box but still lets the user make fine adjustments, he said.
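Assuming the RHEL 5-era ktune service, which applies a prepackaged set of kernel tunables aimed at disk- and network-heavy servers, turning it on might look like:

```shell
# Install the ktune package, enable it at boot, and apply its profile
# of kernel settings now. Individual sysctl values can still be
# overridden by the administrator afterward.
yum install ktune
chkconfig ktune on
service ktune start
```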
Falling memory levels, for example, are a warning sign that sys admins should check CPU utilization and system operations, Shakshober added. Tools included with RHEL will create what-if scenarios to determine whether applications are using the optimal amount of resources, he said.
Among the top CPU tools:
- ps aux
- mpstat -P ALL
- sar -u
Among the top memory tools:
- ps aux
- sar -r -B -W
- the /proc file system
Among the top process tools:
- ps -o pmem
- strace and ltrace
The best disk tools are (and more are needed):
- iostat -x
- vmstat -D
- sar -d
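A quick session with a few of the tools above might look like this; the one-second intervals and sample counts are arbitrary:

```shell
# Per-CPU utilization, one-second samples, five reports.
mpstat -P ALL 1 5

# Memory, paging, and swap activity from sar over the same window.
sar -r -B -W 1 5

# Extended per-device disk statistics (await, %util, queue sizes).
iostat -x 1 5

# Running processes sorted by resident memory, largest first.
ps aux --sort=-rss | head
```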
OProfile, for example, works like a sampling counter: it tallies how often a given CPU event occurs over a period of time and builds a histogram that helps pinpoint the cause of a problem.
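A sketch of the classic OProfile workflow on RHEL of that era, using its `opcontrol` interface (the vmlinux path varies by kernel version, and the workload step is a placeholder):

```shell
# Point the profiler at the kernel image for symbol resolution.
opcontrol --vmlinux=/usr/lib/debug/lib/modules/$(uname -r)/vmlinux

# Start system-wide sample collection.
opcontrol --start

# ... run the workload under investigation ...

# Stop collection, then print a histogram of samples per symbol.
opcontrol --shutdown
opreport --symbols | head
```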
Phil Hopkins, a systems administrator with Rackspace Inc., a San Antonio, Texas-based Web hosting company, said Red Hat tools are a critical aid in helping customers get the most from servers, especially those running the Apache Web server and MySQL database.
By using tools to tweak the parameters of memory, disk or other processes, admins can gain 5% to 10% or more in performance, which is sometimes enough to delay the purchase of a new server, he said.
Hopkins finds the sar logs helpful in identifying what was occurring at the time of a problem. His other favorite tools include top and free -m for memory, ps -eLf for multi-threaded applications, and hdparm, smartd and smartctl for disk performance.
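Reading back sar's daily logs for the kind of post-mortem analysis Hopkins describes might look like this; file names follow sysstat's `/var/log/sa/saDD` convention, and `sa07` (the 7th of the month) is just an example:

```shell
# CPU utilization recorded throughout the day of interest.
sar -u -f /var/log/sa/sa07

# Memory usage over the same window.
sar -r -f /var/log/sa/sa07

# Current memory snapshot in megabytes, and a per-thread process view.
free -m
ps -eLf | head
```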
In response to a question about similar Unix tools, Hopkins said that, in fact, many Linux tools have been ported to Unix due to the platform similarities, not the reverse.
What's great about open source and Linux, Shakshober added, is that while Red Hat bundles very good tools with the OS, RHEL has the hooks that enable users to run HP OpenView, IBM's Tivoli or other monitoring tools if they want fancier graphical user interfaces or more specialized information.
"We have the right infrastructure to layer other monitoring tools on top," Shakshober said. "They are reading the same OS counters and displaying the data in various graphical user interfaces. We don't want to dictate."
Let us know what you think about the story; email Leah Rosin, Site Editor.