This tip is the third in a series on measuring server performance. Read part one on best practices for server benchmark testing and part two on conducting benchmark tests.
There is no single benchmark tool that can meet all of a system administrator’s performance-measuring needs. Admins should turn to several tools to perform benchmark and metric testing and compare the results to verify accuracy. In this tip, we'll outline the role of stress testing in benchmark efforts and summarize several popular tools available to engineers.
Benchmark testing under stress
Stress testing, often referred to as load testing, allows engineers to test the stability of their environment without placing the server in an actual production environment.
In a real-world scenario, the engineer should test a server's performance based on its unique contents and the types of applications it runs. Benchmark and metric tests often revolve around simulating the applications that will run on the server in order to stress test the operating environment and the hardware beneath it. The goal is to simulate a real-world environment as closely as possible, which means looking at user loads, network traffic, process utilization, memory allocation and so on.
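One crude way to generate the CPU portion of such a load is to run busy-loop workers in parallel across cores. The sketch below is a hypothetical illustration, not a substitute for replaying real application traffic:

```python
import multiprocessing as mp
import time

def burn_cpu(seconds):
    """Busy-loop for `seconds` to place artificial load on one core."""
    end = time.monotonic() + seconds
    n = 0
    while time.monotonic() < end:
        n += 1  # meaningless work that keeps the core busy
    return n

def stress(workers=2, seconds=0.5):
    """Run `workers` busy-loop processes in parallel as a crude load test."""
    with mp.Pool(workers) as pool:
        pool.map(burn_cpu, [seconds] * workers)

if __name__ == "__main__":
    start = time.monotonic()
    stress(workers=2, seconds=0.5)
    print(f"load ran for {time.monotonic() - start:.1f}s")
```

A real stress test would pair load generation like this with the monitoring tools described below, so that the metrics are captured while the load is applied.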
Testing a machine in a simulated environment gives the engineering team the freedom to move resources around within the infrastructure. Oftentimes, a stress test is skipped and a machine is placed directly into a live environment for a "real load" test. Although this can work, it carries the risk of having to modify a production server with live data on it. It is far safer to make changes to a machine in an isolated state, where little depends on it.
Remember, these tests are run in an artificial setting, so the results will seldom exactly match the metrics of the server in a live environment. Engineers should never assume that their servers will run the same in a live setting as they do in a test environment. The key point to remember is this: Any variable added to the simulated environment will affect server performance. Whether the engineer adds 1 GB of RAM or a single additional user, the metrics can change.
Windows Performance Monitor
As mentioned in a previous tip, Performance Monitor (PerfMon) is a great benchmarking tool that is built into the Windows OS and graphically displays statistics for a desired set of performance parameters, called "counters." Admins can also update the available counters when they install services and add-ons to the server.
There are many counters to choose from, and the right ones depend on what you are trying to test. After you choose your counters, PerfMon creates a visual graph and updates it at regular intervals. The user can configure the interval, but the default is 1 second. Recording the information in a log file will prove useful, and you can also set PerfMon to send alert messages when certain events occur. For example, admins can configure PerfMon to send an email when a threshold is reached, such as the % Processor Time counter hitting 99%. Remember, PerfMon isn't just a physical hardware assessment tool; many engineers also use it in virtual environments.
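A threshold alert of this kind boils down to sample, compare, notify. Here is a minimal sketch of that loop in Python; the sampler and notifier are hypothetical stand-ins, not PerfMon's API:

```python
# Minimal threshold-alert loop: sample a counter, compare it to a
# threshold, and fire a notification when the threshold is crossed.
# sample_fn and notify_fn are hypothetical stand-ins, not PerfMon APIs.

def check_threshold(samples, threshold=99.0):
    """Return the indices of samples that breach the threshold."""
    return [i for i, value in enumerate(samples) if value >= threshold]

def monitor(sample_fn, notify_fn, threshold=99.0, iterations=5):
    """Poll sample_fn once per iteration; call notify_fn on each breach."""
    alerts = 0
    for _ in range(iterations):
        value = sample_fn()
        if value >= threshold:
            notify_fn(value)
            alerts += 1
    return alerts

if __name__ == "__main__":
    readings = iter([42.0, 87.5, 99.2, 100.0, 55.0])
    fired = monitor(lambda: next(readings), lambda v: print(f"ALERT: {v}%"))
    print(fired)  # two of the five readings breached the 99% threshold
```

In PerfMon itself, this logic is configured through the GUI or data collector sets rather than written by hand; the sketch only shows the underlying pattern.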
Intel’s Performance Counter Monitor
If the benchmark test is strictly focused on CPU performance, Intel Corp. has a great feature built directly into its processors that allows engineers on Windows and Linux systems to see how well their devices are operating.
According to Intel, the advanced feature set is available in the Intel Xeon 5500, 5600 and 7500 series and the Core i7 processor series. Intel states that its Performance Counter Monitor "provides sample C++ routines and utilities to estimate the internal resource utilization of the latest Intel Xeon and Core processors." This gives engineers insight into how their processors are operating, making it easier to decide whether to throttle or overclock a processor, or simply add processors to the environment as needed.
VMware VMmark
Both of these tools focus on performance monitoring of physical servers, but there will be situations where a virtualized server and its workloads need to be benchmarked as well.
VMmark 2.x helps engineers determine the performance of virtualized data center environments based on readings from "tiles," which organize your VMs into groups and flood host machines to see how well the workloads perform.
Once the test is finished, VMmark produces a benchmark score. First, it normalizes the workload data against its own reference system and computes an average score per tile; then it adds up the tile scores to produce the final score.
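That scoring arithmetic can be illustrated with made-up numbers; the reference values and workloads below are hypothetical, not VMmark's actual reference system:

```python
def tile_score(workload_scores, reference_scores):
    """Normalize each workload against the reference system, then average."""
    normalized = [w / r for w, r in zip(workload_scores, reference_scores)]
    return sum(normalized) / len(normalized)

def benchmark_score(tiles, reference_scores):
    """Sum the per-tile averages to get the final score."""
    return sum(tile_score(t, reference_scores) for t in tiles)

if __name__ == "__main__":
    reference = [100.0, 200.0, 50.0]   # hypothetical reference throughputs
    tiles = [
        [110.0, 190.0, 55.0],          # tile 1 workload throughputs
        [90.0, 210.0, 45.0],           # tile 2 workload throughputs
    ]
    print(round(benchmark_score(tiles, reference), 3))  # prints 2.0
```

Because each tile is normalized before the sum, adding tiles raises the score only as long as the host sustains each tile's workloads near reference levels.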
uptime software
uptime software Inc.'s server performance metric software is a widely used benchmarking tool that can graph and visualize all critical server resources within a data center. Using the software, an engineer can set metric tests based on CPU, memory, disk, processes, workload, network, user, service status and configuration data. Agent-based monitoring is also available, which greatly helps the ongoing process of metric gathering and benchmark testing over time. Much like on a physical box, these agents can be deployed on virtual machines to gauge their performance.
ABOUT THE AUTHOR: Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is the Director of Technology at World Wide Fittings Inc., a global manufacturing firm with locations in China, Europe and the United States.