SPEC is a non-profit organization, so there is no profit agenda. The organization concentrates on one thing only: establishing, maintaining and endorsing standardized benchmarks to measure the performance of the newest generation of high-performance computers.
SPEC makes its highly specific system test results available free of charge on its website, organized by benchmark category. These results document the performance of particular system, software or hardware configurations, measured by organizations that have purchased and executed one of SPEC's benchmark suites.
Any individual or group licensed to use SPEC software may submit results to SPEC. There is a $500 publication fee for each result submitted unless you are a member or associate, in which case submission is free. In practice, vendors are the most likely submitters, hoping to promote their products. However, SPEC rigorously reviews results before posting them, so you can be reasonably confident that the results are unbiased and reliable.
The results section is strictly performance based. There is no direct information on ROI, energy consumption, footprint, service, support, or compatibility. But if you are an IT manager worth your salt, you could definitely find a way to mathematically and systematically solve your server purchasing hardships using the dutifully prepared reports on the SPEC website coupled with good old-fashioned research.
Make sure to read the FAQ to understand where the results come from and what they mean. The results are easily and scientifically reproducible if you have the same system specifications. However, most of us who live in the real world know that sometimes your network isn't perfect. There is always that rogue Windows 95 client attached to the dot matrix printer in the warehouse, probably looking for more network viruses to infect it so it can eke out its paltry existence and feel as important as possible until its 250 MB hard drive fails.
SPEC benchmark suites
Again, if you live in the real world, and your business owner or client thinks it preposterous not to have an answer within 72 hours, purchasing SPEC benchmarks for your organization is a real-world time saver. For anywhere between $50 and $2,000, depending on the benchmark, an organization can purchase a benchmark suite that contains the executable code for testing a system configuration.
The process is simple:
- Go to the SPEC website and locate the benchmark for the application area you would like to test,
- purchase the SPEC benchmark, and
- load the suite into your system and run it.
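What "load the suite and run it" looks like varies by benchmark; each suite documents its own harness. As a sketch only, the CPU suite is driven by a bundled `runspec` script, and a run might look roughly like this (the config file name here is hypothetical, and describes your compilers and hardware):

```shell
# Sketch only -- assumes the SPEC CPU suite; other suites ship their own tools.
./install.sh                           # unpack and install the purchased suite
. ./shrc                               # set up the SPEC environment in this shell
runspec --config=my-server.cfg int     # build and run the integer workloads
```

The harness builds the workloads with your compiler settings, runs them the required number of times, and writes a formatted report you can review before deciding whether to submit it.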
After you review your results, you can then submit them to be reviewed by SPEC, which will publish them if they meet its standards of testing rigor and neutrality.
Benchmark suites are designed to provide an accurate measure of performance based on real-world applications running on modern computing systems. While they are meant to be an appropriate gauge of real-world performance, they should not be the sole determining factor for purchasing, since factors such as service, support, compatibility and pricing also come into play.
Benchmark suites are designed to remain viable for about five years. During this time it is common to see upgrades that add capabilities or update documentation, as well as patches that fix minor errors. Using this system of checks and balances, SPEC standardizes benchmarks for portability and objectivity. When you purchase a benchmark, your results will show you firsthand how your systems stack up against other available technology.
SPEC benchmarks are not one-size-fits-all
The best way to determine system performance is to measure based on the actual applications you use on a daily basis -- but this is difficult and complex. The second-best approach is to run the appropriate SPEC benchmarks yourself on targeted system configurations to determine the results for applications and system setups that reflect what you do in the real world. You can use the website results to gauge how different system configurations perform in setups similar to those you already have or might be considering for purchase.
Companies of any size can benefit from using SPEC benchmarks or the results posted on the website, as long as they are using the types of systems and applications for which the benchmarks are designed. Problems occur when users extrapolate results beyond the parameters of the benchmark. Before applying results to a purchasing decision, buyers should review what the benchmark does and the systems tested, and make sure they correspond to what they are doing or want to do.
The results of the benchmark are very realistic, and a user should be able to reproduce them on a system with the same specifications. One should not, however, assume scaling based on the results -- the results are valid for the system(s) and workloads as tested, and should not be extrapolated beyond that.
The difference between SPEC benchmarks and others out there comes from consistency, repeatability and realism. SPEC benchmarks are developed by consensus, under a system of checks and balances that helps ensure no one type of vendor is favored over another. Results undergo rigorous peer review before being posted on the SPEC website. The suites fall into the following categories:
- CPU, which measures both floating-point-intensive and integer-intensive workloads, reporting both speed (time to complete a single task) and throughput (rate of many concurrent tasks). The CPU benchmark is broken into almost 30 individual benchmark results that target specific usages, from Perl and XML processing to artificial intelligence and quantum chemistry.
- The Graphics/Applications benchmarks measure graphics at the OpenGL interface level, and include benchmarks designed for specific applications such as 3ds Max and Maya.
- The High Performance Computing benchmark measures the performance of high-end computing systems running industrial-style applications and is especially suited for evaluating the performance of parallel (several independent processors in one machine) and distributed computer architectures (several computers sharing the processes of one program).
- The Java Client/Server benchmark measures performance of Java Enterprise Application Servers and the Java Virtual Machine.
- The Mail Servers benchmark measures a system's ability to act as a mail server handling client requests via the POP3 and SMTP protocols.
- The Network File System benchmark was designed to evaluate the speed and request-handling capabilities of network file servers.
- The Web Servers benchmark emulates users sending browser requests over broadband Internet connections to a web server, in both HTTP and HTTPS.
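Each category reports results in its own units, but the CPU suite's headline numbers, for example, are geometric means of per-test ratios against a fixed reference machine: each test's ratio says how many times faster your system ran it than the reference did. A minimal sketch of that calculation follows; the test names and timings below are illustrative stand-ins, not real SPEC reference values:

```python
from math import prod

def spec_ratio(reference_seconds: float, measured_seconds: float) -> float:
    """Per-test ratio: how many times faster than the reference machine."""
    return reference_seconds / measured_seconds

def composite_score(ratios: list[float]) -> float:
    """Composite score: the geometric mean of the per-test ratios."""
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical run times in seconds for three made-up tests.
reference = {"test_a": 9770, "test_b": 9650, "test_c": 8050}
measured = {"test_a": 500, "test_b": 610, "test_c": 420}

ratios = [spec_ratio(reference[t], measured[t]) for t in reference]
print(round(composite_score(ratios), 2))  # roughly 18.1 for these numbers
```

The geometric mean is used rather than a simple average so that no single test dominates the composite: doubling performance on any one test moves the score by the same factor.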
This was first published in April 2007