Big data and other information-intensive applications are pushing enterprises into high-performance computing, which relies on parallel processing.
Widespread for science and engineering applications, high-performance computing (HPC) systems also fit the needs of big data processing, cloud-based architectures, growing data volumes and new integrated system designs.
These compute workhorses may be foreign to your enterprise data center staff. Various HPC options suit different enterprise use cases, but all high-performance computing applications require special optimizations that aren't always common in a traditional data center.
Review these five ways to make HPC applications perform at the highest possible level.
1. Commit it to memory
High-performance systems rely on parallel processing systems, so information needs to move quickly in and out of memory. HPC systems tend to be I/O-intensive, so choosing the right memory configuration significantly impacts application performance. Companies cannot specify memory size based on simple "GB per core" rules; memory type is equally important.
HPC systems rely on dual in-line memory modules (DIMMs), which have parallel system designs.
There are three types of DIMMs available: UDIMMs, RDIMMs and LRDIMMs. Unbuffered DIMMs (UDIMMs) are fast and inexpensive, but unstable under larger processing loads. Registered DIMMs (RDIMMs) are stable, scalable and costly, and they put less of an electrical load on the memory controller; they are used in many traditional servers. Load-reduced DIMMs (LRDIMMs) feature a memory buffer instead of a register, which increases memory speed, reduces the load on the server's memory bus and lowers power consumption. LRDIMMs cost considerably more than RDIMMs, but are often found in high-performance computing builds.
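To check which module type a given Linux server actually has installed, the standard `dmidecode` utility reports per-DIMM details. This is a rough sketch that wraps it in Python; the helper names are illustrative, and `dmidecode` itself requires root privileges:

```python
import re
import subprocess

def parse_type_details(dmidecode_output):
    """Pull the 'Type Detail' lines (e.g. 'Synchronous Registered
    (Buffered)') out of dmidecode's memory report text."""
    return re.findall(r"Type Detail:\s*(.+)", dmidecode_output)

def installed_dimm_types():
    """Run `dmidecode --type memory` (Linux, needs root) and parse it."""
    out = subprocess.run(
        ["dmidecode", "--type", "memory"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_type_details(out)
```

The "Type Detail" field distinguishes registered (buffered) modules from unbuffered ones, which is enough to confirm whether a node matches the memory configuration you specified.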
2. The upgrade path
High-performance computing applications are growing rapidly, so future expansion of the system is of paramount concern.
One big difference between HPC system design and traditional data center infrastructure is the choice between off-the-shelf tools and custom systems. Off-the-shelf systems are expandable only to a small degree, limiting future growth. Custom builds have an open-ended design, so corporations can extend functions in the future. However, that extra functionality often comes at a price: custom systems cost more initially than those purchased off the shelf.
3. Take advantage of HPC
HPC application design differs from traditional design: developers break the information flow down into groups that can be processed in parallel.
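A minimal sketch of that decomposition, using only Python's standard library; the workload (a sum of squares) and the chunking strategy are illustrative, not anything prescribed by the article:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Each worker handles one independent slice of the data."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the input into one chunk per worker, process the chunks
    # in parallel, then combine the partial results.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # 332833500
```

The pattern, splitting data into independent groups, processing them concurrently and merging the results, is the core of most HPC application designs, regardless of language or framework.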
Who's converting to high-performance systems?
In May 2014, Technology Business Research estimated that U.S. organizations with at least 500 employees would spend, in total, $1.7 billion on hyperscale servers over the next 12 months.
The difference in performance can be dramatic. For instance, a multithreaded, vectorized application running on Intel 2.6 GHz processors -- each with eight cores -- delivers close to 166.4 gigaflops, very close to the system's maximum performance. If the same HPC application presented information in serial fashion and was not vectorized, it would deliver 2.6 gigaflops, or 1.6% of the possible performance. The difference multiplies as the number of processors increases. Data centers converting to high-performance computing need to fine-tune their software as much as the hardware.
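A back-of-the-envelope version of that arithmetic, assuming eight double-precision floating-point operations per cycle per core (an AVX-class figure; the per-cycle rate is an assumption, not stated above):

```python
# Theoretical peak for the example system above. The figure of
# 8 flops per cycle per core is an assumption (AVX-class SIMD).
GHZ = 2.6
CORES = 8
FLOPS_PER_CYCLE = 8

peak_gflops = GHZ * CORES * FLOPS_PER_CYCLE   # vectorized, all cores: 166.4
serial_gflops = GHZ * 1 * 1                   # one core, no vectorization: 2.6

print(round(serial_gflops / peak_gflops * 100, 1))  # 1.6 (percent of peak)
```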
4. Keeping the system in sync
An HPC system's configuration is consistent when the system first launches, but it quickly falls out of sync as the system ages.
Multiple systems administrators who work on the clusters all make different configuration selections. Changes are sometimes made but not documented, so administrators don't always know which applications are running. Components fail and are replaced, and all of these changes cause applications to fall out of sync.
When elements are inconsistent on a cluster, the HPC administrators could see sporadic anomalies and changes in performance that impact applications. Given the potential changes, IT shops need to implement policies that identify what applications their HPC systems are running, and find ways to get the configuration back in sync. These checks should be completed quarterly or no less than twice a year.
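One lightweight way to run such a check is to fingerprint key configuration files on each node and compare the digests. This is an illustrative sketch, not a prescribed tool; the function names and the idea of comparing per-node digest maps are assumptions:

```python
import hashlib
import pathlib

def fingerprint(paths):
    """Return a {path: sha256 digest} map for the given config files,
    skipping any path that does not exist on this node."""
    digests = {}
    for p in paths:
        path = pathlib.Path(p)
        if path.is_file():
            digests[p] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def drift(node_a, node_b):
    """Return the files whose digests differ between two nodes,
    including files present on only one of them."""
    return sorted(k for k in node_a.keys() | node_b.keys()
                  if node_a.get(k) != node_b.get(k))
```

Running `fingerprint` on each cluster node and diffing the results against a reference node gives a quick, scriptable version of the quarterly consistency check described above.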
5. Pay attention to energy
A report from the Partnership for Advanced Computing in Europe stated that, over the last 15 years, both energy costs and the density of high-performance computing systems rose sharply. It is now common for machines to consume over 30 kW per rack, and that number continues to rise. Because of the high density, efficient data center infrastructure and cooling systems are critical.
The U.S. Department of Energy's National Renewable Energy Laboratory is one HPC adopter at the forefront of such initiatives. In its data center, high-voltage (480 VAC) electricity is supplied directly to the racks rather than the typical 208 V step down, which saves on power electronics equipment, power conversions and losses. Energy-efficient pumps replace noisy, less-efficient fans.
About the author:
Paul Korzeniowski is a freelance writer who specializes in data center issues. He has been covering technology for two decades, is based in Sudbury, MA and can be reached at email@example.com.