Supercomputers come off the shelf, skills don't

As high performance computing moves into the commercial sector, data center pros struggle to keep up with the skill sets necessary to take advantage of the new HPC paradigm. Experts say invest in yourself and your IT staff's education.

No longer confined to academic or government labs, high performance computing (HPC) has filtered down into the commercial sector with concrete results. Whether or not your staff will be ready for HPC is up to you.

Experts say it's only a matter of time before supercomputing technology, specially configured clusters running deep computing applications, makes its way into the mainstream data center. The hardware is becoming commoditized, but it still requires a specialized skill set, which some experts say will likely delay adoption.

Regardless of the roadblocks, supercomputing technology, which used to be relegated to academic and scientific communities, is being used today in a number of commercial applications.

  • Computational fluid dynamics programs are used to design aerodynamic vehicles for fuel efficiency.
  • Many preliminary car crash analyses are done through supercomputing.
  • Some car manufacturers are using acoustic calculations to keep the ride quiet.
  • Companies like Procter & Gamble are using HPC to figure out how to fill a toothpaste tube.
  • Oil companies are mining seismic data for resource exploration.
  • Drug companies are designing drugs and modeling how they will interact with our bodies through computational chemistry.
  • Engineers and architects use HPC to design highways and airports and to predict traffic bottlenecks.
  • Financial institutions model complicated financial markets to determine how best to balance diversification.

"The automobile industry is a prime example of a commercial industry buying into HPC. They're reporting that they're cutting months out of the design cycle. You don't need an abacus to figure out that cutting production time will add up," said Charles King, principal analyst with Hayward, Calif.-based Pund-IT Research.

The Council on Competitiveness -- a nonprofit public policy organization made up of CEOs, university presidents and heads of labor groups -- recently conducted a study with IT research firm IDC on commercial supercomputing. The council is hoping to remove the stumbling blocks to commercial adoption of HPC.

According to Suzy Tichenor, vice president of the Council on Competitiveness, the U.S. commercial sector needs HPC proficiency to shorten production cycles, gain insight into business processes and get products to market.

"Market forces alone aren't going to solve this problem. As a country we're going to have to make HPC more accessible and promote the use of it," Tichenor said.

Why now?

Supercomputers aren't monolithic machines anymore. To quote Sun Microsystems' vision from a few years back: The network is the computer. Today, a supercomputer is the glue that holds commodity servers together and allows them to work as one incredibly powerful machine.
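
To make the one-machine idea concrete, here is a minimal sketch of how a cluster job typically spreads work across commodity nodes and combines the results over the network. It assumes an MPI installation with the mpi4py Python bindings, which the article does not specify; the process counts and the workload are purely illustrative.

```python
# Minimal sketch of "many servers acting as one machine": each cluster node
# runs a copy of this script, works on its own slice of the data, and the
# results are combined over the network. Assumes MPI plus the mpi4py bindings
# are installed on every node (an illustrative setup, not from the article).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's ID within the cluster job
size = comm.Get_size()      # total number of cooperating processes

# Each process sums its own slice of a large range, in parallel.
total_items = 10_000_000
chunk = total_items // size
start = rank * chunk
end = total_items if rank == size - 1 else start + chunk
partial = sum(range(start, end))

# The "glue": partial results from every process are reduced to one answer.
grand_total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes computed a combined total of {grand_total}")
```

Launched with something like mpiexec -n 8 python sum_demo.py (a hypothetical file name), the same script runs everywhere, and the cluster's scheduler decides which physical servers host each process.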

Bo Ewald, CEO of Bluffdale, Utah-based HPC provider Linux Networx, has watched the transformation of supercomputers since the 1970s. Ewald began his career as a research scientist at the supercomputing facility at Los Alamos National Laboratory before becoming CEO of supercomputing giant Cray Inc.

"In the early days of supercomputing you had one processor and you made it go as fast as it could," Ewald said. "Supercomputing has moved from a specialized proprietary design to hardware based on off-the-shelf components."

For example, Tim Burcham, senior director of informatics at South San Francisco-based diaDexus, Inc., sequences genes with a Linux Networx cluster that looks and acts like any other rack in his data center.

"We don't have a huge cluster," Burcham said. "It just sits in a rack next to normal file servers. It's got [standard] UPS, cooling and fire suppression."

The only difference between diaDexus' supercomputer and the rest of its racks is that the cluster requires its own special configuration and network. But not every data center pro has the skill sets to maintain that configuration.

What is holding HPC back?

With the hardware becoming commoditized, why aren't more companies using HPC? According to a report from IDC and the Council on Competitiveness, minimal support from software vendors and a lack of training are the culprits.

Independent software vendors (ISVs) find it hard to make money in supercomputing software for a number of reasons.

  • The customer base is small, so there's little return on investment.
  • Writing code to take advantage of the new clusters means scrapping the code written for the scale-up models of the past, an expensive proposition.
  • The success of open source programs in this market is disrupting traditional business models of how ISVs make money.
  • Clusters, multi-cores and virtualization are wreaking havoc on traditional pricing schemes.

    "I don't think that the ISV's in HPC will ever scale up to the size of a Microsoft or Oracle since there is not one "killer application" in HPC, but rather there are so many uniquely important applications," Ewald said.

One solution would be a partnership to mitigate the risk of developing new software. According to the Council on Competitiveness, 83% of ISVs it surveyed said that they would partner with other code developers, universities and IT pros to develop applications.

But therein lies the problem. What IT pros are going to step up to the plate?

"When we surveyed users, we found that there was a lack of talented people who really know how to use these systems," Tichenor said.

Getting the talent

If you've determined the business need, the first step toward implementing HPC in your data center is figuring out whether your organization has the culture to support it.

According to Tony Iams, analyst with Port Chester, N.Y.-based Ideas International, companies need a culture of self-development and support. If you have the skills and the culture, you could potentially establish a competitive advantage over other companies.

"It means IT people are going to have to do more integration, deploying and optimizing these systems," Iams said. "Data center staff will need to invest in their own skill sets, learn more about open source. Development and integration skills will be in high demand."

Burcham has a doctorate in chemistry and biophysics with an advanced degree in computer science, and has spent most of his career in fields that bridge the two disciplines. But he said HPC is open to anyone with a tech background who is able to learn and specialize.

Experts made the following suggestions for improving your HPC skills:

  • Find out which vendors are working with your competitors and engage them. See what sorts of applications are available.
  • Get familiar with batch queuing software such as OpenPBS or PBS Pro; a sample job submission sketch follows this list.
  • Invest in education on open source, grid architectures and clusters.
  • Research networking hardware and router configurations to get the highest performance possible out of your backbone.
  • Investigate custom analysis application building tools from companies like Summit, N.J.-based Markov Processes International.
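
For readers unfamiliar with batch queuing, the sketch below shows roughly what submitting work to an OpenPBS or PBS Pro queue looks like. The job name, resource requests and application command are hypothetical, not from the article, and on most clusters you would simply write the script and run qsub from a shell.

```python
# Minimal sketch of submitting a batch job to an OpenPBS/PBS Pro queue from
# Python. The resource values and the application command are illustrative
# placeholders, not taken from the article.
import subprocess
import tempfile

job_script = """#!/bin/sh
# Request 2 nodes with 4 processors each, a 30-minute limit, and merged output.
#PBS -N toy_hpc_job
#PBS -l nodes=2:ppn=4
#PBS -l walltime=00:30:00
#PBS -j oe

cd $PBS_O_WORKDIR
echo "Nodes allocated to this job:"
cat $PBS_NODEFILE
# Replace the lines above with the real cluster application, e.g. mpiexec ./solver
"""

# Write the script to a temporary file and hand it to the qsub command.
with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    script_path = f.name

result = subprocess.run(["qsub", script_path],
                        capture_output=True, text=True, check=True)
print("Submitted job:", result.stdout.strip())
```

The scheduler queues the job, finds free nodes that satisfy the resource request and reports a job ID back, which is the behavior Burcham alludes to below.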

    "We used off the shelf components and built the glue," Burcham said. "It wasn't hard, but it took a lot of thought. I have wished I had an [in-house] resource to draw on that was an expert in HPC. If you had someone familiar with OpenPBS or grid, they could have drawn our specifications up in an hour. A normal IT person could learn those skills in a short amount of time."

For more information:

Enabling next generation clusters for HPC and commercial data centers

Rethinking grid computing as a mainstream solution

The components are becoming affordable, but you can't buy a working supercomputer off the shelf. People with in-house skills are building these systems, or they're partnering with someone who can. And with investment, companies will be able to harness this computing phenomenon.

    "Supercomputing allows you to model things at the level of the molecule," Ewald said. "We have a computational time machine. You can slow things down that happen too fast to possibly understand or speed up events to simulate would take hundreds of thousands of years."

Let us know what you think about the story; e-mail: Matt Stansberry, News Editor
