PNNL seeks supercomputer to battle radioactive plume

Pacific Northwest National Laboratory is replacing a four-year-old supercomputer from HP. Its replacement will be used to tackle two of the most important challenges facing the planet: environmental remediation and sustainable energy.

The Department of Energy's (DOE) Pacific Northwest National Laboratory (PNNL) in Richland, Wash., is in the market for a new supercomputer. The lab is replacing the existing system in its Environmental Molecular Sciences Laboratory, a four-year-old, 11.8-teraflop supercomputer from Hewlett-Packard Co. (HP). PNNL has not yet chosen a vendor for the new supercomputer, but expects it to be five to 10 times faster than the existing model.

The new supercomputer will have to be fast to keep up with a spreading plume of radioactive toxins leaching toward the Columbia River. The DOE has tasked the lab (operated by Columbus, Ohio-based Battelle, a private contractor that runs some of the DOE's biggest labs) with neutralizing radioactive liquid waste left over from the Manhattan Project, which is sitting in leaky tanks at Hanford, Wash.

While the DOE works on brick-and-mortar solutions to contain the waste, scientists at PNNL are using supercomputers to bioengineer bacteria that would render the leaked radioactive materials insoluble, stopping the plume from traveling through the groundwater.

Aging supercomputer at PNNL

In 2003, PNNL purchased its existing 120-rack supercomputer from HP for around $22 million. The system is made up of 1,000 dual-Itanium HP rx2600 servers, which come in two configurations -- "fat" nodes and "thin" nodes, with fat nodes carrying more disk. The grid computing-based system runs a Linux variant provided by HP.

The servers are networked together with a high-performance fabric from U.K.-based Quadrics Ltd. According to Kevin Regimbal, manager of High Performance Computing and Network Services at PNNL, the fabric lets the processes running on individual compute nodes synchronize data, so they can divide the problem into small pieces and keep it in lockstep as they work their way through it.

"It's a purpose built network -- high bandwidth and low latency," Regimbal said. "It has two microseconds of latency. That's two or three orders of magnitude faster than an IP Ethernet network."

But four years later, the system is starting to show its age. The data center was designed using computational fluid dynamics modeling, but it still has hot spots that periodically cause problems. The system is sensitive to small changes, and routine infrastructure maintenance can overheat the servers.

"When servers start to overheat, they throttle themselves down, reduce their clock rate," Regimbal said. "That is very bad for high-performance computing because a job is only going to run as fast as its slowest node."

Also, the servers themselves are becoming harder to maintain. "There are 7,000 disk drives and they start failing faster and faster as the computer gets older," Regimbal said. "The cost of operating the computer becomes more expensive than replacing it. As part of the replacement, we'll end up with more, faster processors."

PNNL is not in the business of supercomputer design, per se. The DOE puts out a request for proposal (RFP) for the entire project, and it's up to the vendor to partner with other suppliers to deliver a whole package that meets PNNL's performance goals.

Every five years, PNNL rounds up a committee of involved scientists to outline what kinds of projects they plan to pursue. "We provide vendors as much info as we can about the science problems we are solving, and let them choose the best technologies to help us solve the problem," Regimbal said.

According to Regimbal, the competitive field is wide open right now, but he said the lab will work with a primary integrator such as IBM or HP. Proposals were due by March 2, 2007, and PNNL expects to award a contract this fall.

Technical specifications for the new machine include:

  • A minimum aggregate 64-bit IEEE floating-point speed of 50 teraflops.
  • Over 20 terabytes (TB) of shared, system-wide storage able to serve data at a rate of 1 Gbps.
  • Support for asynchronous I/O (illustrated in the sketch after this list).
  • Kerberos 5 authentication on all nodes.
  • Compilers for Fortran 77, Fortran 90 and Fortran 95, plus C and C++, with interlanguage interoperability and support for 64-bit integer, floating-point and pointer data.

The whole shebang has to fit within 4,092 square feet, including service clearance, and can't draw more than 1,200 kW at a 0.8 power factor. The floor load cannot exceed 350 lbs. per square foot, and cooling requirements cannot exceed 550 tons of air conditioning. PNNL is hoping vendors can shave 20% from its existing power, cooling and floor space requirements.
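The asynchronous I/O requirement boils down to letting a node start a disk read and keep calculating while the data arrives. The RFP doesn't name an interface; the sketch below uses the standard POSIX aio calls purely as an illustration of the pattern, with a hypothetical file name.

/* Sketch of asynchronous I/O: start a read, keep computing, collect the result.
 * POSIX aio is used only as an example interface. Build: cc async_io.c -lrt */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("input.dat", O_RDONLY);   /* hypothetical data file */
    if (fd < 0) { perror("open"); return 1; }

    static char buf[1 << 20];                /* 1 MB destination buffer */
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

    double work = 0.0;
    while (aio_error(&cb) == EINPROGRESS)    /* overlap computation with the read */
        work += 1.0;                         /* stand-in for real number crunching */

    ssize_t got = aio_return(&cb);           /* bytes actually read, or -1 on error */
    printf("read %zd bytes while doing %.0f units of work\n", got, work);

    close(fd);
    return 0;
}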

Supercomputers support superscience

Researchers at PNNL are using supercomputers to tackle two of the most important challenges facing the planet: environmental remediation and sustainable energy.

The lab sits on top of one of the world's biggest environmental nightmares. According to an Associated Press (AP) report from June 2006, 2,300 people have sued over health problems they believe were caused by exposure to contaminants at Hanford. The DOE's efforts to construct safer facilities to contain the waste have been notoriously hamstrung by mistakes from project leader Bechtel Corp.

But scientists at PNNL have ambitious projects underway to stop the spread of contaminants. Priority No. 1 is to neutralize the radioactive waste that is leaching through the groundwater around Hanford. To do that, Tjerk Straatsma, technical group leader at PNNL's W.R. Wiley Environmental Molecular Sciences Laboratory, is working with partners in the Shewanella Federation to bioengineer a bacterium that can defuse the contaminants.

Straatsma's work focuses on a specific bacterium that can neutralize toxic materials, such as uranium and chromate, through its respiratory cycle. The bacterium, Shewanella oneidensis, transfers electrons to contaminants as it respires -- reducing them to an insoluble form and stopping the plume from spreading.

Cleanup crews could inject contaminated areas with the microbes, reducing the toxins to an insoluble form and allowing the material to decay safely underground rather than migrating downstream into the Pacific Northwest's largest river system.

In another project, PNNL is studying how to efficiently transform biomass into ethanol to replace oil. As it stands, the process of turning biomass (corn, for example) from cellulose to sugars to a fermented and distilled product is costly in both time and energy, according to Straatsma.

One of the reasons biofuel production takes multiple steps is "product inhibition." As enzymes break down biomass to create a desired product, they also create byproducts that tend to stick around and gum up the works for the next step in the process. The more byproduct that accumulates, the more it inhibits the enzymes' action, forcing scientists to separate the products and restart the enzyme reactions. Straatsma is looking for new ways to address this by understanding how the processes work and developing new models based on that information.
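The article doesn't give the rate equations Straatsma's models use; a standard textbook way to capture product inhibition is a Michaelis-Menten rate whose effective constant grows as product accumulates. The toy integration below (all constants invented) just shows how conversion stalls as the byproduct builds up.

/* Toy illustration of product inhibition: rate = Vmax*S / (Km*(1 + P/Ki) + S).
 * The (1 + P/Ki) factor is the inhibition -- more product slows the enzyme.
 * All constants are made up for illustration; this is not PNNL's model. */
#include <stdio.h>

int main(void)
{
    double S = 100.0, P = 0.0;                    /* substrate and product, arbitrary units */
    const double Vmax = 1.0, Km = 10.0, Ki = 5.0; /* invented kinetic constants */
    const double dt = 0.1;                        /* time step */

    for (int step = 0; step <= 2000; step++) {
        double rate = Vmax * S / (Km * (1.0 + P / Ki) + S);
        S -= rate * dt;                           /* substrate consumed */
        P += rate * dt;                           /* product accumulates and inhibits */
        if (step % 400 == 0)
            printf("t=%6.1f  substrate=%6.2f  product=%6.2f  rate=%5.3f\n",
                   step * dt, S, P, rate);
    }
    return 0;
}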

Straatsma and others are looking for a "one pot" solution that can carry out all of these processes in one shot. Researchers are searching for microbes that can handle each step and are also willing to live with one another.

"What we're really after is making biology predictive. Today, biology is a descriptive science," Straatsma said. "We're dealing with living organisms that have thousands upon thousands of molecules in a single cell that all interact with each other. These networks of reactions are very complex and that's why we have to do data intensive computing. We can only do that if we have the computational technologies that generate the massive amounts of raw data for us."

Let us know what you think about the article; e-mail: Matt Stansberry, Site Editor.
