Some x86 server applications perform far better on multicore processors, but most applications have not been parallelized (rewritten so that multiple CPU cores work simultaneously to execute a single program) and thus don't see the full benefit of the latest chips.
To that end, Stanford University and some of the largest players in the computing industry recently announced the creation of the Pervasive Parallelism Lab (PPL) to devise a way for software developers to easily parallelize applications for multicore processing. With support from Sun Microsystems Inc., Advanced Micro Devices Inc. (AMD), NVIDIA, IBM, Hewlett-Packard Co. and Intel Corp, Stanford computer scientists and electrical engineers will conduct the research and development.
With a budget of $6 million over the next three years, the PPL will research and develop a top-to-bottom parallel computing system, from hardware to user-friendly programming languages, that enables developers to exploit parallelism automatically.
Adding to the effort, a few months ago the University of California at Berkeley and the University of Illinois at Urbana-Champaign both received grants from Microsoft and Intel to address parallelism. Over the next five years, Intel and Microsoft expect to invest a combined $20 million in the two university centers, with each center receiving half.

Multithreading for multiple cores
Until a couple of years ago, multicore processors were too expensive for anything but supercomputers, where parallel processing (or multithreading) is the norm. Because the scope was so limited, few software programmers learned how to design software that uses parallelism to exploit multiple cores, according to Stanford's PPL research director, Kunle Olukotun, a professor of electrical engineering and computer science.
Prior to the advent of multicore processors, improving application performance was simply a matter of upgrading to the latest processor. Chip manufacturers increased single-core processor speeds from 1 GHz to 2 GHz to 3 GHz, and software developers didn't have to change a thing. But as single-core processors were pushed ever harder, heat and power consumption climbed and overall performance hit a plateau. In turn, processor manufacturers began designing their wares with multiple cores, said Olukotun.
"As long as we've used single-core processors, software developers have never had to change anything. In all spaces, from servers to desktops, that has come to an end," Olukotun said.
To a certain extent, many applications already take advantage of today's dual- and quad-core processors. Today's server operating systems, for instance, are generally more responsive because they are designed to run multiple tasks independently on separate cores, Olukotun said. Similarly, many workloads, such as virtualization, Java, and large databases, have also been parallelized to perform better on multicore processors.
Of course, there are some applications that don't need to leverage multiple processing cores, and there is no need to parallelize those, according to Joe Clabby at Yarmouth, Maine-based Clabby Analytics. "It all depends on the application itself," Clabby said. "What good would it do to run Microsoft Word across multiple cores? It performs just fine as a single-threaded application."
But a large swath of applications doesn't fall into either category, and these applications have yet to be optimized for multicore processors. Olukotun's team has targeted precisely these applications for its research.

Making parallelism pervasive
The notion of parallelism is that if a program can be divided into independent workloads, each workload can be executed on a separate core and the program as a whole runs faster, Olukotun said.
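The idea can be sketched in a few lines of Python. This is a generic illustration, not code from the PPL: a large workload is split into chunks, and a pool of worker processes handles the chunks independently, potentially on separate cores. The function names (`process_chunk`, `parallel_sum`) are illustrative.

```python
# Sketch of data parallelism: divide a workload into independent chunks
# and let a pool of workers process them on separate cores.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Stand-in for real per-chunk work; here each worker just sums its numbers.
    return sum(chunk)

def parallel_sum(data, n_workers=4):
    # Divide the data into one chunk per worker.
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        # Each chunk is processed independently of the others.
        partial_sums = pool.map(process_chunk, chunks)
    # Combine the independent partial results.
    return sum(partial_sums)

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))
```

The result is identical to a sequential `sum`, but the per-chunk work runs concurrently, which is exactly the property that makes a program benefit from extra cores.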
That might sound simple enough, but getting separate cores to work in harmony without overlapping tasks is the central challenge of parallelism, he said. When it comes to application parallelization, memory is also a challenge. "It is a big issue," Olukotun said. "Both jobs running in parallel want to access the same memory, and sharing it correctly is where it gets tricky."
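The shared-memory problem Olukotun describes can be shown with a classic example (again a generic Python sketch, not PPL code): two workers updating the same counter must coordinate through a lock, or updates are silently lost.

```python
# Two threads update one shared counter; a lock makes the updates safe.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, the read-modify-write of `counter` can
        # interleave with the other thread's, dropping increments.
        with lock:
            counter += 1

t1 = threading.Thread(target=increment, args=(100_000,))
t2 = threading.Thread(target=increment, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)
```

With the lock the final count is exactly 200,000; remove it and the result becomes unpredictable. Getting this coordination right, at scale and without destroying performance, is the "tricky" part Olukotun refers to.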
"We are working with application developers to provide solutions for their applications. After we figure out how to parallelize specific applications, we will work on doing it in a more general context," he said.
Olukotun acknowledged that it's more difficult to divide some workloads than others, however. "There will be certain areas where it is tougher to parallelize: in programs where there is no easy way to separate the independent tasks, which is common in scripting languages like Ruby and Python. People are now looking at how to parallelize these types of applications," Olukotun said.
Olukotun and his team hope to create a way for developers to parallelize programs easily. Game programmers who already understand graphics rendering and physics would be able to implement their algorithms in accessible "domain-specific" languages. At deeper, more fundamental levels of software, the system would do all the work for them to optimize their code for parallel processing, Olukotun said.
"My hope is that our efforts will pave the way for programmers to create software for applications such as artificial intelligence and robotics, business data analysis, virtual worlds and gaming," Olukotun said. "We have a long history of working on parallel processing, and we always thought that the future would be power processing, but now the challenge is, how do we develop apps to take advantage of [multicore processors]."
Olukotun, who headed the Stanford Hydra research project leading to the development of the multithreading technology that is now Sun's Niagara processor, said he hopes to see parallelism become pervasive within the next decade.
"It is really late in the game, because we should have started coming out with this research years ago. But no one thought processors would go to multicore so quickly," Olukotun said.