"I wake up every day and can't believe what is happening in hardware design. For someone in this industry, it is a very exciting time," Patterson said. "We are in a parallel revolution, ready or not, and it is the end of the way we built microprocessors for the past 40 years."Serial computing hits a brick wall
Parallel computing refers to the practice of processing program instructions by dividing them among multiple processors, with the objective of running a program in less time. With the advent of multicore chips, parallel processing can be performed on a single chip, across multiple cores.
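The idea of dividing a program's work among multiple cores can be sketched in a few lines of Python using the standard library's multiprocessing module. This is an illustrative example, not code from the article; the function names are hypothetical.

```python
# Minimal sketch: split a computation into chunks and farm them out
# to a pool of worker processes, one result per chunk.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process independently sums the squares of its slice.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Divide the input into roughly equal chunks, one per worker,
    # then combine the partial results into the final answer.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000))))
```

The speedup comes only when the chunks are large enough that the useful work outweighs the cost of starting processes and moving data between them, which is one reason writing effective parallel programs is harder than it looks.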
The hardware community is now largely in agreement that the computing industry has to switch to parallel computing, he said, pointing out that Intel Corp. already has its five-year roadmap planned with parallel everything.
Patterson said the serial computing era has hit a wall in terms of power and memory. "There really was no breakthrough. It was really just a retreat from what we were doing, because power issues gave us no choice," he said.
The memory wall can be blamed on excessive demand; Patterson said demand for memory has doubled every 18 months or so, with memory manufacturers scrambling to catch up.
Because the CPU industry has moved away from single-core processing, programmers who need more performance have to write programs that can take advantage of multiple cores through parallelism, Patterson said.
To push this effort forward, Berkeley researchers have met since February 2005 to discuss parallelism and have tried to learn from successes in high-performance computing and parallel embedded computing.
At the computing laboratory, researchers have focused on applications that should be parallelized; there is no need to run Microsoft Word on 100 cores, for instance, but other applications, such as gaming, would benefit, Patterson said.
But if parallel programs are written and executed properly, power issues and performance bottlenecks can be alleviated.
Trouble ahead for parallel computing?
Andrew S. Tanenbaum, a computer scientist who received the Usenix Lifetime Achievement Award at the conference June 26, said writing parallel applications is a major undertaking that can create software that is more problematic, not less.
Even without parallelism, software crashes have become the norm, and Reset buttons get pushed a lot in data centers today, Tanenbaum said. "If your car had major failures two or three times a month, you would find that unacceptable, but we accept it with our software," he said. "Sequential programming is really hard, and parallel programming is a step beyond that. I have a great fear that we will have all of these cores, and our software programs will be even worse."
In order for parallelism to succeed, it has to result in better productivity, efficiency, and accuracy, Patterson said. Unfortunately, most programmers aren't ready to produce proper parallel programs.
Companies like Microsoft and Intel, which helped fund and launch the parallel computing lab at Berkeley in March 2008, as well as AMD, IBM, Sun Microsystems, HP and others have thrown millions of dollars behind efforts to create successful parallel software programs and educate the next generation of programmers on parallel computing.