
The future of multi-core processors

Marc Tremblay, Sun Microsystems fellow, vice president and chief architect, made news last month when the company announced that he had been awarded his 100th technology patent since joining Sun in 1991. Thirty-six of Tremblay's patents relate to multi-core processors -- integrated circuits (ICs) on which two or more processor cores are placed for enhanced performance, reduced power consumption and more efficient simultaneous processing of multiple tasks -- and multithreading. According to Sun, the latest patents for multi-core processor technology have furthered the "throughput computing" model that is challenging the boundaries of Moore's law, the empirical observation that, at the present rate of technological development, the complexity of a chip, with respect to minimum component cost, doubles every two years. Tremblay was recently interviewed to discuss the benefits of multi-core technology, the challenges he and others faced along the way, and what the future holds for "the chip with two brains."

When did the idea of multi-core processing gain steam among Sun executives as a potential game-changing technology?
At Sun, we were first in 1995 when we did the Magic processor, which was for multimedia and Java applications, and we shipped in 2000. At the time you had to sell the idea to the board of directors, [but] for that it was easy because you didn't have the macho effect. We weren't after 3.8 GHz; we were after delivering something more useful. On the SPARC side it took longer, and even here [at Sun] it was somewhat controversial.

What were the major hurdles in getting multi-core processors into the mainstream market?
So far there's been some misunderstanding where people think they need multithreaded applications to take advantage of multi-core. That's not correct. It is basically an SMP [symmetric multiprocessing] system on a chip with much more scalability. In the past you had to split a process into two parts and then have those two processors communicate with each other. Now, by putting two processors on the same piece of silicon, it's like having two people in the same office. You can lean over the cubicle and say 'You take the odd ones, I'll take the even ones.'

How much of the push toward multi-core chips is because of its performance capabilities, and how much is because of its ability to reduce power consumption?
What people will look at is performance per watt. One is a numerator, one is a denominator. On the server side we've seen a tremendous increase in speed on commercial applications. On the watt side, we're able to do that with a fraction of the power used in the past. It's really the combination that's the most positive aspect.

With chip development on the verge of exponential growth, what does it mean for Moore's Law?
Moore's Law is alive and well. The number keeps doubling every two years or so. The main difference is how we're using silicon real estate. As opposed to trying to deliver the highest clock rate possible, now we're trying to fit in as many cores and threads as possible. It's a very different usage of silicon real estate.

Where do you see the multi-core chips fitting into the commercial landscape in the future?
It's going to be 100% of the market for laptops, desktops and servers. The only distinction is that right now, the industry is scrambling to multi-core, and everyone is using their existing core and putting [two chips] on a single piece of silicon. That's the easiest way to get there, but not the best way. The best way is to start from scratch and to architect a new core that is optimized for multithreading and is small enough, and low power enough, so you can put multiple ones on one chip.
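Tremblay's "you take the odd ones, I'll take the even ones" analogy can be illustrated with a short, hypothetical sketch: two threads (standing in for two cores) share one array in memory, with one thread handling even indices and the other odd indices. The `process` function here is a placeholder for whatever per-item work an application does; it is not from the interview.

```python
import threading

def process(item):
    # Placeholder for real per-item work; here we just square the value.
    return item * item

def worker(items, indices, results):
    # Each "core" walks only its assigned indices of the shared array.
    for i in indices:
        results[i] = process(items[i])

def run_two_way_split(items):
    """Split the work two ways: one thread takes even indices, one takes odd."""
    results = [None] * len(items)
    evens = range(0, len(items), 2)
    odds = range(1, len(items), 2)
    t1 = threading.Thread(target=worker, args=(items, evens, results))
    t2 = threading.Thread(target=worker, args=(items, odds, results))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results

print(run_two_way_split([1, 2, 3, 4, 5]))  # [1, 4, 9, 16, 25]
```

Because both threads write to disjoint slots of the same `results` list, no explicit coordination is needed beyond the final `join` -- the "lean over the cubicle" handoff in the analogy is just shared memory on one chip.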

Let us know what you think about the story; e-mail: Luke Meredith, News Writer
