Over the past few years, Intel and AMD have boosted processor performance and efficiency with dual- and quad-core designs. But memory technology has lagged behind, held back by factors such as cost and standardization, so these superfast processors end up spinning their wheels while the memory completes data requests, and experts say the problem could get worse before it gets better.
"With the transition to multicore processors, the rate of improvement in microprocessor speed has far outpaced the rate of improvement in memory speed," said Morgan Littlewood, the VP of marketing and business development at Violin Memory Inc. "But the same doesn't go for memory modules … which will continue to be an issue as more and more cores are added to processors."
Memory undercutting performance
The independent testing and research organization Mindcraft tested a complete Web server (hardware, operating system, server software, TCP/IP stack, application software and Web site content) under a simulated workload to measure how memory affects CPU performance. The results:
- A two-CPU Web server running Windows 2000 Server with 512 MB of base memory saw a 37% performance increase when memory was raised to 1 GB, a 76% increase at 2 GB, and a 90% increase at 4 GB.
- A two-CPU Web server running Sun Solaris with 1 GB of memory saw a 66% performance increase at 2 GB and an 82% increase at 4 GB.
- A two-CPU Web server running Linux with 512 MB of memory saw a 53% performance increase at 1 GB, a 102% increase at 2 GB and a 125% increase at 4 GB.
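Mindcraft's percentages can be read as throughput multipliers relative to each server's base configuration. A quick sketch (the percentages are the ones reported above; the tabulation is just for illustration):

```python
# Mindcraft-reported performance gains (percent) over each base configuration.
gains = {
    "Windows 2000 Server (base 512 MB)": {"1 GB": 37, "2 GB": 76, "4 GB": 90},
    "Sun Solaris (base 1 GB)": {"2 GB": 66, "4 GB": 82},
    "Linux (base 512 MB)": {"1 GB": 53, "2 GB": 102, "4 GB": 125},
}

for system, results in gains.items():
    for size, pct in results.items():
        multiplier = 1 + pct / 100  # throughput relative to base memory
        print(f"{system}: {size} -> {multiplier:.2f}x base throughput")
```

Note that on the Linux server, quadrupling memory from 512 MB to 2 GB roughly doubles throughput, without touching the CPUs at all.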
Memory detracting from energy efficiency
Memory can also eat into a server's energy-efficiency gains, and not all memory is created equal. CPUs are affected by the type of memory modules on which they depend, and the distinction matters because the power efficiencies built into a processor can be nullified by inefficient memory, according to a comparison of Intel and AMD servers by Neal Nelson & Associates.
"By themselves, the Intel processor chips may use less power," the comparison indicated, "but all current Intel Xeon servers use [fully buffered dual inline memory modules] (FB-DIMMs)." These modules appear to consume more power than the DDR2 memory modules in AMD-based servers. In many cases, the result is that "an Opteron-based server actually uses less total power than a Xeon-based server," Nelson determined.
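The order of magnitude is easy to see: each FB-DIMM carries an Advanced Memory Buffer that draws extra power per module. The per-module wattages below are illustrative assumptions, not measurements from the Nelson comparison:

```python
# Illustrative per-module power draw in watts; real figures vary by part.
FBDIMM_WATTS = 10.0  # assumed: DRAM plus Advanced Memory Buffer overhead
DDR2_WATTS = 5.0     # assumed: comparable registered DDR2 module

def memory_power(dimm_count: int, watts_per_dimm: float) -> float:
    """Total memory-subsystem power for a populated server."""
    return dimm_count * watts_per_dimm

# On an 8-DIMM server, the memory delta alone can exceed
# the few watts a more efficient CPU saves.
delta = memory_power(8, FBDIMM_WATTS) - memory_power(8, DDR2_WATTS)
print(delta)  # 40.0 watts
```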
The roadblocks: Price and architecture
But things aren't as simple as ramping up memory to keep pace with processor development, said Brett Williams, the senior segment marketing manager for DRAM at Boise, Idaho-based Micron Technology Inc. Server and processor vendors have been slow to adopt denser, faster memory because of cost: denser memory requires higher-cost materials and significant R&D.
"We have the ability to architect memory so that the CPU never has to wait, but there is an economic downfall," said Williams. "The performance gap could easily be resolved today, but with [cost] tradeoffs, and companies buying memory always have a price cap."
But price isn't the only roadblock. Other obstacles include server architecture and external requirements.
Server memory capacity depends on the server architecture and the number of DIMM slots in the server; maximum memory is the slot count multiplied by the highest-density DIMM available. The most widely used memory density on the market now is 1 GB, double that of two years ago. The next-generation 2 GB modules on the market won't be considered mainstream until about 2009 because of cost, and neither will 4 GB modules, said Mark Tekunoff, senior technology manager at Fountain Valley, Calif.-based Kingston Technology Company Inc.
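That capacity ceiling is simple arithmetic: slots times the densest supported DIMM. A minimal sketch (the eight-slot count is a hypothetical example, not a figure from the article):

```python
def max_memory_gb(dimm_slots: int, dimm_density_gb: int) -> int:
    """Upper bound on installable memory: every slot filled with the densest DIMM."""
    return dimm_slots * dimm_density_gb

# A hypothetical server with 8 DIMM slots:
print(max_memory_gb(8, 1))  # mainstream 1 GB DIMMs -> 8 GB
print(max_memory_gb(8, 2))  # next-generation 2 GB DIMMs -> 16 GB
```

This is why denser DIMMs, not just more slots, drive the maximum-memory figures vendors advertise.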
Tekunoff said the memory-processor performance gap also stems in part from the number of memory controller channels available in servers. Memory controllers manage the data going to and from the memory and can therefore affect performance. Most servers have dual-channel memory controller architectures, and for now, systems that use registered error-correcting code memory are dual channel. Chipset vendors have some three-channel memory controller products in the works, and FB-DIMM systems for Intel processors now have four-channel memory controllers available, he said.
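The effect of channel count on peak throughput is straightforward arithmetic: theoretical bandwidth is channels times transfer rate times bus width. A sketch with representative DDR2 numbers (illustrative figures, not from the article):

```python
def peak_bandwidth_gbps(channels: int, megatransfers_per_sec: int,
                        bus_width_bytes: int = 8) -> float:
    """Theoretical peak memory bandwidth in GB/s (decimal).

    channels: independent memory controller channels
    megatransfers_per_sec: e.g. 667 for DDR2-667
    bus_width_bytes: a standard 64-bit DIMM data bus is 8 bytes
    """
    return channels * megatransfers_per_sec * bus_width_bytes / 1000

print(peak_bandwidth_gbps(2, 667))  # dual-channel DDR2-667: ~10.7 GB/s
print(peak_bandwidth_gbps(4, 667))  # four-channel system: ~21.3 GB/s
```

Doubling the channel count doubles the theoretical ceiling, which is why chipset vendors are pushing beyond dual-channel designs as core counts climb.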
Another speed bump is the Joint Electron Device Engineering Council (JEDEC), which develops standards for the solid-state industry and sets the pace for memory manufacturers, according to Micron's Lauer.
"There have been multiple times when we have tried to do something revolutionary instead of moving in increments, but weren't able to move forward because of the JEDEC," Lauer said.
The future of memory
The latest generation of memory, DDR3 SDRAM, was introduced in 2007. Intel's Nehalem generation of processors, due out in the fourth quarter of 2008, will support only DDR3 memory, and AMD plans to build its new eight-core and 16-core architecture, code-named Sandtiger, with DDR3 support in 2009, according to the vendors' websites. Until then, observers like Tekunoff counsel data center managers to use the fastest memory supported by the systems being built to avoid performance and efficiency bottlenecks.
Let us know what you think about the story; email Bridget Botelho, News Writer.
Also, check out our news blog at serverspecs.blogs.techtarget.com.