Processors get all of the glamour, but it's often the memory that makes or breaks an enterprise data center server.
What specifications should I look at when evaluating the on-board memory in a server for my data center? How important is server memory voltage to performance?
Server memory voltages are falling, but the relationship between voltage and performance is a game of diminishing returns.
In early system designs, lowering the operating voltage improved performance by shrinking the voltage swing required for a logic gate transition, allowing the chip to switch states faster.
Modern standard-voltage (SV) memory modules based on Double Data Rate 3 (DDR3) SDRAM operate at 1.5 volts and support transfer rates up to 1,866 million transfers per second (MT/s). Low-voltage (LV) DDR3 modules operate at 1.35 volts, which actually reduces peak performance to about 1,600 MT/s. Lower voltage now means the chip operates more slowly.
Server buyers may still want the lower memory voltage because LV memory reduces power and cooling demands when power conservation is acceptable. The modules can boost operation to 1.5 volts when peak processing is required.
Transfer speeds can fall further when the server configuration increases electrical loading on the memory channel, e.g., more dual in-line memory modules (DIMMs) per channel. This is indirectly related to device voltage. For example, two 1.35-volt DDR3 DIMMs operate well at 1,600 MT/s, but three LV DDR3 DIMMs in the same channel must drop to a slower 1,066 MT/s transfer speed to accommodate the higher electrical loading imposed by the additional DIMM.
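The voltage and loading figures above can be summarized in a small lookup sketch. This is a hypothetical illustration using only the numbers quoted in this article; the function name and the exact speed steps are assumptions, and real platforms impose their own limits, so always verify against the server's documentation.

```python
# Hypothetical sketch of the DDR3 speed trade-offs described above.
# The MT/s figures come from the article; actual limits vary by platform.

def ddr3_transfer_rate(low_voltage: bool, dimms_per_channel: int) -> int:
    """Return an approximate peak transfer rate in MT/s."""
    if dimms_per_channel >= 3:
        # Extra channel loading from a third DIMM forces a slower speed.
        return 1066
    # Standard voltage (1.5 V) reaches 1,866 MT/s; low voltage (1.35 V)
    # drops to about 1,600 MT/s.
    return 1600 if low_voltage else 1866

# Standard-voltage module, lightly loaded channel:
print(ddr3_transfer_rate(low_voltage=False, dimms_per_channel=1))  # 1866
# Two low-voltage DIMMs per channel:
print(ddr3_transfer_rate(low_voltage=True, dimms_per_channel=2))   # 1600
# Three DIMMs per channel:
print(ddr3_transfer_rate(low_voltage=True, dimms_per_channel=3))   # 1066
```

The sketch makes the key point explicit: channel loading, not just voltage, sets the ceiling once a third DIMM is populated.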
Check new server documentation for the relationship between DIMM voltages and channel loading.
This was first published in February 2014