Keep your server sharp and upgrade memory techniques
A comprehensive collection of articles, videos and more, hand-picked by our editors
There's more to server memory than data rates. It's also important to understand memory ranks and channels.
When selecting server memory, it is almost always best to choose registered DIMMs (RDIMMs) rather than unbuffered DIMMs (UDIMMs), because RDIMMs offer faster operation and higher capacities and use dual-rank rather than single-rank modules. Extremely large memory capacity requirements warrant an evaluation of load-reduced DIMMs (LRDIMMs), which perform better than quad-rank RDIMMs.
Memory controllers can be sensitive to differences in DIMMs between channels, so it is often best to fill memory channels using DIMMs with the same capacity, rank configuration and speed. This will usually allow the very best memory subsystem performance. If you must mix DIMM sizes between channels, try to use an identical rank configuration and use the same mix of DIMMs in every channel -- consistency counts. One or two DIMMs in a channel will typically provide optimum transfer speeds, so avoid the use of three DIMMs in a channel unless total capacity (rather than performance) is your top requirement.
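The population rules above can be sketched as a simple check. This is an illustrative sketch, not vendor tooling: the DIMM tuples of (capacity in GB, ranks, speed in MT/s) and both helper functions are hypothetical.

```python
# Illustrative sketch: verify that every memory channel holds an identical
# mix of DIMMs (same capacity, rank configuration and speed), and flag
# channels with three DIMMs, which may throttle transfer speed.

def channels_are_balanced(channels):
    """Return True if all channels hold the same sorted mix of DIMMs."""
    if not channels:
        return True
    reference = sorted(channels[0])
    return all(sorted(ch) == reference for ch in channels)

def channels_with_three_dimms(channels):
    """Return indexes of channels populated with three or more DIMMs."""
    return [i for i, ch in enumerate(channels) if len(ch) >= 3]

# Example: two channels, each with one 8 GB dual-rank 1,600 MT/s DIMM and
# one 4 GB single-rank 1,600 MT/s DIMM -- the same mix, so balanced.
config = [
    [(8, 2, 1600), (4, 1, 1600)],
    [(4, 1, 1600), (8, 2, 1600)],
]
print(channels_are_balanced(config))       # True
print(channels_with_three_dimms(config))   # []
```

The point of the sorted comparison is that order within a channel does not matter; only the mix of DIMMs in each channel must be consistent.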
Also be sure that the server's processor will adequately support the memory transfer speed. For example, if you want to support a workload's memory demands with a transfer speed of 2,133 megatransfers per second (MT/s), it is critical that the processor and the system bus architecture be suited to support those memory goals. Otherwise you will underutilize the memory and waste valuable capital.
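To see what a given transfer speed is worth, a back-of-the-envelope calculation helps: a DDR3/DDR4 channel has a 64-bit (8-byte) data bus, so theoretical peak bandwidth is the transfer rate times eight bytes per transfer, times the number of channels. The function below is a hypothetical sketch of that arithmetic.

```python
# Back-of-the-envelope sketch: theoretical peak bandwidth of a memory
# subsystem. Real sustained throughput will be lower due to refresh,
# bank conflicts and command overhead.

def peak_bandwidth_gbps(mt_per_s, channels=1, bus_bytes=8):
    """Peak memory bandwidth in GB/s (decimal) for a 64-bit-wide channel."""
    return mt_per_s * 1_000_000 * bus_bytes * channels / 1e9

# A 2,133 MT/s DIMM on a single channel:
print(round(peak_bandwidth_gbps(2133), 1))              # 17.1 GB/s
# The same DIMMs populated across four channels:
print(round(peak_bandwidth_gbps(2133, channels=4), 1))  # 68.3 GB/s
```

If the processor or system bus cannot move data at these rates, the faster memory simply idles, which is the underutilization the paragraph above warns about.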
If you plan to use UDIMMs on a server, consider the performance impact of memory density. More ranks on each DIMM can increase memory density and overall capacity. Single- and dual-rank UDIMMs provide essentially equal performance in a channel. However, more ranks can cause additional electrical loading in the server's memory channel, and this can reduce memory performance. Systems that require large amounts of memory should generally employ RDIMMs or load reduced DIMMs. Buffered memory devices are far less sensitive to signal loading than unbuffered memory devices.
Be aware that large quantities of any installed memory type may lower the transfer rate for the memory and produce a noticeable difference in data throughput -- and workload performance. For example, one or two registered DIMMs per channel can operate at up to 1,600 MT/s, while three RDIMMs per channel may throttle back to 1,333 MT/s. Workloads that require the very fastest memory performance may need to be deployed on servers with less memory per channel, while workloads that are less sensitive to memory performance may be better suited to servers with large quantities of memory installed.
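The throttling trade-off can be captured in a small lookup. This is an illustrative sketch: the table values mirror the example figures in the text, but the actual step-down points vary by platform, so check your server vendor's memory population guidelines.

```python
# Illustrative sketch: effective RDIMM transfer speed as a function of
# DIMMs populated per channel, using the example figures from the text
# (1-2 DIMMs per channel run at 1,600 MT/s; 3 throttle to 1,333 MT/s).

RDIMM_SPEED_BY_POPULATION = {1: 1600, 2: 1600, 3: 1333}  # MT/s

def effective_speed(dimms_per_channel):
    """Return the effective transfer speed, or None if unsupported."""
    return RDIMM_SPEED_BY_POPULATION.get(dimms_per_channel)

print(effective_speed(2))  # 1600
print(effective_speed(3))  # 1333
```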
Although DDR3 memory is more energy efficient than previous memory standards, there are options available to optimize energy efficiency further. First, consider using low-voltage memory if possible, which operates at 1.35 volts (known as DDR3L) rather than the 1.5 volts of standard DDR3. However, low-voltage memory experiences greater electrical loading issues, so low-voltage DIMMs are usually only available in single- and dual-rank DRAM chip configurations.
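A rough approximation shows why the lower voltage matters: dynamic power in CMOS circuits scales with the square of the supply voltage, so the drop from 1.5 V to 1.35 V trims dynamic power by roughly 19 percent. The sketch below assumes that simple quadratic model and ignores static and refresh power.

```python
# Rough approximation (assumes dynamic power ~ V^2; static and refresh
# power are ignored): fractional dynamic-power savings from running
# DDR3L at 1.35 V instead of standard DDR3 at 1.5 V.

def dynamic_power_savings(v_low, v_high):
    """Fractional reduction in dynamic power from a voltage drop."""
    return 1 - (v_low / v_high) ** 2

print(f"{dynamic_power_savings(1.35, 1.5):.0%}")  # 19%
```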
In addition, consider servers that take advantage of DDR3 memory power-saving modes, such as Clock Enable (CKE) power down and Self-Refresh. CKE power-down mode checks for pending memory operations; if there are none, the system puts the DIMM into a low-power state and only refreshes the memory. Self-Refresh mode puts the DIMM into an even lower power state when the CPU enters a C6 power-down state, and the DIMM performs its own refresh cycles. Both of these settings are configurable in the server's BIOS, but enable them with caution: the time needed to move the memory into and out of a power-saving mode can actually reduce memory performance.