IT personnel always try to make the most of their computing resources, and memory compression -- fitting more data into less space -- is an appealing concept for data center operators. Compression efforts once focused on disk space, an area that now relies on deduplication to reduce redundant data in the storage system. Today, virtualization drives the use of compression in memory space for improved performance and increased consolidation ratios. Let's consider several questions regarding memory compression.
Q. What is memory compression, and why is it so interesting to the IT industry?
Generally speaking, all forms of data compression share the same goal: to reduce the amount of space that data requires by removing redundant or duplicate information. Data is typically compressed by processing it through a mathematical algorithm and decompressed by reversing that algorithm. Although compression has been around for decades, it has typically been avoided as a means of saving resources because of the processing overhead needed to compress or decompress the data. Extra processing work reduces system performance for other tasks -- particularly for the application at work in the first place -- which is undesirable for a computing community that seeks to wring every last bit of performance from its applications.
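As a minimal illustration of the compress-then-reverse idea (a sketch using Python's standard zlib module, not any hypervisor's in-kernel algorithm):

```python
import zlib

# Redundant data: the same phrase repeated many times.
original = b"virtual machine memory page " * 100

compressed = zlib.compress(original)    # run the compression algorithm
restored = zlib.decompress(compressed)  # reverse it to recover the data

assert restored == original             # lossless: every byte comes back
print(len(original), "->", len(compressed), "bytes")
```

Because the input is highly repetitive, the compressed form is a small fraction of the original size, and the assertion confirms that decompression is an exact inverse of compression.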
However, the advent of virtualization has renewed interest in compression, namely compressing memory space. Virtual machines (VMs) are basically files that reside in the server's memory, so it is often the server's available memory -- rather than the available CPU cycles -- that limits the number of VMs that can run on that server. To increase consolidation levels, you would need to add costly memory to the server, upgrade to servers with more memory, or shrink the memory needs of each VM. Compressing memory can do exactly that and potentially improve performance at the same time.
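The consolidation arithmetic is simple. The figures below are purely hypothetical -- a 64 GB host, 4 GB per VM, and an assumed 25% memory savings from compression -- but they show how shrinking each VM's footprint raises the consolidation ratio:

```python
host_memory_gb = 64     # hypothetical host memory
vm_footprint_gb = 4.0   # hypothetical per-VM memory need
savings = 0.25          # assume compression reclaims 25% of each VM's memory

vms_without = int(host_memory_gb // vm_footprint_gb)
vms_with = int(host_memory_gb // (vm_footprint_gb * (1 - savings)))
print(vms_without, vms_with)  # 16 vs. 21 VMs on the same hardware
```

The same server hosts five additional VMs without a single memory module being added -- assuming, of course, that the workloads actually compress that well.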
Q. Where is memory compression performed on the server? Is it a hardware or software feature?
Memory compression can be implemented as a software or hardware feature. At the software level, virtualization platforms like VMware's vSphere provide transparent memory compression, which is primarily designed to reduce use of the system's swap file. For example, rather than swapping memory pages to a relatively slow disk swap file, ESXi will compress the memory page and move it to another area in memory. This is an order of magnitude faster than disk access and takes far less space than the uncompressed memory page would otherwise require. The page can later be decompressed and returned to the virtual machine directly from memory.
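A conceptual sketch of that decision follows, using zlib as a stand-in for the hypervisor's algorithm and plain dictionaries for the compression cache and swap file. The function names, the data structures and the 50% cutoff here are illustrative only -- this mirrors the behavior described in this article, not ESXi internals:

```python
import zlib

PAGE_SIZE = 4096  # typical x86 page size

def handle_page(page: bytes, compression_cache: dict, swap_file: dict, key: int):
    """If the page compresses to half its size or less, keep it in an
    in-memory compression cache; otherwise fall back to the swap file."""
    compressed = zlib.compress(page)
    if len(compressed) <= PAGE_SIZE // 2:
        compression_cache[key] = compressed  # fast in-memory path
    else:
        swap_file[key] = page                # slow disk path

def recall_page(compression_cache: dict, swap_file: dict, key: int) -> bytes:
    if key in compression_cache:
        # Decompressing from memory is far faster than a disk read.
        return zlib.decompress(compression_cache[key])
    return swap_file[key]
```

In this sketch, a page of repeated bytes lands in the in-memory compression cache, while a page of random bytes fails the size test and falls back to the swap path.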
Memory compression can also be handled by the operating system and supported by hardware, such as the Active Memory Expansion feature of AIX 7.1 on Power7 systems, which compresses in-memory data to increase the effective memory capacity of each logical partition (LPAR) on the system. This approach is more general than VMware's, allowing complete on-the-fly compression of memory space on the Power7 platform.
Q. How much memory space savings can I expect with memory compression features? Is there a serious performance penalty for using memory compression?
Unfortunately, there is no single definitive answer for either of these questions. Let's start with the amount of compression. Compression works by removing redundant information from a given body of data, so the more redundancy in the data and the more aggressive the compression algorithm, the greater the level of compression that you can expect. Conversely, a body of data with no redundant characteristics may provide little -- if any -- compression. In addition, as the data changes over time and the amount of redundancy within that data changes, the amount of compression will also change.
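The effect of redundancy on compression ratio is easy to demonstrate. In this sketch (again using zlib as an arbitrary stand-in algorithm), a highly repetitive buffer shrinks dramatically while random bytes barely compress at all -- and may even grow slightly from the algorithm's own overhead:

```python
import os
import zlib

redundant = b"AB" * 2048          # 4 KB of highly repetitive data
incompressible = os.urandom(4096)  # 4 KB with essentially no redundancy

for label, data in (("redundant", redundant), ("random", incompressible)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{label}: compressed to {ratio:.0%} of original size")
```

The same algorithm, applied to the same amount of data, yields wildly different savings -- which is why no vendor can promise a fixed compression ratio for your workloads.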
For example, tools like VMware's transparent memory compression will only keep a compressed page in memory if it shrinks to no more than 50% of its uncompressed size; pages that cannot meet that threshold are swapped to disk as usual. Other tools and compression platforms may allow you to vary the threshold of compression for best results.
Compression will certainly impose some amount of processing overhead on the server, but the actual amount of overhead will depend on factors like the available CPU cycles, the workload's sensitivity to added latency and the aggressiveness of the compression algorithm. For example, IBM suggests that a workload requiring 3.75 CPUs may demand an additional 0.25 CPUs for Active Memory Expansion, which is roughly 6.7% more processing cycles. Since memory is often the limiting factor for server consolidation, additional processor cycles are usually available for compression support.
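The overhead figure in IBM's example is just the ratio of the extra cycles to the baseline requirement -- simple arithmetic, not an IBM sizing formula:

```python
baseline_cpus = 3.75     # workload's CPU requirement in IBM's example
compression_cpus = 0.25  # additional CPUs for Active Memory Expansion

overhead = compression_cpus / baseline_cpus
print(f"{overhead:.1%}")  # prints 6.7%
```

Whether that cost is acceptable depends on how close the server already runs to its CPU ceiling.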
The computing resources available on the server limit virtual machine consolidation, and memory is often the resource that is exhausted first. Memory compression is one emerging technology that can extend the effective amount of system memory and potentially improve performance. However, the benefits and processing overhead of memory compression are not certain, so it is important for IT administrators to perform extensive testing with the technology in a lab environment to gain experience and quantify the tradeoffs before deploying compression in a production environment.