Servers don't last forever, and they shouldn't. A server refresh should add performance, reliability, efficiency and important new features that are currently missing from your data center.
Aging servers become increasingly difficult to maintain and support as parts become scarce and service contracts grow prohibitively expensive. New server technologies increase efficiency, computing capacity and resilience with each generation. The best time to refresh hardware, however, is different for every business. And used servers can bring surprising value.
The anti-aging effect of virtualization
Although it won't directly affect a server's service life, virtualization can extend refresh cycles by making the server's computing resources available to more workloads and allowing workloads to be migrated on demand between hardware platforms. And unlike physical environments, virtualized data centers don't necessarily need hardware upgrades to support new or updated software.
Virtualization provides a variety of options to protect workloads. Snapshots can capture the precise state of each virtual machine to disk, allowing fast restoration and restarts.
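As a toy illustration of the snapshot idea — plain Python, not a hypervisor API; the VM name and fields are made up — capturing a machine's state and restoring it after an unwanted change looks like this:

```python
# Toy sketch of snapshot-and-restore. A real hypervisor persists disk and
# memory state; here a deep copy of a dict stands in for that capture.
import copy

vm = {"name": "web01", "memory_mb": 4096, "apps": ["nginx"]}

snapshot = copy.deepcopy(vm)      # capture the precise state
vm["apps"].append("bad-patch")    # a change that turns out to be unwanted

vm = copy.deepcopy(snapshot)      # fast restoration from the snapshot
print(vm["apps"])                 # → ['nginx']
```

The point is the pattern, not the mechanism: state captured before a change makes restarts and rollbacks fast and routine.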
VM clustering technology is also evolving, allowing multiple copies of the same VM to remain synchronized across physical systems. When the primary copy fails, the redundant copy steps in without disruption.
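That lockstep behavior can be sketched in miniature — an illustrative Python class, not any vendor's fault-tolerance API; all names here are assumptions:

```python
# Toy sketch of synchronized-copy failover: every write lands on both
# copies, so the replica can take over with identical state.
class MirroredVM:
    def __init__(self, state):
        self.primary = dict(state)
        self.replica = dict(state)   # kept synchronized with the primary
        self.active = "primary"

    def write(self, key, value):
        # Lockstep replication: apply each change to both copies.
        self.primary[key] = value
        self.replica[key] = value

    def fail_primary(self):
        # The redundant copy steps in; no state is lost.
        self.active = "replica"

    def read(self, key):
        current = self.primary if self.active == "primary" else self.replica
        return current[key]

vm = MirroredVM({"sessions": 0})
vm.write("sessions", 42)
vm.fail_primary()
print(vm.read("sessions"))   # → 42
```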
This combination of factors means that businesses don't need to replace servers every two or three years. If a server fails, IT administrators can simply restart the affected workloads on other available systems. Today's virtualized servers often see five years of service or more.
This general rule doesn't account for each particular server's role or the level of resilience afforded to the workloads it hosts. Most organizations still upgrade mission-critical servers on a regular basis. As servers age out, they can handle less important and less demanding workloads. Virtualization and resiliency features mean that any faults that occur can often be addressed effectively through workload balancing and hot swapping.
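The workload-balancing step can be sketched as a simple greedy placement — hypothetical VM names and CPU figures; real schedulers weigh many more factors:

```python
# Hedged sketch: after a host failure, restart each displaced VM on the
# surviving host with the most free capacity, largest workloads first.
def rebalance(displaced_vms, hosts):
    """displaced_vms: {vm: cpu_needed}; hosts: {host: free_cpu}."""
    placement = {}
    for vm, need in sorted(displaced_vms.items(), key=lambda x: -x[1]):
        host = max(hosts, key=hosts.get)          # most free capacity
        if hosts[host] < need:
            raise RuntimeError(f"no capacity left for {vm}")
        hosts[host] -= need
        placement[vm] = host
    return placement

placement = rebalance({"db": 8, "web": 4, "cache": 2},
                      {"host2": 10, "host3": 8})
print(placement)   # → {'db': 'host2', 'web': 'host3', 'cache': 'host3'}
```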
Keeping old servers going
Service and maintenance contracts are an important, yet costly, safety net for data center servers, making vendor parts and labor available when something goes wrong. The cost only increases as older parts become scarce and vendors shift support to newer system models.
For an organization that requires server maintenance protection for business continuity, the cost of support contracts can be a deciding factor in a refresh decision; after a few years, the cumulative cost of support will exceed the cost of a new system. In that case, it makes sense to spend the money on a newer system with a contracted period of support included.
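That break-even reasoning is easy to sketch with assumed figures — these are placeholders, not vendor pricing:

```python
# Illustrative break-even sketch: in which year does cumulative support
# spend on an aging server pass the cost of a new system that ships with
# support included? All dollar figures are assumptions.
def break_even_year(new_system_cost, annual_support_costs):
    cumulative = 0
    for year, cost in enumerate(annual_support_costs, start=1):
        cumulative += cost
        if cumulative > new_system_cost:
            return year
    return None  # support never exceeds the new-system cost in this window

# Assumed figures: support rates rise as parts grow scarce.
year = break_even_year(12000, [1500, 1800, 2400, 3200, 4500])
print(year)   # → 5
```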
Virtualization's decoupling of workloads and their underlying hardware has reduced dependence on support contracts. Features like snapshots, live migration and workload clustering make workload recovery significantly faster and more convenient across almost any available system.
New system acquisition choices should be driven by business value -- workload performance, energy efficiency -- rather than simply the affordability of service contracts. Investigate aftermarket IT service providers if you need extended maintenance periods or a vendor no longer supports machines your organization still needs -- and be sure they specialize in supporting your specific server models.
What considerations are the most compelling or beneficial in a server refresh evaluation?
Not every refresh requires a new server. Some features, such as graphics processing units (GPUs) and faster network interfaces, can be added via expansion devices. But it's important to weigh the cost of upgrades versus the cost of new system acquisitions. For example, the money for an enterprise-class GPU card added to an existing server might be better spent toward a new server with onboard, integrated graphics capabilities. In contrast, a network port expansion card can be added to a server for very little cost.
When planning a hardware refresh, look for computing resources and new server features that affect the server's ability to handle more workloads and service them faster.
Better performance results from faster memory and next-generation processors, along with servers containing more processor cores and larger memory capacity. Investigate new memory types, processor instruction set enhancements (such as the introduction of Intel VT for hypervisor support), GPUs for servers, faster network ports and so on. For example, a server slated to support scientific computing, visualization or virtual desktops can perform better with the addition of GPUs.
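As a small example of checking for those instruction set enhancements, the following parses an x86 /proc/cpuinfo-style flags line for the "vmx" (Intel VT-x) or "svm" (AMD-V) virtualization flags; the sample text is illustrative:

```python
# Hedged sketch: detect hardware virtualization extensions from the
# flags line of x86 /proc/cpuinfo output supplied as a string.
def has_virt_extensions(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

# Illustrative sample, not real output.
sample = "flags\t\t: fpu vme de pse tsc msr vmx sse2"
print(has_virt_extensions(sample))   # → True
```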
Better reliability results from features such as card- and-component-level hot swapping, memory sparing, memory mirroring, advanced error correction, data bus error correction code and other self-correcting capabilities within the processor cores.
New servers use less energy and run cooler than previous hardware generations. For example, processors that can drop into the deeper C6 power state use less energy at idle than those limited to the shallower C3 or C1 states (C0 is normal operation, with no savings). Better energy efficiency lowers data center operating costs for power and cooling while also reducing thermal stress on system components.
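The effect of deeper idle states can be illustrated with a toy weighted-average model — the wattages are assumed placeholders, not measurements of any real processor:

```python
# Illustrative model: average package power as a weighted sum of time
# spent in each idle state. Deeper C-states draw less; values assumed.
assumed_watts = {"C0": 65.0, "C1": 30.0, "C3": 12.0, "C6": 3.0}

def average_power(residency):
    """residency: fraction of time in each state (fractions sum to 1)."""
    return sum(assumed_watts[state] * frac
               for state, frac in residency.items())

shallow = average_power({"C0": 0.3, "C1": 0.7})            # no deep states
deep    = average_power({"C0": 0.3, "C3": 0.2, "C6": 0.5})  # C6 available
print(round(shallow, 1), round(deep, 1))   # → 40.5 23.4
```

Under these assumed numbers, the same idle time costs roughly half as much energy when the processor can reach C6 — the shape of the savings, if not the exact figures.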
Eventually, every server needs replacing, but don't assume that older servers belong in the trash; use them for a wide range of secondary functions.