For most IT shops, a server farm upgrade or replacement is inevitable. But that refresh process differs depending on whether the installed base is a blade server configuration or traditional rack servers. Here's a look at the key design differences between a blade server vs. rack server, as well as how those differences will impact refresh decisions in the data center.
Design differences: blade server vs. rack server
To understand refresh options, begin by looking at the server design and certification process. Blades are tightly configured blocks of hardware designed in concert to physically integrate with each other. All of the elements, such as power supplies, switch modules, motherboards and adapters, are proprietary to the specific blade family and vendors test them extensively before shipping.
There is a downside, however, to this tight certification and testing process. It's time-consuming, which contributes to longer design cycles for blade servers, and limits upgrade choices.
There is strong vendor lock-in for blades. Between warranty controls, nonstandard packaging and signatures on drives, adding commercial off-the-shelf (COTS) components to a blade server environment is essentially a no-no. COTS CPUs, drives, memory and other components can't be plugged into blade servers without voiding the warranty.
Another downside is that vendors may not keep any specific blade family alive for the 8 to 12 years that the chassis, infrastructure and IT processes would mandate. In other words, an upgrade may not be an option. A simple example makes the case: blades usually have switched backplanes that run at a fixed link speed, while Ethernet speeds are currently doubling roughly every 18 months, so a blade server might not be able to take advantage of faster Ethernet for years.
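The backplane-lag argument above can be sketched with a quick back-of-envelope calculation. The figures here are illustrative assumptions, not vendor data: a chassis bought with a 10 Gbps switched backplane, and top-of-market Ethernet speed doubling every 18 months from the same starting point.

```python
def market_speed_gbps(years, start=10.0, doubling_months=18):
    """Projected top-of-market Ethernet speed after `years`,
    assuming a fixed doubling period (illustrative only)."""
    return start * 2 ** (years * 12 / doubling_months)

# A blade backplane stays at its as-shipped speed while the market moves on.
for year in (0, 3, 6, 9):
    print(f"year {year}: market ~{market_speed_gbps(year):,.0f} Gbps "
          f"vs. fixed 10 Gbps backplane")
```

Even if the real doubling period is longer, the shape of the curve is the point: a fixed-speed backplane falls further behind every refresh cycle the chassis outlives.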
Rack servers, in general, are much easier to upgrade and fall into two categories. First, there are vendor-proprietary configurations, where rack servers include only certified parts. While this protects against the use of cowboy parts -- parts from unsanctioned distributors -- the reality of today's market is that most COTS parts, if bought from distribution or trusted suppliers, work as specified and cost much less than the proprietary, locked-in components.
The second category is the totally open rack server, typically bought from a more aggressively priced vendor or even assembled from parts. These servers can use inexpensive COTS parts, typically making an upgrade a strong possibility.
Other high-density options
There are several high-density packaging schemes on the market from vendors such as Dell and Supermicro, ranging from four-node clusters with shared power supplies to cabinets holding 10 or 12 COTS motherboards side by side. These typically don't have integrated drives or networking and are much more open with regard to memory and other components. Such packaging sits between rack and blade servers. In 2001, early in the history of blades, two types emerged: one, which the large traditional vendors call blade servers, had all the switches and drives built in; the other was simply a set of servers, typically standard COTS motherboards mounted in small packages.
Refresh decisions for a blade server vs. rack server
With both blade and rack servers, IT teams can boost workload performance by quadrupling available memory and keeping more of the workload in that memory. An in-memory database is the ultimate example, with boosts of as much as 100x in performance.
Growing the memory size usually puts a strain on the storage subsystem, since the now more efficient system must be fed data at a higher rate. This is the time to replace old, slow hard drives with solid-state drives (SSDs). The much faster SSDs will often wipe away a bottleneck that many admins don't even recognize.
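A rough sense of why the SSD swap matters can be had from ballpark device figures. The numbers below are generic assumptions, not benchmarks: a 7,200 rpm hard drive sustains on the order of 150 random IOPS, while a mainstream SATA SSD sustains tens of thousands.

```python
# Back-of-envelope: random I/O capability of one SSD vs. spinning disks.
# Figures are typical ballpark assumptions, not measured benchmarks.
hdd_iops = 150      # ~7,200 rpm hard drive, random I/O
ssd_iops = 50_000   # mainstream SATA SSD, random I/O

print(f"one SSD ~= {ssd_iops // hdd_iops} HDDs for random I/O")
```

A larger in-memory working set means more random reads to load data and more write-back traffic, which is exactly the pattern where hard drives fall over.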
Performed together, these two upgrades may give the server farm a couple of extra years of life. That's a good economic proposition, since the upgrade kit normally costs much less than new servers.
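The economic argument above comes down to cost per year of service gained. A minimal sketch, using hypothetical placeholder prices (substitute your own quotes):

```python
# Cost per year of service for upgrade-vs-replace. All dollar figures
# are hypothetical placeholders, not market prices.
def annual_cost(capital_cost, useful_years):
    """Straight-line capital cost per year of service gained."""
    return capital_cost / useful_years

upgrade = annual_cost(capital_cost=1500, useful_years=2)  # memory + SSD kit, ~2 extra years
replace = annual_cost(capital_cost=8000, useful_years=4)  # new rack server, ~4-year life

print(f"upgrade: ${upgrade:,.0f}/yr, replace: ${replace:,.0f}/yr")
```

The comparison only holds while the upgraded hardware remains fast enough for the workload; once it can't keep up, the per-year figure for the upgrade path stops being the whole story.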
Such an upgrade may not be economically feasible in a blade server, however, if the supplier follows the Gillette principle of razors and razor blades. The required components may not have reached the market anyway. This may force blade users to hold on to existing configurations for two or even four years past the typical three- to four-year hardware refresh point, yielding progressively less efficient servers relative to the rest of the market.
Network upgrades have a different impact than server upgrades. Organizations need a configuration-wide change to achieve the full benefit: leaving even a few nodes at the old, slow network speed risks slowing the whole workflow.
This is not a huge issue for rack servers: it's easy to add a new network interface card when the memory and drives are upgraded, and the fabric is usually unchanged, so the process involves swapping gear and plugging it back in. In a blade server, this can become a forklift effort; the fabric is often dated, and any switch modules must be changed out as well.
Refresh ultimately involves replacing the servers, but, again, there are different processes for a blade server vs. rack server. Here, the question is what to do next. Blade servers need to have lifetimes of 8 to 12 years to be economically viable, and that isn't today's market trend. Useful working lives are 3 to 4 years for rack servers, at which point either an upgrade or replacement should occur.