Compare benefits of a blade server architecture vs. hyper-convergence

To determine if a series of blade servers is the right call for more condensed compute in your data center, evaluate its benefits compared to hyper-convergence.

Modern data centers aim to simplify hardware platforms while increasing operational agility. Until recently, IT teams achieved this by using a blade server architecture, but the pressure to include storage within those systems gave rise to hyper-converged infrastructure.

But since both hyper-converged infrastructure and blade servers offer more condensed compute, it can be difficult to choose between the two. Before deciding, it's important to look at the attributes of both systems to understand the benefits and trade-offs.

Comparing a blade server architecture to hyper-converged systems

In less than 20 years, blade servers have evolved to include switching and storage devices. A typical blade server configuration consists of a carrier unit with a set of side-by-side compute blades in a hot-swap backplane, plus redundant power supplies and a pair of switch modules. While drive blades are available, most blade servers either have no drive bays or provide a set of bays that all of the blades share. Tight packaging limits CPU choices to cooler, lower-performance parts, and power and space constraints also cap dynamic RAM (DRAM) capacity.

Hyper-converged systems derive from traditional rack servers and tend to adhere to the 1U modularity of those units. These systems combine storage platforms and servers to the point where they become essentially indistinguishable. While traditional storage required large arrays of hard drives to achieve adequate performance, today's SSD-based appliances typically hold just 8 to 12 SSDs, plus a commercial off-the-shelf controller that is essentially identical to a server motherboard.

The key distinction between hyper-converged infrastructure and a blade server architecture is that in hyper-converged systems, the storage is networked and then pooled to create a huge virtual SAN. Innovations such as software-defined infrastructure take this further, to the point that the storage pool and the networks connecting the appliances are virtualized and controlled automatically by orchestration software. This allows tenants of an HCI-based cloud to add to and subtract from their configurations using scripts and policies, without central IT intervention, as the sketch below illustrates.
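As a minimal sketch of that self-service model, the following Python snippet shows how a tenant script might request extra capacity through an orchestration REST API, enforcing a policy before the call. The endpoint, payload fields and quota values are hypothetical stand-ins; real orchestration stacks expose their own APIs:

```python
# Minimal sketch of tenant self-service scaling in an HCI cloud.
# The endpoint, payload fields and policy limits are hypothetical;
# a real orchestration stack exposes its own API for this.
import requests

ORCH_API = "https://orchestrator.example.com/api/v1"    # hypothetical endpoint
TENANT_POLICY = {"max_nodes": 8, "max_tb_storage": 64}  # assumed quota limits

def scale_cluster(tenant_id: str, add_nodes: int, add_tb: int) -> None:
    """Request extra compute/storage, enforcing the tenant's quota locally."""
    current = requests.get(
        f"{ORCH_API}/tenants/{tenant_id}/cluster", timeout=30
    ).json()
    if current["nodes"] + add_nodes > TENANT_POLICY["max_nodes"]:
        raise ValueError("request exceeds node quota")
    if current["tb_storage"] + add_tb > TENANT_POLICY["max_tb_storage"]:
        raise ValueError("request exceeds storage quota")
    # The orchestrator carves the extra capacity out of the pooled virtual SAN.
    requests.post(
        f"{ORCH_API}/tenants/{tenant_id}/cluster/scale",
        json={"add_nodes": add_nodes, "add_tb_storage": add_tb},
        timeout=30,
    ).raise_for_status()

scale_cluster("tenant-42", add_nodes=2, add_tb=8)
```

Because the quota check and the scale request are both just code against the orchestrator, central IT only sets the policy limits rather than fielding each request.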

How to create hyper-converged infrastructure with blade servers

Since the main difference between blade servers and hyper-converged infrastructure is software, is it reasonable to believe that a blade server architecture can be used as hyper-converged infrastructure? After all, blade servers have all the elements of storage, networks and compute in a compact package.

There is no real technical impediment to creating hyper-converged infrastructure using blade servers. But to determine which option is best for a given goal, such as building a hybrid cloud, look at other criteria. It's important to examine how current each product is technically, since server, network and storage technologies evolve quickly. Configuration flexibility goes hand in hand with that, since no data center completes a full hybrid cloud build-out in one shot. A system that embraces change and updates is necessary.

Configuration strongly affects storage requirements, since a typical server needs at least two local SSDs to operate. The sweet spot is likely to be more drives, which allows for data redundancy and network sharing. Hyper-converged systems can handle this, but current blade servers typically offer a single drive per server, or sometimes none, and rely on just a bunch of disks (JBOD) enclosures connected via serial-attached SCSI (SAS) to extend capacity and drive count. That doesn't fly with really fast SSDs, each of which can saturate a SAS link, since there are typically just a few SAS ports on each blade chassis.
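As a rough back-of-the-envelope illustration of that bottleneck, the short Python sketch below compares a SAS-3 lane against a fast SSD. The throughput figures are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope check on why fast SSDs outrun shared SAS links.
# All figures are illustrative assumptions, not vendor specs.
SAS3_LANE_GB_S = 12 * 0.8 / 8  # 12 Gbit/s line rate, 8b/10b encoding -> ~1.2 GB/s
FAST_SSD_GB_S = 3.2            # assumed NVMe-class SSD read throughput, GB/s
LANES_PER_PORT = 4             # a common SAS wide-port width

port_bw = SAS3_LANE_GB_S * LANES_PER_PORT
print(f"One SSD needs {FAST_SSD_GB_S / SAS3_LANE_GB_S:.1f} SAS lanes")
print(f"A x4 port ({port_bw:.1f} GB/s) saturates at "
      f"{port_bw / FAST_SSD_GB_S:.1f} SSDs")
```

Even granting a four-lane wide port, a pair of fast SSDs fills it, which is why a handful of SAS ports per chassis can't feed a blade enclosure full of modern flash.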

To support the hyper-converged approach, add leading-edge technologies, such as remote direct memory access (RDMA) Ethernet adapters and non-volatile dual in-line memory modules (NVDIMMs). That's a problem for blades: they tend to be closed systems with low height limits on dual in-line memory modules, which can rule out NVDIMMs.

The cost factors

The cost issue comes down to the proprietary nature of the blade chassis and canisters. These are pricey items, and vendor lock-in is strong. Proprietary blades preclude direct competition and also slow the adoption of new generations of drives and network interface cards. As innovation moves quickly, this becomes more of an issue.

Another cost factor is that there are just a few blade vendors left, while hyper-converged systems are coming from all of the systems and storage vendors, including the original design manufacturers that deliver to AWS, Google and Azure. This level of competition will drive hyper-converged infrastructure prices down compared to blades.

In the future of this debate, expect server motherboards to shrink in size with the arrival of Gen-Z technology late next year. SSDs may do the same, migrating to an M.2 form factor or NVDIMM design. This may bring us full circle, back to a more blade-like system, since the drives are much more compact. Because other scenarios might also play out, it's important to stay flexible.

