Are you really getting more flexibility, higher density and easier management on a converged or hyper-converged box than you would with blade server architecture?
Blade servers were in many ways the original converged IT architecture. Depending on the vendor, models offer server blades only, servers plus redundant switches to connect the units, or packages that include disk bays as well. All of these start with a high price for the base chassis -- often in the $20,000 range or more -- and the fixed chassis typically limits cluster size to around 10 server blades.
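The fixed chassis cost also has a per-blade consequence worth spelling out: the chassis is paid for up front, so its cost is amortized across however many blades are actually installed. A minimal sketch, using the illustrative $20,000 chassis and 10-blade figures from above (all other numbers are hypothetical, not vendor quotes):

```python
# Sketch of blade chassis cost amortization. The chassis is a fixed cost,
# so a partially populated chassis carries a much higher per-server overhead.
# All prices are illustrative placeholders, not vendor pricing.

def chassis_overhead_per_blade(chassis_cost, blades_installed):
    """Fixed chassis cost spread over the blades actually installed."""
    if blades_installed <= 0:
        raise ValueError("need at least one blade")
    return chassis_cost / blades_installed

# A $20,000 chassis filled to its typical 10-blade limit vs. half full.
full = chassis_overhead_per_blade(20_000, 10)
half = chassis_overhead_per_blade(20_000, 5)
print(f"full: ${full:,.0f}/blade, half: ${half:,.0f}/blade")
```

The point of the sketch is simply that blade economics only work out when the chassis is close to fully populated, which is part of why the fixed chassis size matters.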
Commercial-off-the-shelf (COTS) components evolve at the pace of the technology. Blade servers tend to evolve at a much slower pace because any change affects many of the elements, therefore requiring longer design and testing cycles. In this respect, the blade server architecture more resembles mainframes than x86 servers.
A common argument is that a preconfigured, pretested blade assembly is much less hassle to install than rolling your own IT infrastructure, and that alone makes blades interesting to enterprise IT shops. This matches my own experience, but today it is possible to get racks or even containers of gear delivered installed, wired, tested and ready to go, so this benefit of blade server architecture is rather diluted.
The converged approach can break free from the all-proprietary nature and prices of the blade chassis. Servers with conventional motherboard architectures are possible, reducing cost. The IT shop can opt for standard switches too. Perhaps the two major advantages of the converged approach over blades are in storage, where a lot more drives are accommodated in a converged node, and in configuration of add-on features like graphical processing units (GPUs).
Commodity components in converged architectures
Converged and hyper-converged infrastructures are clusters of compute, networks and storage. In part, the idea stems from the fact that today's storage appliances look very much like servers. They have the same drive count -- 10 or 12 -- the same COTS engine and roughly the same form factor.
If they look and act the same, why differentiate the parts of the puzzle? Why not use the same elements for server and storage? The promise is cost-efficiency and flexible IT resources, but you must understand the hyper-converged architecture to disentangle the facts from the fiction.
While converged platforms achieve greater production scale than separate x86 servers and storage boxes, those economies of scale benefit the IT equipment vendor, not the data center user. Converged clusters also carry vendor lock-in points, such as the disk drives, DRAM DIMMs and other components in use. Being required to buy these from the cluster vendor can considerably increase total cost of ownership compared with the same architecture built without convergence.
Long-term flexibility and growth of the cluster are important to enterprise IT shops. With many converged systems' configurations, the architecture is limited to the vendor's offered choices. Any right-sizing of a particular module is difficult. Adding an all-flash array to the build is a problem, for example. In another example, a big data analytics application needs in-memory operation, fast drives and networks, and GPUs to be efficient. A standard converged cluster isn't going to offer the right platform to complete that work.
This rigidity of nodes places restrictions on the way the cluster runs IT workloads. The problem stemming from any converged approach is that the data center will evolve into near-independent islands of computing, especially if the IT team brings in multiple converged and hyper-converged vendors to achieve the target capability for distinct workloads.
For converged clusters to be successful, vendors and IT buyers together need to overcome the lock-in and island problems. COTS gear, made by original design manufacturers such as Supermicro and Quanta, allows an IT shop to customize clusters and subsets of clusters for a specific purpose without creating islands. Software-only vendors such as DataCore Software and Springpath are pursuing this path. Software-based convergence enables case-by-case decisions by IT teams about hardware and platforms, including open source options like Ceph.
Changing buying policies also opens up add-on alternatives, such as commodity disk drives in place of the converged vendor's proprietary drives, whose extra features can hike prices tenfold. Companies such as Nutanix and Maxta offer software-only or software-plus-hardware converged products that open up hardware choices. In total, the open converged cluster this creates will be more dynamic and flexible than a single, traditional vendor solution.
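To make the tenfold-markup point concrete, here is a small sketch of how drive pricing alone moves the storage line item for a cluster. The node count, drive count and commodity price are illustrative assumptions, not figures from any vendor; only the tenfold markup comes from the discussion above.

```python
# Hypothetical sketch: effect of a vendor drive markup on a converged
# cluster's storage spend. Counts and prices are illustrative assumptions.

def storage_cost(nodes, drives_per_node, drive_price, markup=1):
    """Total drive spend for a cluster; markup=1 means commodity pricing."""
    return nodes * drives_per_node * drive_price * markup

commodity = storage_cost(8, 12, 300)               # commodity drives
proprietary = storage_cost(8, 12, 300, markup=10)  # tenfold vendor pricing
print(f"commodity: ${commodity:,}, proprietary: ${proprietary:,}")
```

Even with modest assumed numbers, the markup dominates the storage budget, which is why freeing up drive purchasing is such a large part of the open converged cluster's cost advantage.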
Keeping in mind the provisos about traditional vendor lock-ins stated above, converged solutions still deliver more bang for the buck than blade servers, while offering more flexibility.