All but the smallest IT departments can achieve HCI-like simplicity with modern storage and servers.
Hyper-converged infrastructure (HCI) uses clustered local storage to eliminate storage area networks, but its true value comes from the way it simplifies operational tasks. HCI eliminates disk groups, RAID configurations and Fibre Channel networks, which makes it easier to go from hardware on the loading dock to a virtualization platform.
IT administrators can achieve some of the benefits of HCI using a traditional hardware setup with wizards and automation.
Hyper-convergence vs. traditional setups
The most common fallacy I see in IT marketing is comparing a new product with the technology it replaces.
For example, buyers shouldn't assume that an HCI cluster with 16 nodes in 8U of rack space completely replaces a legacy server and hard disk implementation that takes up three full racks. A non-HCI setup might use a 2U all-flash array (AFA) and a small blade enclosure to replace the same three racks. The real performance comparison is not three racks to 8U, but rather 8U of HCI to 12U of AFA and blades.
Unequal comparisons also occur when companies weigh the challenges of building a Fibre Channel network, a hard disk-based array and Gigabit Ethernet against HCI with 10 Gigabit Ethernet (GbE); this comparison is neither fair nor accurate. Buyers should evaluate a new blade enclosure and AFAs against HCI, both with 10 GbE capabilities.
Automating a hyper-converged architecture deployment
Hyper-converged systems simplify three primary elements: initial hardware configuration, hypervisor deployment and a software-defined storage (SDS) implementation. This is why admins might think it is easier to use hyper-convergence vs. traditional infrastructures; a typical HCI product completes these tasks with approximately one hour of effort by a moderately skilled data center engineer. The total time to finish can be longer, but the processes are primarily automated once the setup wizard gathers the right information.
But not all non-hyper-converged systems are complicated to set up; a hypervisor with shared storage can usually be ready for virtual machines (VMs) by lunchtime. All the major server vendors have tools that automate the initial hardware configuration. Hypervisor vendors also have installation automation tools, so hypervisor deployment and redeployment are largely automatic.
AFA architectures remove the need for RAID and integrate with hypervisor management to create VM storage. They usually add a data store using a three-step wizard that takes no more than five minutes to complete. This deployment type requires an existing infrastructure and someone who can maintain the automation.
Typically, the infrastructure is a deployment server that runs a preboot execution environment service, which might be the IT utility server or a domain controller. Automation maintenance often copies standard scripts and adds a few site-specific details, such as IP addresses.
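As a rough sketch of that maintenance step, a standard deployment script can be kept as a shared template, with only the site-specific details filled in per node. The file contents and field names below are illustrative, not any vendor's actual format:

```python
from string import Template

# A standard answer-file fragment shared across deployments; only the
# placeholders change per site (illustrative kickstart-style line).
STANDARD_TEMPLATE = Template(
    "network --bootproto=static --ip=$ip --netmask=$netmask "
    "--gateway=$gateway --hostname=$hostname\n"
)

def render_site_config(hostname: str, ip: str, netmask: str, gateway: str) -> str:
    """Copy the standard script and add the site-specific details."""
    return STANDARD_TEMPLATE.substitute(
        ip=ip, netmask=netmask, gateway=gateway, hostname=hostname
    )

config = render_site_config("host01.example.com", "10.0.0.11", "255.255.255.0", "10.0.0.1")
print(config)
```

The deployment server then serves the rendered file to each node over the preboot execution environment, so the per-site work reduces to supplying a handful of values such as IP addresses.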
For IT departments that require multiple separate clusters, it is easy to achieve a simplified installation without hyper-converged systems. These companies have the existing infrastructure and the expertise on staff to maintain the automation scripts. For customers that only need a single, self-bootstrapping cluster, it is more cost-effective to buy an HCI product that can deploy without an existing infrastructure.
The discussion about the ease of a data center deployment is a bit of a smoke screen. The real work of IT operations happens over the life of the system, not just at deployment, which only happens once every few years. Updating things such as the basic input/output system, firmware, hypervisors and SDS programs happens several times a year.
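To see why those recurring updates dominate the operational cost, consider that each one is typically a rolling process: every node must be drained of VMs, updated and returned to service, one at a time, to keep the cluster online. The sketch below is a minimal illustration; the node names and the update callback are hypothetical stand-ins for vendor-specific tooling:

```python
def rolling_update(nodes, apply_update):
    """Update cluster nodes one at a time so capacity stays online.

    `apply_update` stands in for the real BIOS, firmware, hypervisor or
    SDS update step, which varies by vendor (hypothetical interface).
    """
    log = []
    for node in nodes:
        log.append(f"evacuate VMs from {node}")   # drain the node first
        log.append(f"update {node}: {apply_update(node)}")
        log.append(f"return {node} to service")   # rebalance before the next node
    return log

steps = rolling_update(["node1", "node2", "node3"], lambda n: "firmware 2.1 applied")
```

Multiply that per-node choreography by several update cycles a year, and the lifetime operational effort quickly outweighs the one-time deployment.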
Simple deployments are useful and can help boost business agility, but the real cost savings of hyper-convergence vs. traditional servers come from systems that are easy to operate over their entire lifetime.