Hyper-converged infrastructure has become quite popular in the data center, but it is not a good fit for every use case.
The biggest disadvantage of hyper-converged infrastructure (HCI) is its inflexibility. Vendors preconfigure systems to support typical server, storage and network requirements, but not all workloads are created equal.
Some applications need large amounts of storage, while others consume a lot of network bandwidth. Additionally, hyper-converged infrastructure relies on virtualization, which means multiple applications share the same finite pool of resources.
As a result, IT professionals must pay close attention to capacity and their scalability needs when evaluating potential hyper-converged infrastructure use cases -- especially with big data applications and virtual desktop infrastructure (VDI).
The economics of HCI use cases
Worldwide hyper-converged infrastructure revenue grew 68% year over year during the third quarter of 2017, and it generated $1 billion in sales, according to IDC.
These systems are attractive because they integrate server, storage and networking hardware with virtualization software, creating a preconfigured, easy-to-deploy bundle. IT departments simply drop the appliances onto their enterprise networks and load applications onto them. Data center administrators spend little time configuring the devices, a process that has traditionally been tedious and time-consuming.
"HCI enables corporate IT to deploy systems in a cloud-like manner," said Stephanie Long, an analyst at Technology Business Research in Hampton, N.H.
The economics of certain hyper-converged infrastructure use cases can be unattractive, however, particularly in scenarios where workloads rely heavily on one specific component. Big data is one area where hyper-converged infrastructure has not always been a good fit: these applications store massive amounts of data and therefore need far more storage than server processing power.
"Companies have found that dynamic workloads, like Hadoop and [SAP] HANA, do not function well on hyper-converged infrastructures," said Christian Perry, research manager at 451 Research.
VDI can cause similar issues because virtual desktops create high server CPU loads. Although VDI was one of the biggest hyper-converged infrastructure use cases in its early days, a survey by Technology Business Research found that VDI no longer places among the five most popular HCI workloads.
"Some companies have encountered scaling issues with their VDI deployments," said Stanley Stevens, a manager at the firm. "Rather than go with HCI, companies have been looking to [graphics processing units] to handle VDI workloads."
If an organization needs more processing power for its virtual desktops, it's easier to scale up with GPUs than to buy additional hyper-converged systems and incur unnecessary storage and networking costs.
Hyper-converged infrastructure vendors have been working to make their systems more flexible, so that organizations can purchase the one element they need without adding other components.
Hyper-converged infrastructure, by its very nature, is a virtualized system. Virtualization enables organizations to maximize server capacity by running multiple applications on one server, but that can lead to problems with high-performance workloads, Perry said.
For instance, a company may want its e-commerce software to run without interruption. But if that software runs on a virtual server without sufficient reserved capacity, other applications may take resources away from it, causing it to slow down or crash. This is a potential pitfall for any virtualization deployment, not just those that are part of hyper-converged infrastructure use cases.
To avoid such problems, companies may choose to run certain applications on physical servers, which hyper-converged systems usually do not support.