When IT has one of its periodic technology spurts, hype surges along with it.
From this, an abundance of buzzwords emerges -- among them, converged and hyper-converged platforms. In many ways, these are system integration taken to a new level, often in instances where a commercial off-the-shelf (COTS) server would be good enough.
Converged systems, such as VCE's Vblock, comprise elements designed to work together. This product category reflects the idea that networking and storage functionality co-reside in the virtual machine space on servers. Converged systems are configured for specific use cases and different scales of task, removing many uncertainties in component selection and reducing installation time.
Hyper-converged platforms go a step further, adding the idea of standardized modules. Instead of adding disk or memory specifically, scaling is achieved by adding more modules -- almost completely eliminating hardware and platform decisions. Nutanix and SimpliVity are examples of hyper-converged vendors.
The concept aligns with cloud usage principles, where users flexibly add or subtract identical virtual instances. The unified platform also consolidates management into a single tool set.
Less planning, less specifying, less testing and simpler management are all good things about converged infrastructures, but there are downsides.
Loss of granularity, especially in hyper-converged platforms, can end up costing a lot more if the balance of tasks evolves. For example, IT shops might need to add whole appliances just to get more disk space for a storage-heavy application. Standardizing on the converged approach leaves little or no architectural flexibility.
Converged and hyper-converged platforms cause vendor lock-in. This could trap a user in a high-cost upgrade and support cycle, and may limit competitive bidding in a market where, generally, technology cost is halved per metric every two years.
To get a sense of this paradigm, look at blade servers, which are in some ways precursors to converged systems. Most blade models on the market offer a higher level of integration out of the box than rack servers can match. Vendor lock-in is absolute, and aftermarket elements -- such as support, spares and blade upgrades -- are proprietary and expensive. Converged infrastructure presents the same risks: first-year savings with questionable total cost of ownership (TCO) benefits.
Modularity in another form
Large cloud service providers (CSPs) chose another path to simplification, via the COTS server. CSPs recognized that data center technologies experience a four-fold increase in performance about every four years. Therefore, the hyperscale data centers decided infrastructure components should be refreshed on that timetable.
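The refresh arithmetic behind that timetable can be sketched as follows. The four-fold-every-four-years figure comes from the text above; the baseline value is purely illustrative:

```python
# Sketch of the hyperscale refresh arithmetic described above.
# Assumes performance quadruples every four years, per the text;
# the baseline figure is illustrative, not a real benchmark.

def relative_performance(years: float, baseline: float = 1.0) -> float:
    """Performance relative to the baseline, with a 4x gain every 4 years."""
    return baseline * 4 ** (years / 4)

# A server bought today vs. one bought at the next four-year refresh:
today = relative_performance(0)         # 1.0
next_refresh = relative_performance(4)  # 4.0

print(f"Performance at refresh: {next_refresh / today:.0f}x the original")
```

Note that 4x every four years is equivalent to the cost-halving-per-two-years pace cited earlier, which is why a four-year refresh cycle keeps hyperscale fleets close to the price/performance curve.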
To reduce installation cycle cost and time, these major players conceived of the rack-modular and container-modular approaches to scale-out capacity. This modularity presents the same benefits as convergence without the vendor lock-in issues. The servers installed in modular systems are typically 1U units and can be of mixed configurations. Storage appliances can be fitted into the racks alongside servers to provide balanced resources for workloads.
This approach simplifies module configuration and specification, with assembly and testing carried out before the modules arrive at the data center. It allows a shorter delivery and installation cycle.
The major downside to the COTS server approach is that the buying power of major cloud companies hasn't trickled down to enterprise IT.
Google's game plan, for example, is to spec out a new server interactively with the supplier, buy and test a rack's worth, and then buy tens of thousands of units. Schedules are usually very compressed, and prices are hammered down.
This commodity, modular approach isn't the relationship server vendors want with enterprise IT. It's a rough market to play in. Dell and HP find it tough to sell at any decent margin, and EMC has basically given up.
Enterprises trying this route must be committed to most of the same ideas that Google and other hyperscale data centers use. They can leverage existing designs, but must be hard-nosed and willing to change vendors whenever the price is too high.
Choosing an approach will become clearer as the original design manufacturers that build those huge volumes of servers for the CSPs break into the U.S. market. The price of rack-mount COTS servers will decrease, leaving the gap between converged products and simpler approaches more noticeable.
A whole-lifecycle TCO calculation is going to bear heavily on your decisions between a COTS server, converged system or the traditional mix of data center infrastructure.
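A whole-lifecycle comparison of that kind might be sketched like this. Every figure below is a hypothetical placeholder, not a vendor quote; the point is only that proprietary support costs and up-front discounts pull in opposite directions over the service life:

```python
# Minimal whole-lifecycle TCO sketch for comparing acquisition approaches.
# All numbers are hypothetical placeholders; substitute real quotes.

def lifecycle_tco(purchase: float, annual_support: float,
                  annual_ops: float, years: int) -> float:
    """Total cost of ownership over the platform's service life."""
    return purchase + (annual_support + annual_ops) * years

# Hypothetical 4-year comparison:
converged = lifecycle_tco(purchase=200_000, annual_support=30_000,
                          annual_ops=10_000, years=4)  # proprietary support
cots = lifecycle_tco(purchase=150_000, annual_support=12_000,
                     annual_ops=18_000, years=4)       # more in-house ops work

print(f"Converged TCO: ${converged:,.0f}")
print(f"COTS TCO:      ${cots:,.0f}")
```

With these assumed inputs the COTS route wins, but shifting the support and operations rates can easily reverse the result -- which is exactly why the calculation needs to span the whole lifecycle rather than the first year.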
About the author:
Jim O'Reilly was vice president of engineering at Germane Systems, where he created ruggedized servers and storage for the U.S. submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.