Hyper-converged infrastructure options simplify virtual environments
NEW YORK -- There are plenty of converged infrastructure products on the market today, and each offers something slightly different from the next. What do these stacks really do, and will the market grow?
It's more than just converging storage and servers, said Doron Kempel, CEO of SimpliVity Corp., one of the earliest hyper-converged infrastructure providers, at TechTarget's Modern Infrastructure Summit last week. Products that solve data capacity, protection and mobility problems will form the next big wave of hyper-converged offerings. In fact, "hyper-converged" is just a buzzword that doesn't get at the true evolution of these products, Kempel explained.
Kempel comes from a long background in storage, as a former vice president at EMC and founder of Diligent Technologies, a deduplication company. He addressed the hyper-convergence market's progression and future in this Q&A.
What exactly is the difference between converged and hyper-converged products?
Doron Kempel: You look at large enterprises today and their IT is comprised of about 12 different products that they buy from as many different vendors. Those products include a server, a switch, then storage (an SSD array), then potentially a backup deduplication appliance, a WAN optimization appliance, a cloud gateway, perhaps data caching, then two, three or four data protection applications. 'Convergence 1.0' ... takes a server, storage, switch and [virtualization] and puts them in one container, but it doesn't take away the environmental costs of space and power, and you still need to buy all the other products. Those do not address what we call the 'data problem.'
In order to take the promise of Web economics and bring it into the enterprise, you can't just converge storage and server. You also need to include all the functionality that solves data problems -- five or six different products that dedupe and compress data at a single phase in its lifecycle. We introduced a data virtualization platform ... that does that very fast at the point of inception, when the data is written by the application.
What were the problems organizations had that weren't being solved before hyper-convergence came along?
Kempel: My sense was that if somebody is going to solve the data problem at the point of origin, then all of us dedupers are going to be out of a job. ... [Second], all the technologies are now available to build a 21st century product that assimilates all the functionality [of the mainframe] on x86 very efficiently. If we solve the data problem, that allows us to bring the promise of cloud economics into the data center.
When the data center people look at the cloud people -- Amazon, Google etc. -- what they say about them is that the cloud guys treat the applications the way a farmer treats a chicken: They're all uniform. Data center people treat the applications the way a puppy lover treats a puppy: It has a personality. So to create the economics of the cloud -- to bring the chicken-farmer logic into the puppy-lover world -- you need to introduce more capabilities. It's a difficult problem to solve, and that's why it took us three and a half years just to develop our product.
Why is the data virtualization part of hyper-convergence so important?
Kempel: The data virtualization platform is basically the underpinnings that allow you to associate very agile, granular deduped or compressed data with the applications that own it -- once, and everywhere. It addresses IOPS, data mobility, protection and capacity.
We used to ship 18 GB drives; today we ship 3, 4, 5, 6 TB drives, which means the density of the drive increased about 300 times. But the RPM, the performance of the drive, increased 45% or 50%. It's as if we used to drink with a straw out of a cup, but now, with the same straw we're drinking out of a swimming pool. It doesn't work. There is a major IOPS problem. So what do we do? We throw SSD [solid-state drives] at the problem. But SSD is very expensive. So if we dedupe the data before it ever hits the disk, we reduce the number of IOPS.
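The inline-dedupe idea Kempel describes -- hash each block as it's written and only send unique blocks to disk -- can be sketched in a few lines of Python. This is a toy illustration, not SimpliVity's implementation; the block size, hash choice and in-memory "store" are stand-ins:

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Toy inline deduplication: split data into fixed-size blocks and
    'write' each unique block only once, so duplicates cost no disk I/O."""
    store = {}    # content hash -> block; stands in for the disk
    recipe = []   # ordered hashes needed to reconstruct the data
    writes = 0    # blocks that actually hit the store
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block
            writes += 1
        recipe.append(digest)
    return store, recipe, writes

# Ten identical 4 KiB blocks: one physical write instead of ten.
data = b"x" * 4096 * 10
store, recipe, writes = dedupe_blocks(data)
print(writes, len(recipe))  # 1 unique write, 10 logical blocks
```

The `recipe` list is what makes the savings real: the original data can be rebuilt from the store, but only one copy of each block ever generated an IOP.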
Second, the problem is still capacity, but it's not the capacity on the disk drive, it's the capacity that it needs to travel on your network. And we all agree that today's IT needs to cross geographies. And in order for data to be mobile, it needs to be deduped and compressed and optimized.
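The mobility point -- that data must shrink before it crosses the network -- is easy to demonstrate with Python's standard `zlib`, used here as a generic stand-in for the dedup and compression engines Kempel is describing:

```python
import zlib

# A highly redundant payload, e.g. backup blocks with repeated content.
payload = b"backup block " * 10000
compressed = zlib.compress(payload, level=6)

ratio = len(compressed) / len(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes ({ratio:.2%})")

# The reduction is lossless: the original bytes come back intact.
assert zlib.decompress(compressed) == payload
```

Redundant enterprise data compresses dramatically, so the bytes that actually travel the WAN are a small fraction of the logical data being moved.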
Now, onto protection. There's a disconnect between the way we manage data and the way we manage storage. In our architecture, the data belongs to a virtual machine. You can manage all the VMs [virtual machines] globally and make protection decisions on a per-VM, per-application basis.
What do you think about EMC announcing that they will build a hyper-convergence platform?
Kempel: Everybody is going to get into the converged space, because there's no alternative. That's what the customers want. If they are able to deliver a new solution to the data problem and do it within a homogeneous stack, as opposed to all different software running on a lot of x86 resources, then that's great. They will further validate what we've been doing. When they made that announcement and they referenced us, we viewed that as a great compliment.
With this growth, what's the next big thing in this market going to be?
Kempel: This wave is the big wave for the next decade or two. I see incremental innovation, where basically even the cloud players are going to benefit from this technology. Their technology is already 10 years old.