
Hyper-converged systems will underpin hybrid cloud, says Dell EMC exec

Enterprises will increasingly fill data centers with rack-scale hyper-converged infrastructure as the basis for hybrid clouds, says the CTO of Dell EMC's converged platforms division.

The most recent converged infrastructure market analysis from IDC confirmed the ongoing trend: booming interest in hyper-converged systems and waning interest in traditional converged systems. But behind the numbers, what do enterprises really want?

SearchDataCenter caught up with Trey Layton, who has been CTO of Dell EMC's converged platforms and solutions division (CPSD) -- formerly known as VCE -- for the past seven years, to discuss the integrated systems market, the future of hyper-converged systems, the company's support for open source and plans to address the full stack.

What do you see as the major differences for customers evaluating hyper-converged systems between the reference architecture approach versus prebuilt products?


Trey Layton: When you talk about a reference architecture approach versus an appliance or rack-scale approach with some pretty defined configurations, the predefined configurations are designed to give a turnkey outcome. Those are based on engineering standards that we can repeat, so we can deliver a complete, ongoing operation and support experience.

For example, a customer has a problem, and they pick up the phone and call support. Because an appliance is built to a specific set of engineering standards, with software integrated in a particular way and a fixed set of hardware options, we don't have to explore with the customer what configurations they have -- we already know. We get straight to solving the problem.

However, there are customer instances where they are deploying in a manufacturing plant or some remote facility, such as an oil rig, where there are requirements to accommodate some workload that we may not have within the options of the appliances in our portfolio. Maybe they need 100 terabytes of usable storage in a node. Maybe they need more memory than we would offer. Those are just examples, but we see them all the time: A customer wants a certain type of CPU configuration -- maybe a low-power-usage CPU -- and those options are not available in the appliance.

The approach to the portfolio with an engineered outcome is to deliver a very precise, ongoing lifecycle experience. The reference architecture, ready-node approach is about maximum configuration flexibility to accommodate some unique element of the customer environment. And the customer takes on ownership of the ongoing lifecycle, because they are making sure the software-defined storage (SDS) is integrated into the environment in the way they want to deploy it to meet their specific requirements.

What are some specific ways for a customer to avoid confusion or making the wrong choice?

Layton: My recommendation to the customer is always to go down the path of an appliance first, because the total cost of ownership is lower. There are use cases where customers get into a situation where the standards we have in the appliance architectures do not meet their business needs, and that is the only time I would deviate and go to the ready-node configuration.

We sell to a lot of OEMs and organizations that build products for their own customers. As Dell EMC, it may make a difference for someone who is taking an OEM approach to our technology to acquire the ready-node configurations, because it allows them maximum flexibility to create their offer for their environment. They are simply taking on a greater degree of ownership of integration versus having it done for them.

Where is there a greater need for functionality, or opportunity for greater innovation, in the hyper-converged systems area?

Layton: The interesting thing about the evolution of the hyper-converged market is that it initially started as CI [converged infrastructure] versus HCI [hyper-converged infrastructure]. I think customers are largely beginning to realize that CI and HCI will coexist because there are a lot of mission-critical applications that demand data services that are only resident in a traditional storage array. Those applications power the world's economy in a lot of ways, so CI and HCI will coexist, and those battles are largely subsiding.

We have also seen that when the players entered the HCI market, it really started as an appliance play, with small-form-factor deployments of up to nine or 10 nodes in the maximum configuration.

We started to see in [the fourth quarter of] last year that the shift has moved to consumption in core data centers, where racks and racks of HCI -- what we classify as rack-scale HCI -- are growing significantly. We believe that is where the majority of the market's growth will be over the next year, in addition to the growth that already exists in the appliance segment.

The next phase that you are going to see is this: Within our portfolio at Dell EMC, we have also made significant investments in hybrid cloud architectures where, essentially, software technology creates intellectual property to spin the various integration layers together. We are finding that future HCI portfolios will have that hybrid cloud intellectual property embedded within them. So, instead of thinking of hyper-converged simply as infrastructure, I think the future reason you will buy hyper-converged architectures is to dramatically accelerate the deployment of a full cloud stack.

One of the biggest criticisms of the rack-scale and appliance-based approach is that component needs may not scale evenly, such as compute and storage. What are some of the ways you see customers overcoming that problem?

Layton: There are multiple layers of answers, depending on the situation that customers find themselves in. If a customer is acquiring an appliance, there are configurations that give the customer greater storage capacity and have less compute and memory. You need an Intel processor and you need memory just to run the software to host the SDS -- that is a requirement.

You typically find the balanced configuration when you have enough CPU and memory to accommodate the hypervisor and a certain complement of virtual machines. People typically target somewhere between 10 and 20 virtual machines per core. That is a general guideline, although you do see some go with less and some go with more. The sweet spot is the amount of memory and CPU that supports that many virtual machines per core, or within that range.
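As a rough illustration of that per-core guideline, the back-of-the-envelope sketch below shows how the arithmetic works out. The node profile and per-VM memory figures here are hypothetical assumptions for illustration only, not Dell EMC sizing guidance.

# Back-of-the-envelope HCI node sizing, following the 10-20 VMs-per-core
# guideline quoted above. Node specs and VM profile are hypothetical.

def vm_capacity(cores_per_node, vms_per_core_low=10, vms_per_core_high=20):
    """Rough range of VMs a node could host under the per-core guideline."""
    return cores_per_node * vms_per_core_low, cores_per_node * vms_per_core_high

def memory_needed_gb(vm_count, avg_vm_mem_gb=4, hypervisor_and_sds_gb=32):
    """Estimate node memory: VM working sets plus a reserve for the
    hypervisor and the software-defined storage (SDS) layer."""
    return vm_count * avg_vm_mem_gb + hypervisor_and_sds_gb

low, high = vm_capacity(cores_per_node=24)
print(f"24-core node: roughly {low} to {high} VMs")
print(f"Memory to back {low} VMs at 4 GB each: ~{memory_needed_gb(low)} GB")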

When you get into a situation where you need extra storage capacity but don't need compute, there are appliances that have a minimal amount of compute and memory -- just enough to run the SDS software -- and you simply add those as storage nodes. There is also an option to acquire nodes that have small disk capacity, maybe small flash drives, and a lot of CPU and memory, because you already have large clusters of storage. There is an ability today to sway either way. It really depends on the customer's workload and, most importantly, the customer's network environment to see how much they benefit by swaying those configurations -- either more toward compute or more toward storage.
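The same kind of rough arithmetic applies when deciding how far to sway a cluster toward storage or compute. The sketch below uses hypothetical node profiles -- not actual appliance or rack-scale configurations -- to estimate a node mix for a given workload.

import math

# Hypothetical node profiles for illustration; real appliance and
# rack-scale configurations will differ.
STORAGE_NODE_USABLE_TB = 40   # usable capacity per storage-heavy node
COMPUTE_NODE_CORES = 32       # cores per compute-heavy node

def node_mix(required_tb, required_cores):
    """Rough count of storage-heavy vs. compute-heavy nodes for a workload."""
    storage_nodes = math.ceil(required_tb / STORAGE_NODE_USABLE_TB)
    compute_nodes = math.ceil(required_cores / COMPUTE_NODE_CORES)
    return storage_nodes, compute_nodes

storage, compute = node_mix(required_tb=300, required_cores=128)
print(f"Storage-heavy nodes: {storage}, compute-heavy nodes: {compute}")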

The second layer of the answer is that 90% of the hyper-converged deployments in the market are designed for a hypervisor environment. They are either running KVM, VMware, Hyper-V -- some hypervisor with the intent to consolidate virtual machines on the hyper-converged architecture. There are some that are actually running a bare-metal OS.

In our VxRack Flex product, which uses ScaleIO as the SDS, we afford customers the opportunity to run a bare-metal OS -- for example, Red Hat Linux optimized for Oracle. In that instance, we have the ability to have a customer run a storage-only node that runs Red Hat Linux and allow a customer to do an array-based storage expansion if they wanted to. We also sway the other way on the compute side. But in the appliance category, you are typically dealing with the hypervisor. In the rack-scale category, we add another dimension of being able to do a bare-metal OS.

Last fall at Dell EMC World in Austin, Dell EMC revealed that PowerEdge would go into most -- but not all -- of your products. What has been the feedback, and what has happened since then? Are any other moves similar to that one in the wings?

Layton: PowerEdge is resident in our certified reference systems portfolio -- our Blueprint portfolio -- where it is in 100% of the systems, and it is in 100% of our HCI portfolio. In the integrated infrastructure segment, obviously, we have a key business partner in Cisco. We have no plans to do anything to replace that integrated infrastructure offering.

In the future, might we consider the creation of a PowerEdge-based integrated infrastructure? That decision has not yet been made. Right now, we are very comfortable with the portfolio as it is.

Also at last year's Dell EMC World, you widened the VxRack options. But already one of the products has been pulled back -- the Neutrino flavor on the open source end. At the time, there was a lot of talk about support for open source. Where does open source support stand today within CPSD?

Layton: We have great support in our certified reference systems -- our Blueprint portfolio -- for open source outcomes. We find that portfolio best serves that market because there is so much variance in how customers choose to consume and deploy open source architectures. A market for consistent integration standards has not emerged, so that is why we transitioned Neutrino, and the intellectual property associated with it, into our certified reference portfolio.


With VxRack, which is an integrated, engineered system, we focused specifically on the multi-hypervisor segment -- we classify that as KVM and VMware, the dominant share players. This year, you will see some Microsoft announcements from us in that space. The VxRack Flex portfolio is targeted for rack-scale HCI there.

In the appliance category, we use the Dell XC portfolio, which is the Nutanix OEM -- that is our appliance answer. The difference between appliance and rack-scale HCI is that with appliances, you bring your own network, while with rack-scale HCI, the network is built in. On the flip side of that, we have the tightly codeveloped VMware offering, which is VxRail from an appliance perspective and VxRack software-defined data center from a rack-scale HCI perspective.

We believe those four products allow us to address the largest addressable segment of the engineered systems, or integrated infrastructure, marketplace. We are addressing the open source realm through the certified reference systems, or Blueprints, in our portfolio.

Azure Stack will become generally available later this year. How is CPSD approaching it? Will you arrange to have your products be part of it, or will you offer something similar to achieve the same results?

Layton: Our development strategy is to participate in the Azure Stack ecosystem. We intend to be a leader in that marketplace with our hyper-converged architecture. It is not a competitive approach to us; it is to engage and participate in that ecosystem.

So, Azure Stack will be delivered through CPSD?

Layton: It absolutely will. Our current plans are to approach it both from an appliance perspective and from a rack-scale hyper-converged perspective. It is a similar approach to us supporting any other hypervisor. If customers are going to make investments in building applications in Azure, and they are going to make on-premises investments, we need to deliver an architecture and the equipment for that stack. That is our plan, and that is our intention.

But out of the gate, Microsoft's strategy has been to sell it through the four OEM partners with a certified hardware product.

Layton: In our development program now, we have investments to build out that capability for Azure Stack and to participate in Microsoft's program for it. We will be an equal participant in that process.

Robert Gates covers data centers, data center strategies, server technologies, converged and hyper-converged infrastructure and open source operating systems for SearchDataCenter. Follow him on Twitter @RBGatesTT or email him at rgates@techtarget.com.
