Modern Infrastructure


CI and disaggregated server tech can converge after all

Converged IT and disaggregated servers are trends that work together. Learn how you can use both to optimize the performance and cost-efficiency of data center infrastructure.

I've talked about the inevitability of infrastructure convergence, so it might seem like I'm doing a complete 180-degree turn by introducing the opposite trend: disaggregation. Despite appearances, disaggregated server technology isn't really the opposite of convergence. In fact, disaggregated and converged servers work together.

In this new trend, physical IT components come in larger and denser pools for maximum cost efficiency. At the same time, compute-intensive functionality, such as data protection, that was once tightly integrated with the hardware is pulled out and hosted separately to optimize performance and use cheaper components.

Consider today's cloud architects building hyperscale infrastructures: instead of buying monolithic building blocks, they choose to pool massive amounts of dense commodity resources. Converged infrastructure (CI) offers pre-built racks of standard IT equipment that help ensure rapid deployment for Oracle and SAP, and stacks of hyper-converged appliances offer highly leveraged IT operations and predictable capital expenditure. New cloud infrastructures, by contrast, need to run any and all workloads brought to them at scale. Cloud architects therefore take advantage of both logical consolidation and massively dense physical resource pooling.

What's interesting is that both larger trends -- convergence and disaggregated servers -- depend on software-defined resources. Software-defined resources take advantage of ever-increasing compute power and the decreasing cost of silicon chips. Converged environments use software to define and manage multiple resources and all their functional capabilities on a single host. Meanwhile, disaggregated server approaches separate out those software-defined resources and host them closer to workload execution paths, or on a host with ready capacity, for massive pooling of denser and often simpler physical components. The commonality is that software-defined capabilities run where they provide the best performance and agility, while physical resources are deployed in the most cost-efficient formats for a given scenario.

These convergence and disaggregation trends come together in an interesting way. IT functionalities will always mature and converge, especially where automation and integration can alleviate the need for painful and expensive siloed management. Yet we also see underlying physical components becoming more modular in increasingly pluggable and fungible formats. In practice, I predict that the main point of converged IT becomes the whole data center rather than the rack or the row within it.

Key things to look for to future-proof your next iteration of IT architecture:

  • Scalability: Linear scale-out pooling and clustering will eventually be a key design criterion for just about every facet of IT infrastructure
  • Modularity: Evaluate emerging plug-and-pool resources -- such as server blades, shared flash like EMC's DSSD, HP Moonshot cards for pools of CPU and GPU, or memory grids -- that could greatly increase density, lower cost per resource unit, encourage performant sharing and simplify infrastructure support
  • Composability: Look to cloud orchestration, containerized components and hybrid management layers that can converge software-defined resources directly and create any desired quality of service out of disparate and distributed underlying infrastructure
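To make the composability idea concrete, here is a minimal sketch of what a composition layer might do: claim capacity from disaggregated resource pools to assemble a desired quality-of-service tier. The pool names, capacities and QoS spec are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: composing a QoS tier from disaggregated resource pools.
# Pool names, capacities and the spec format are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Pool:
    name: str
    kind: str          # e.g. "cpu", "flash", "memory"
    free_units: int


def compose(pools, spec):
    """Greedily claim units from pools to satisfy a QoS spec.

    spec maps a resource kind to the units required, e.g. {"cpu": 8}.
    Returns the allocation, or None if any requirement cannot be met.
    """
    allocation = {}
    for kind, needed in spec.items():
        for pool in pools:
            if pool.kind == kind and pool.free_units >= needed:
                pool.free_units -= needed          # claim capacity from the pool
                allocation[kind] = (pool.name, needed)
                break
        else:
            return None  # unmet requirement; a real composer would roll back claims
    return allocation


pools = [
    Pool("blade-cpu", "cpu", 64),
    Pool("shared-flash", "flash", 16),
    Pool("mem-grid", "memory", 256),
]
gold_tier = compose(pools, {"cpu": 8, "flash": 4, "memory": 32})
```

A real orchestration layer would add rollback, scheduling and placement policy on top, but the core move is the same: the "server" an application sees is composed on demand from whichever physical pools are cheapest and closest.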

By using both converged IT and disaggregated servers, you can build the best of both worlds -- a low-Opex, commodity-priced and scalable plug-in infrastructure that supports the quality-of-service needs of a given application through converged and composed software-defined layers.

Of course, infrastructure and application performance management is still a real challenge. In these pooled, containerized and software-defined resource environments, what do you do when application performance goes bad? For any kind of troubleshooting, performance management or capacity planning, how can you peel back the layers of the onion when the onion itself is dynamically changing?

One answer is to manage our infrastructure as its own internal Internet of Things. We will need to apply big data analytics, sophisticated machine learning, and maybe even some advanced AI to optimize our logically converged but physically disaggregated infrastructure. But given that Google's AlphaGo recently defeated a top human Go champion -- a significant machine learning achievement by one of the biggest cloud providers -- I'm optimistic that we'll soon see very smart IT management technologies capable of handling these cloud infrastructures.
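Treating infrastructure as its own Internet of Things starts with something much simpler than AlphaGo: streaming component telemetry through statistical baselining. The sketch below flags samples that deviate sharply from a rolling baseline; the metric values, window size and threshold are illustrative assumptions.

```python
# Minimal sketch of telemetry analytics for pooled infrastructure:
# flag metric samples far outside the rolling baseline of recent history.
# Window size and threshold are illustrative assumptions.
from statistics import mean, stdev


def find_anomalies(samples, window=10, threshold=3.0):
    """Return indices of samples more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies


# Simulated latency readings (ms) from one pooled component; the spike
# at index 10 is the kind of event a management layer should surface.
latencies = [10.0, 11.0, 9.5, 10.2, 10.8, 9.9, 10.1, 10.4, 9.7, 10.3, 48.0, 10.2]
spikes = find_anomalies(latencies)
```

This is only the first rung of the ladder -- the machine learning the article anticipates would learn baselines per component, correlate across the composed layers, and trace an anomaly back through dynamically changing infrastructure -- but the ingestion-and-baseline pattern is where such systems begin.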

Mike Matchett is senior analyst at Taneja Group. Reach him on Twitter: @smworldbigdata.

