So far, no vendor has a complete software-defined data center offering. Instead, organizations need to assemble a collection of hardware and software parts from multiple vendors. While hyper-converged infrastructure preassembles software-defined parts, it can't deliver a whole software-defined data center.
Before purchasing and implementing a complete software-defined data center (SDDC), determine how much of a software-defined platform your business needs and what collection of products will meet those needs.
What defines an SDDC?
An SDDC is a conceptual infrastructure in which every element is controlled through abstraction, pooling, automation and policy. Infrastructure as code is a key element of this software-defined platform. Conventional enterprise IT infrastructure relies on manual builds of many unique elements. An SDDC instead uses version-controlled source files that describe the desired infrastructure, driving automation and producing a consistent and repeatable build. This consistency and repeatability are the foundation for delivering services to your software-defined platform's users.
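The desired-state idea behind infrastructure as code can be sketched in a few lines. The data format, resource names and field names below are purely illustrative, not taken from any particular tool:

```python
# Hypothetical desired-state description, as it might live in a
# version-controlled source file (names and fields are illustrative).
desired_state = {
    "web-vm": {"cpu": 4, "ram_gb": 16},
    "db-vm": {"cpu": 8, "ram_gb": 64},
}

def plan_changes(current, desired):
    """Compare the actual infrastructure with the desired state and
    return the actions an automation engine would need to apply."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("delete", name))
    return actions

current_state = {"web-vm": {"cpu": 2, "ram_gb": 16}}
print(plan_changes(current_state, desired_state))
```

Because the source file, not a human operator, defines the target configuration, running the same plan twice against the same state produces the same result, which is what makes the build repeatable.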
A complete SDDC is almost entirely automation-driven. Developers or business units consume the infrastructure using application programming interfaces (APIs) and automation. Most organizations aren't ready for this level of SDDC because people and business processes hold the IT automation back, changing more slowly than SDDC technology does.
You don't need fully automated and abstracted infrastructure to get the benefits of SDDC, however. An SDDC contains a collection of software-controllable components that might include:
- Hypervisor: software-defined CPU and RAM, with some storage and networking;
- Software-defined networking for both physical and virtual networking;
- Software-defined storage to pool and stratify various storage resources;
- Configuration management software for hypervisor hosts, VM operating systems and applications;
- Software-defined software, such as Docker and other container management tools, that enable application developers; and
- Software-defined operations, such as backup, disaster recovery (DR), capacity management and performance management.
Rather than manage the SDDC's various dimensions through direct manipulation of every element on each VM, application or physical server, the IT team manages an SDDC through a series of policies. To achieve automation, each software-defined element needs a good automation mechanism: either an API that integrates with other automated processes, or configuration files that are versioned and source controlled. The software-defined platform enables a policy to apply programmatically to collections of VMs, compute, network and storage components of the data center.
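A minimal sketch of policy-based management might look like the following. The tag names and policy fields are hypothetical; the point is that a policy targets a collection of VMs by tag rather than each machine individually:

```python
# Hypothetical inventory: VMs carry tags instead of per-machine settings.
vms = [
    {"name": "app1", "tags": {"tier": "critical"}},
    {"name": "app2", "tags": {"tier": "standard"}},
    {"name": "db1", "tags": {"tier": "critical"}},
]

# Policies are defined once per tier, not once per VM (fields illustrative).
policies = {
    "critical": {"backup_interval_hours": 4, "replicas": 2},
    "standard": {"backup_interval_hours": 24, "replicas": 1},
}

def effective_policy(vm):
    """Resolve a VM's policy from its tier tag; adding a VM with the
    right tag automatically puts it under the right policy."""
    return policies[vm["tags"]["tier"]]

for vm in vms:
    print(vm["name"], effective_policy(vm))
```

Adding a fourth VM tagged `critical` would pick up the four-hour backup policy with no per-machine configuration, which is the operational payoff of managing through policy.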
The role of HCI in SDDCs
Hyper-converged infrastructure (HCI) is primarily software-defined compute, coupled with software-defined storage. All HCI vendors offer some sort of distributed storage running in or on a hypervisor, and include provisioning and management of the underlying physical servers. The hypervisor platforms provide some software-defined networking within each physical node. This is a start, but it's far from a complete platform.
One of the enablers for SDDC is an automation API for provisioning and configuring the HCI. For a full SDDC, a hyper-converged box must allow for the deployment of additional automated nodes, and for that automation to be version-controlled. Ideally, policies should pool and automate the spare HCI capacity, then allocate it where it is required within the infrastructure.
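The idea of a policy that pools and allocates spare HCI capacity can be sketched as a simple threshold rule. The function, parameters and threshold below are illustrative assumptions, not any vendor's actual API:

```python
# Hypothetical capacity policy: when spare capacity in the HCI pool
# drops below a set fraction of the total, the automation layer
# should provision additional nodes.
def nodes_to_add(total_capacity_tb, used_tb, node_size_tb, spare_fraction=0.2):
    """Return how many nodes to provision so that spare capacity stays
    at or above spare_fraction of the (growing) total. Illustrative only."""
    needed = 0
    while (total_capacity_tb - used_tb) < spare_fraction * total_capacity_tb:
        total_capacity_tb += node_size_tb  # one more node joins the pool
        needed += 1
    return needed

# A pool at 90% utilization triggers a scale-out decision.
print(nodes_to_add(total_capacity_tb=100, used_tb=90, node_size_tb=20))
```

In a full SDDC, the output of a check like this would feed a version-controlled provisioning workflow rather than a manual purchase-and-rack process.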
It's important to keep the hyper-converged infrastructure definition simple, said George Crump, founder of analyst firm Storage Switzerland.
Some hyper-converged vendors offer more SDDC with additional features. The most common are backup and replication, which are integrated into the storage and are policy-driven in the HCI management console. On HCI, backup and replication should be controlled by policies aligned to business requirements rather than technologies. For example, a policy might state that a critical system must be backed up every four hours. Another policy might state that the DR copy of the same critical system should never be more than one hour behind the production system. Policy-based management is a central objective for an SDDC and should be tied to the provisioning of capacity and workloads.
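The two example policies above, a backup every four hours and a DR copy no more than one hour behind production, can be expressed as a small compliance check. The function name and structure are a sketch, not a real HCI console's API:

```python
from datetime import datetime, timedelta

# Thresholds taken from the example policies in the text.
BACKUP_INTERVAL = timedelta(hours=4)   # critical systems: backup every 4 hours
MAX_DR_LAG = timedelta(hours=1)        # DR copy at most 1 hour behind production

def is_compliant(last_backup, dr_lag, now):
    """Return True only if both the backup-age policy and the
    DR-lag policy hold for a critical system."""
    return (now - last_backup) <= BACKUP_INTERVAL and dr_lag <= MAX_DR_LAG

now = datetime(2017, 1, 1, 12, 0)
print(is_compliant(datetime(2017, 1, 1, 9, 0), timedelta(minutes=30), now))  # within both limits
print(is_compliant(datetime(2017, 1, 1, 6, 0), timedelta(minutes=30), now))  # backup too old
```

Note that the policy is stated in business terms (hours of acceptable data loss) rather than in technology terms (snapshot schedules on a particular array), which is the distinction the article draws.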
Most HCI products do not offer software-defined networking to link the nodes and VMs. None of the HCI platforms integrate physical switch configuration; instead, they mostly rely on the hypervisor to set up virtual networking. None of the HCI platforms manage the operating system or applications inside the VM, so you'll need to add configuration and application management tools, such as Puppet, Chef or Ansible.
While HCI platforms do not provide a complete SDDC, that's not their intended goal. HCI products are generally a good software-defined platform from which to build out an SDDC. As competition intensifies among HCI vendors, we are likely to see further software-defined capabilities added to the platforms.