Even though its exact definition remains murky, a software-defined data center is something that IT pros are starting to strive for. By replacing physical hardware in the data center with technologies like software-defined networking and software-defined storage, IT teams can increase automation and efficiency, and gain a number of other benefits.
In a world where efficiency matters in the data center -- in power usage and application performance -- condensing everything into software and deploying it all in one shot makes a software-defined data center (SDDC) appealing. IT teams can deliver SDDC services through virtualization and other means -- but must do so carefully, and the services should be considered early in the SDDC planning stages.
When you decide to move forward with SDDC adoption, it's helpful to be aware of the many terms and technologies that coincide with an SDDC. This condensed list of SDDC planning terminology offers an introductory look at what to expect when your data center is ready to make the transition.
Software-defined networking (SDN): With the help of virtualization, SDN aims to limit the number of manual steps required to manage and troubleshoot networks, and gives administrators control over the network from a single platform. The SDN controller supplies the network intelligence: it manages traffic flow and tells switches where to send packets.
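To make the controller's role concrete, here is a minimal sketch of centralized flow control. The controller object and rule fields are illustrative, loosely modeled on OpenFlow-style match/action rules; they are not a real product's SDK.

```python
# Hypothetical sketch: a central controller holds the forwarding rules
# for every switch, instead of each switch being configured by hand.

def build_flow_rule(switch_id, match_dst_ip, out_port, priority=100):
    """Build an OpenFlow-style rule telling a switch where to send packets."""
    return {
        "switch": switch_id,
        "priority": priority,
        "match": {"eth_type": 0x0800, "ipv4_dst": match_dst_ip},  # IPv4 traffic
        "actions": [{"type": "OUTPUT", "port": out_port}],
    }

class SdnController:
    """Single point of control: the desired rules for the whole network."""
    def __init__(self):
        self.rules = {}  # switch_id -> list of rules

    def push_rule(self, rule):
        self.rules.setdefault(rule["switch"], []).append(rule)

controller = SdnController()
controller.push_rule(build_flow_rule("sw1", "10.0.0.5", out_port=3))
```

The point is the shape of the workflow: an admin (or an automation script) talks only to the controller, and the controller programs every switch.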
Automation: Automated task management and resource scheduling drive an SDDC. For IT managers, one of the primary goals of an SDDC is to automate compute, storage, security, networking and other data center services through a single platform. Among its other benefits, automation frees admins to focus their energy on other areas of the data center.
VMware has made strides in automation for SDDCs, and other tools, such as Puppet and Chef, can help automate configuration management.
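The core idea behind configuration management tools such as Puppet and Chef can be sketched in a few lines: declare the desired state, compare it to the actual state, and apply only the difference. This Python sketch is illustrative only; the state names and values are invented, and real tools do far more.

```python
# Illustrative sketch of declarative configuration management:
# converge actual state toward a declared desired state.

desired_state = {"ntp": "installed", "firewalld": "running", "telnet": "absent"}

def converge(actual_state, desired):
    """Return the actions needed to bring actual state in line with desired."""
    actions = []
    for name, want in desired.items():
        if actual_state.get(name) != want:
            actions.append((name, want))
            actual_state[name] = want  # in real tools, this step does the work
    return actions

actual = {"ntp": "installed", "telnet": "running"}
plan = converge(actual, desired_state)
```

Because the run is driven by declared state rather than a script of steps, re-running it on an already-correct system produces an empty plan, which is what makes this style of automation safe to repeat.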
Software-defined security (SDS): With data protection still a top priority during SDDC planning and adoption, automated intrusion detection, network segmentation and other capabilities make SDS another integral term. SDS lets companies monitor countless aspects of the data center, and helps ensure security policies are consistently enforced. For organizations that adopt SDN -- and, eventually, a full SDDC -- software-based security models are a better fit to support and secure open protocols, such as OpenFlow.
Network virtualization: Network virtualization is one of the major components of the transition away from high-density hardware during the SDN and SDDC planning stages. It puts network speed, reliability and flexibility at the forefront, and allows network admins to manage files, programs and other components from one location.
To implement network virtualization in the data center, consider Layer 2 or Layer 3 constructs or overlay networks to transition workloads to a virtual environment. This deployment should be a cheaper and faster long-term option than a traditional data center network.
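An overlay network such as VXLAN illustrates the mechanism: the original Layer 2 frame is wrapped in a header that carries a virtual network identifier (VNI), so many virtual networks can share one physical underlay. The sketch below builds the 8-byte VXLAN header as defined in RFC 7348 (flags byte 0x08, then a 24-bit VNI); the frame payload is a placeholder.

```python
import struct

# Sketch of VXLAN-style encapsulation: wrap a Layer 2 frame in a header
# carrying a 24-bit virtual network identifier (VNI).

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags (0x08) + 24-bit VNI."""
    flags = struct.pack("!I", 0x08000000)   # "VNI present" flag, reserved bits
    vni_field = struct.pack("!I", vni << 8)  # VNI in bits 8..31, low byte reserved
    return flags + vni_field

def encapsulate(inner_frame, vni):
    """Prepend the overlay header; the underlay only sees the outer packet."""
    return vxlan_header(vni) + inner_frame

packet = encapsulate(b"original-l2-frame", vni=5001)
```

In a real deployment the encapsulated packet is itself carried inside UDP/IP across the physical network; the sketch stops at the VXLAN header to keep the idea visible.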
Software-defined storage (SDS): SDS shifts the emphasis from traditional storage hardware to storage-related services, and moves features such as deduplication and replication into software. The flexibility of SDS appeals to administrators, and allows them to control and automate storage resources through programming and a policy-based management system. Admins also see a boost in efficiency because storage services are decoupled from the underlying physical hardware.
Storage virtualization: Storage virtualization is the technology that enables admins to pool storage resources from multiple pieces of hardware and manage them from one location. Admins can perform backup, archiving and recovery more quickly than they could with physical storage.
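The pooling idea can be sketched in a few lines: capacity from several physical devices is aggregated, and volumes are carved out of the pool without the caller ever choosing a device. The class and device names below are hypothetical.

```python
# Illustrative sketch of storage pooling: aggregate capacity from several
# physical devices and allocate volumes from one place.

class StoragePool:
    def __init__(self, devices):
        self.capacity_gb = sum(devices.values())  # aggregate all hardware
        self.allocated_gb = 0

    def allocate(self, size_gb):
        """Carve a volume from the pool; no device is chosen by the caller."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        return {"size_gb": size_gb}

pool = StoragePool({"array-a": 500, "array-b": 250, "array-c": 250})
vol = pool.allocate(300)  # may span devices; that detail is hidden
```

Because the pool hides device boundaries, a 300 GB volume can be satisfied even though no single smaller array would need to hold it all -- the same abstraction that makes centralized backup and recovery simpler.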
Server virtualization: Server virtualization technologies allow administrators to mask server resources from users, and divide physical servers in a data center into multiple server environments, called virtual private servers. In SDDC planning, server virtualization is a step toward autonomous computing, as the environment mostly manages itself and needs little outside involvement. Possible server virtualization models include full virtualization (the virtual machine model), paravirtualization and OS-level virtualization.
Application programming interface (API): An API is a defined set of calls that lets software programs communicate with one another -- for example, a program requesting services from an operating system or another application. Middleware can bridge applications written in different languages, extending the reach of that communication. In an SDDC, admins use APIs to manage compute, storage, networking and other components.
The syntax of APIs can be confusing, but there are ways to work around these issues and use them effectively in the data center.
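One common way to work around confusing API syntax is a thin wrapper: hide the verbose request format behind one readable function. The sketch below assumes a hypothetical JSON management API; the resource names and fields are invented for illustration.

```python
import json

# Hypothetical sketch: wrap a management API's verbose JSON request
# format in one friendly function per task.

def make_request(resource, action, params):
    """Build the JSON body a hypothetical management API might expect."""
    return json.dumps({"resource": resource, "action": action, "params": params})

def create_vm(name, cpus, memory_gb):
    # Admins call this instead of hand-writing the request each time.
    return make_request("compute", "create",
                        {"name": name, "cpus": cpus, "memory_gb": memory_gb})

body = create_vm("web-01", cpus=2, memory_gb=8)
```

The wrapper also gives the team one place to fix things when the API's syntax changes, rather than a correction in every script that calls it.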
Composable infrastructure: Composable infrastructure treats data center resources as a service, and allows admins to logically pool them instead of physically configuring hardware. Developers can use composable infrastructure to define an application's requirements, and then use APIs to define the infrastructure components an application needs. With composable infrastructure, the physical location of the different pieces of hardware is no longer important. This allows for flexibility in SDDC planning and adoption.
Composable infrastructure is similar to infrastructure as code.
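A small sketch makes the contrast with hardware configuration visible: the application declares what it needs, and resources are reserved from shared logical pools through code rather than by racking specific boxes. All pool and resource names here are illustrative.

```python
# Illustrative sketch of composable infrastructure: an application states
# its requirements, and resources are drawn from shared logical pools.

pools = {"cpu_cores": 64, "memory_gb": 512, "storage_gb": 10000}

def compose(requirements):
    """Reserve resources from the shared pools for one application."""
    for res, amount in requirements.items():
        if pools[res] < amount:
            raise RuntimeError(f"not enough {res}")
    for res, amount in requirements.items():
        pools[res] -= amount  # no specific hardware is named anywhere
    return dict(requirements)

app = compose({"cpu_cores": 8, "memory_gb": 32, "storage_gb": 500})
```

Because nothing in the request names a physical device, the same call works wherever the pools happen to live -- which is exactly why the physical location of hardware stops mattering.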
Hyper-converged infrastructure (HCI): HCI refers to a system that tightly integrates compute, storage, networking and virtualization resources.
HCI and SDDC both offer administrators the chance to eliminate bulky hardware in favor of a single platform to manage data center tasks. However, eliminating silos in favor of a single platform could be an area of concern, especially because of vendor lock-in risks.