Eric Hanselman, research director for networking at 451 Research in Boston, agrees that the software-defined data center (SDDC) is about improving integration, as well as about automation. The goal is to take activities that often involve physical changes and manual processes, and integrate them with other, more automated data center practices, he said. The starting point is virtualization.
"You have to have a certain amount of abstraction to deal more flexibly with the various resources," Hanselman said, "but the real value is achieving a higher level of management integration. It doesn't have to be cloud initially, but it is going to look like cloud."
An SDDC's primary goal is to make it easier to change server, storage and, in particular, network configurations. An SDDC does that, Hanselman explained, by automating how the entire data center infrastructure is partitioned and scaled, and thus delivering much greater efficiency. At the moment, it is still early days for SDDC adoption, he said. So far, the organizations building hyperscale data centers are the ones blazing the trail. "There have been some widely discussed implementations, particularly Google," he said.
Google has done interesting things to shift internal capacity dynamically using homegrown applications and its own OpenFlow control switches, Hanselman said. That has been further supplemented by having a dynamic scheduling system that shifts capacity as systems perform more traffic-intensive activities, such as replication.
There are two major players in software-defined infrastructure, according to Hanselman. Nicira's Network Virtualization Platform, or NVP, enables the dynamic creation of a virtual network infrastructure, as well as services that are completely decoupled from the physical network hardware. Big Switch Networks offers what it calls "open software-defined networking (SDN)." Hanselman said both companies aim to use virtual connections across virtualized environments, then extend their reach through tunnels into other virtualized environments.
Other companies have developed, or are developing, capabilities in these areas as well. Brocade, for example, makes it possible to terminate a tunnel that starts in a virtual environment on a physical device.
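The tunnels described here are typically overlay encapsulations such as VXLAN, which wrap a virtual machine's Ethernet frame inside a UDP payload tagged with a 24-bit virtual network identifier, so virtual networks can span physical segments. The following is a minimal sketch of that header handling, assuming the VXLAN format (RFC 7348); the function names are illustrative, not from any vendor's product:

```python
import struct

VXLAN_FLAGS = 0x08  # "valid VNI" flag bit, per RFC 7348


def vxlan_encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame.

    The result is what a tunnel endpoint would carry as the payload
    of a UDP datagram (destination port 4789) to the remote endpoint.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Header layout: flags (1 byte), 3 reserved bytes,
    # 24-bit VNI, 1 reserved byte.
    header = struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame


def vxlan_decapsulate(payload: bytes) -> tuple[int, bytes]:
    """Split a VXLAN payload back into (VNI, inner frame)."""
    flags, _, vni_bytes, _ = struct.unpack("!B3s3sB", payload[:8])
    if not flags & VXLAN_FLAGS:
        raise ValueError("VNI flag not set")
    return int.from_bytes(vni_bytes, "big"), payload[8:]
```

Because the physical network only sees ordinary UDP traffic between tunnel endpoints, the virtual topology can be created, moved or torn down in software without touching switch cabling, which is the decoupling both vendors rely on.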
An SDDC can boost data center efficiency, Hanselman said. "With virtualization, we improved the efficiency of individual servers. [An] SDDC starts with the same situation for the data center," he said. "Where in the past you had to dedicate a server for a database or other application, now you can divide resources as needed. We have moved from an architecture where, because of the network, you had to build pods or tiers; now, [an] SDDC permits tasks to move around," he said.
Furthermore, in the past, data centers were limited in their ability to move applications that needed high performance because of the limits of Fibre Channel connections. With software-defined infrastructure, the storage area network is connected to a network environment that can abstract the connection to wherever the server is used, as needed, using iSCSI or Fibre Channel over IP. "To make this work, especially for storage, you need high performance; what an SDDC does is take advantage of software networking capabilities to … ensure it has the necessary performance," Hanselman said.
Efficiency will be particularly appealing to some, said Nick Lippis, publisher of The Lippis Report, which targets IT and network decision makers. The SDDC has evolved partly because of the pressure on virtualization companies to have more highly integrated stacks, and for those stacks to have automated provisioning attributes, he said. "We have distributed computing with centralized automation and one-person management, but in networking we still have operational bloat," he said. "End users don't want to have to keep adding people as the networks grow."
Up until now, Lippis said, networking was an oligopoly with relatively few players, and in which ease of management was an afterthought. He compared the SDDC to what he termed the "revolution" in home entertainment, when the universal remote started to allow easy control of multiple devices from one point, which simplified both configuration and use. "Once you have everything wired and you have centralized control abstraction, then you can start doing interesting things to control a network," he said.
In the software-defined infrastructure vision, everything is wired once, then network agents can manage devices and protocols. "Hopefully at some point in time we will see applications simply requesting services from the network, but clearly we are not there yet," Lippis said.
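Lippis's idea of applications "requesting services from the network" usually means a northbound API on a central controller: an application states what it needs (reachability, bandwidth, latency), and the controller decides how to program the devices. A minimal sketch of that interaction follows; every name here is invented for illustration, and no real controller's API is implied:

```python
from dataclasses import dataclass


@dataclass
class ServiceRequest:
    """What an application asks of the network, stated as intent."""
    src: str                 # requesting endpoint, e.g. an app VM
    dst: str                 # service it needs to reach
    min_bandwidth_mbps: int  # required throughput
    max_latency_ms: float    # tolerable delay


class NetworkController:
    """Hypothetical controller that accepts intent from applications.

    A real controller would consult its topology and current load,
    then push flow rules to switches; here we only record the intent
    and enforce a crude capacity check to show the decision point.
    """

    def __init__(self, capacity_mbps: int):
        self.capacity_mbps = capacity_mbps
        self.granted: list[ServiceRequest] = []

    def request_service(self, req: ServiceRequest) -> bool:
        used = sum(r.min_bandwidth_mbps for r in self.granted)
        if used + req.min_bandwidth_mbps > self.capacity_mbps:
            return False  # network cannot honor the request
        self.granted.append(req)
        return True


ctrl = NetworkController(capacity_mbps=1000)
ok = ctrl.request_service(ServiceRequest("app-1", "db-1", 400, 5.0))
```

The point of the abstraction is that the application never names switches, VLANs or ports; the controller owns that mapping, which is what makes "wire once, then manage in software" possible.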
Lippis noted that a lot is happening relative to SDDCs in the Open Networking Foundation, which is developing standards for open networking and software-defined networks. He helps host an open-networking user group. "We have support from large companies like Fidelity and JPMorgan Chase & Co. All those firms are involved because they have a problem," he said. "In the IT networking world, there is a ratio of about one engineer to every 50 routers, whereas in the mobile market, companies like Sprint have one engineer managing thousands of endpoints. That is why these companies are making such a big push for [the] SDDC."
Further, Lippis said, "the larger IT buyers are starting to meet with startups in this space; they don't really want the large network vendors there, because they don't believe they have an interest in making this technology happen."
The advantages of an SDDC aren't just hype, said Arun Taneja, analyst at the Taneja Group. Automation allows you to set and achieve Quality of Service targets and treat the whole physical infrastructure as a pool, he explained. "A lot of the physical structure may still look familiar, but with [an] SDDC you will have the ability for applications to find the connectivity they need at the performance level they require, without having armies of people to manage the process," he said. "In the age of the cloud," he added, "there is no way humans can manage the thousands of elements in the infrastructure."
Conceptually, Taneja said, unlike traditional deterministic networks, where humans define the pathways, SDN and the SDDC, like the Internet, rely on heuristic approaches to find optimal paths. "What we have learned about virtualization so far is that solving two parts of the problem -- computing and storage -- just shifts the bottleneck somewhere else, namely to the network," he said.
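Taneja's contrast can be made concrete: in a traditional network an operator configures paths by hand, whereas an SDN controller recomputes them from a live topology graph whenever conditions change. The sketch below uses Dijkstra's shortest-path algorithm over a weighted graph as a stand-in for that computation; the spine-leaf topology and link costs are invented for illustration:

```python
import heapq


def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over a dict-of-dicts adjacency map.

    Edge weights model link cost (e.g. latency or current load),
    so recomputing with fresh weights routes around congestion.
    """
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, weight in graph[node].items():
            nd = d + weight
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk predecessors back from the destination to build the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))


# Invented two-spine, two-leaf fabric; the leaf2-spine2 link is
# weighted higher to represent congestion.
topology = {
    "leaf1":  {"spine1": 1, "spine2": 1},
    "leaf2":  {"spine1": 1, "spine2": 3},
    "spine1": {"leaf1": 1, "leaf2": 1},
    "spine2": {"leaf1": 1, "leaf2": 3},
}
path = shortest_path(topology, "leaf1", "leaf2")  # avoids spine2
```

Because the controller sees the whole graph at once, a cost change on one link is enough to shift traffic fabric-wide, with no human redefining pathways.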
Cautions and advice about software-defined infrastructure
Most discussions about SDDC are missing the business process and policy definition aspect, said Jim Damoulakis, chief technology officer at Southborough, Mass.-based GlassHouse Technologies, a consulting and advisory firm. "The technology breakthroughs are important, but you need to have a plan in place to use the technology in an effective way," he said. "Otherwise you are getting a tool set, but you don't know what you are building. IT is often guilty of overprovisioning -- building just in case, instead of just in time."
Although there are clear advantages to an SDDC, there are complexities and pitfalls as well, especially related to vendor selection, Damoulakis said. "This falls straight in line with the movement toward the private cloud," he said, "but you still have to look at how some of the components have been defined and, in some cases, maybe wait for clarity and a clearer sense of direction."
A starting point for investment decisions is to review your existing technology. A legacy application running in a traditional data center model, for example, might not be a good candidate for an SDDC. However, there is a clear use case for an SDDC with the "higher-volume and standardizable services that IT is regularly called upon to deploy," Damoulakis said. "Those could be better handled in a fast and efficient way through [an] SDDC."
About the author:
Alan R. Earls is a Boston-area freelance writer focused on business and technology.