Guide to software-defined everything in the data center
A comprehensive collection of articles, videos and more, hand-picked by our editors
Adoption of software-defined approaches in data center equipment and software has been embryonic and confusing, despite strong vendor rhetoric.
What does a software-defined data center offer you as an IT practitioner? And what are the business values?
The purpose of software-defined computing is to remove intelligence from the hardware and abstract it to a far more standardized software layer. Not only does this make a heterogeneous data center work more effectively, it also allows the IT team to introduce new functionality more easily.
In a cloud architecture or highly virtualized data center, an application shares resources across a broad set of IT equipment. For example, if a data center uses a heterogeneous mix of network switches from Cisco and Juniper Networks Inc., the major differences in how Cisco's IOS and Juniper's JunOS work will create difficulties in a virtualized network. Maintaining fidelity of function across different equipment types becomes important.
The barriers to software-defined computing
While a complete move to software-defined networks -- with all intelligence abstracted to a software layer under an OpenFlow standard -- makes a great deal of sense to users, it kills Cisco's and other vendors' business model. A $100 dumb switch would be just as good as a $100,000 intelligent switch; all it has to do is offer the capability to shift packets while all the management and control occurs outside of the hardware. The same applies to storage: EMC Corp. is unlikely to embrace software-defined storage to the point where its hardware is so commoditized you may as well just buy any storage array from anywhere.
The second issue is more pertinent: A completely software-defined realm just doesn't work.
Essentially, software-defined constructs should leave only the basic drudge work of moving bits around to the hardware. All the intelligent action should happen at an abstracted level. Therefore, the bits, within their packets, have to move from the physical plane up to the software-abstraction level for actions to be taken; then they have to be sent back to the physical plane to move forward. If a specific packet can be identified and the originating and receiving ends understand all of this, then the system can ship the packet through without the middle hops needing a complete understanding of what is happening. However, this approach has its problems.
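The mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a real switch or controller API: `FlowTable`, `controller_decide` and `handle_packet` are invented names standing in for the hardware match-action table, the software-abstraction layer and the packet path between them.

```python
# Hypothetical sketch of the "punt to software, then install a rule" pattern.
# All names here are illustrative, not part of any real SDN product.

class FlowTable:
    """The switch's hardware-level match-action table: dumb, fast lookups only."""
    def __init__(self):
        self.rules = {}  # (src, dst) -> action

    def lookup(self, packet):
        return self.rules.get((packet["src"], packet["dst"]))

    def install(self, match, action):
        self.rules[match] = action


def controller_decide(packet):
    """The software layer: all the intelligence (policy, routing) lives here."""
    return "forward:port2" if packet["dst"].startswith("10.") else "drop"


def handle_packet(table, packet):
    action = table.lookup(packet)
    if action is None:
        # Flow-table miss: punt the packet up to the software layer (this is
        # the round trip that adds latency), then cache the decision so later
        # packets in the same flow stay on the hardware fast path.
        action = controller_decide(packet)
        table.install((packet["src"], packet["dst"]), action)
    return action


table = FlowTable()
pkt = {"src": "192.168.1.5", "dst": "10.0.0.9"}
handle_packet(table, pkt)  # first packet of the flow: decided in software
handle_packet(table, pkt)  # same flow again: served from the hardware table
```

Only the first packet of each flow pays the trip to the software layer; every subsequent packet matches the installed rule in hardware, which is the trust shortcut the following paragraphs examine.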
Consider IT operations at a large service provider that handles a high volume of traffic. If the company uses standard software-defined computing practices, significant traffic will move from the physical to the software layer, then back again once actions are taken on it. This introduces latency into the system. It also requires the IT team to engineer the software layer to handle peaks of activity.
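A rough back-of-envelope calculation shows why this matters at scale. All of the figures below are assumed, hypothetical values chosen only to illustrate the shape of the problem, not measurements from any real network.

```python
# Back-of-envelope estimate (hypothetical figures) of the latency cost of
# punting traffic to a software layer, per the service-provider scenario above.

hw_forward_us = 5          # assumed hardware fast-path latency per packet, in µs
controller_rtt_us = 2000   # assumed round trip up to the software layer, in µs
new_flows_per_sec = 50_000 # assumed rate of new flows at a large provider
packets_per_flow = 100     # assumed average packets per flow

# Only the first packet of each flow is punted; the rest hit the fast path.
punted_fraction = 1 / packets_per_flow
avg_latency_us = (punted_fraction * controller_rtt_us
                  + (1 - punted_fraction) * hw_forward_us)

print(f"average per-packet latency: {avg_latency_us:.2f} µs")
print(f"software layer must absorb: {new_flows_per_sec} decisions/sec at peak")
```

Even when only one packet in a hundred is punted, the software round trip dominates the average, and the software layer still has to be engineered for tens of thousands of decisions per second at peak.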
Abstracting all intelligence into the software creates another problem. Think of the packet that was identified and tagged as being something specific at one end of the process. This is not too different from Multiprotocol Label Switching (MPLS), but it requires either that packets move up to the software layer again, adding latency, or that the degree of trust within the network be high enough to allow unknown packets to pass without being checked. If the transmitting end is a bad network citizen -- part of a botnet or other malware-originating network -- or the transmitter is hijacked by a man in the middle, rogue packets could be injected into the network with relative ease.
The service provider community is not happy with software-defined networking, and is looking at its own means of minimizing any such issues through network functions virtualization.
Software-defined everything -- or some things
The future of the data center is likely a hybrid system where certain functions, particularly around management, are abstracted into the software layer. However, much of the control functions around the packets of data within the network, on servers and around storage will still occur at the box itself.
It is unlikely that Cisco, Juniper and other network companies will all get together and create completely standardized switch operating systems. Neither will storage companies like EMC, NetApp Inc. and Hitachi Data Systems. And although there are high levels of standardization at the server level, IBM, Dell Inc., Hewlett-Packard Co. and others still want to differentiate their offerings by building intelligence in at the silicon level.
Software-defined everything is unlikely to reinvent the data center, much to some theorists' chagrin. Certain functions will move to the software level, facilitating updates and interaction between heterogeneous equipment. But with real intelligence remaining at the silicon level, the chances are low for a completely heterogeneous data center with high fidelity of function end to end.
For the data center manager, expect more "business as usual" than "brave new world"; some functions will be made easier, some will remain difficult.
Clive Longbottom is the co-founder and service director of IT research and analysis firm Quocirca, based in the U.K. Longbottom has more than 15 years of experience in the field. With a background in chemical engineering, he's worked on automation, control of hazardous substances, document management and knowledge management projects.