Introducing the software-defined data center

The software-defined data center is here to stay, but what does that mean for IT pros?


A few years ago, someone asked me what I thought about the future of storage in the data center. At the time, I quipped that we'd eventually just have giant piles of solid-state memory, and software would differentiate vendors.

Look around the data center today and you see it happening, not only in storage but in other areas as well. Custom application-specific integrated circuits (ASICs) in network switches are being replaced by "merchant silicon," a fancy name for commodity processors from Intel and AMD. The core features of servers are becoming indistinguishable from one vendor to the next. And storage arrays are really just Intel servers themselves, often running Linux or embedded Windows, with a lot of network interfaces and drives.

It's the software, stupid

So what distinguishes one vendor from another? It's software. Software is what implements incredibly fast, low-latency networks on top of commodity processors, as exemplified by companies like Arista Networks. Intelligent software is what enables storage vendors to eschew expensive "enterprise" solid-state disks in favor of inexpensive consumer-grade drives and still achieve high reliability and performance.

Software on management controllers -- which are themselves small PCs bolted inside servers -- is what separates one server vendor from another these days. Putting hardware device functions in software is great for vendors. Commodity hardware not only drives down costs but also makes it easy to update a device.

Hardware has bugs, too, just like software. When most of the functionality of a device is implemented in software, a fix is just a firmware update away. With custom hardware, custom circuitry or custom ASICs, that task isn't as easy, and sometimes not even possible. Commodity hardware also allows a hardware developer to use pre-existing drivers, speeding development cycles and lowering costs. Less complex circuitry also leads to lower power consumption, which in turn leads to less heat and higher reliability. Everyone likes that.

Given all that, I think it's safe to say that software is truly the most interesting part of hardware now. Since commodity hardware saves so much time and money, could we take it a step further and eliminate the remaining custom hardware itself, like that in a network switch or storage controller? Vendors like Dell and HP already do a great job of producing the commodity servers that are essentially what's inside storage arrays and network switches anyhow. They also provide well-known monitoring interfaces and all manner of hardware support. What if the only thing a traditional hardware vendor shipped was a software image? Could we rely on the server vendors, and perhaps a hypervisor on top, to supply the rest?

It turns out that the answer is "Absolutely." We can implement a network switch entirely in software, with nearly every feature of its hardware equivalent. We can implement a firewall, a load balancer or an intrusion-detection system. We can implement distributed storage arrays, calling on time-tested, fast local disks and controllers, high-bandwidth network interfaces and well-known network protocols to replace proprietary parallel storage networks. We can also rely on the availability features of a hypervisor to help us if there is a problem.
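To make the idea concrete, here is a minimal sketch in Python of a stateless packet filter, the kind of device function that once demanded custom silicon. It is purely illustrative; the Packet type and the rule format are my own invention, not any vendor's product.

    # A toy stateless packet filter: a device function reduced to a list of
    # rules and a loop. Fields and rules are illustrative, not a real API.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        dst_port: int

    # Each rule is (predicate, action). First match wins.
    RULES = [
        (lambda p: p.dst_port == 22 and p.src_ip.startswith("10."), "allow"),
        (lambda p: p.dst_port == 443, "allow"),
    ]

    def filter_packet(packet: Packet) -> str:
        for predicate, action in RULES:
            if predicate(packet):
                return action
        return "drop"  # default-deny, like most real firewalls

    print(filter_packet(Packet("10.0.0.5", "192.168.1.10", 22)))   # allow
    print(filter_packet(Packet("8.8.8.8", "192.168.1.10", 3306)))  # drop

A real software firewall adds state tracking and hardware-assisted packet I/O, but the core of it is exactly this: logic, not circuitry.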

Once we let the server vendors do what they do best, we can refocus all the effort we used to spend on hardware design and support on problems like integration. What if, when a server was provisioned, its storage started replicating automatically? What if, when a server was decommissioned, the load balancer and firewalls automatically removed the rules and closed the ports? What if there were industry-standard application programming interfaces, so that automation could occur no matter whose firewall or storage was in use? What if the infrastructure handled all the tedious, error-prone, repetitive work, so we humans could work on the hard and interesting problems instead?
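As a sketch of what that automation could look like, here's a hypothetical decommissioning routine in Python. No industry-standard API like this exists yet; every endpoint, hostname and identifier below is invented to illustrate the shape of the thing, not to document a real product.

    # Hypothetical decommissioning workflow: tear down a server's
    # load-balancer membership, firewall rules and storage in one step.
    # All endpoints and parameters below are invented for illustration.
    import requests

    INFRA_API = "https://infra.example.com/api/v1"  # hypothetical endpoint

    def decommission(server_id: str) -> None:
        # Pull the server out of its load-balancer pools.
        requests.delete(f"{INFRA_API}/loadbalancer/members/{server_id}").raise_for_status()
        # Remove the firewall rules and close the ports opened for it.
        requests.delete(f"{INFRA_API}/firewall/rules", params={"server": server_id}).raise_for_status()
        # Stop replicating its storage and release the volumes.
        requests.post(f"{INFRA_API}/storage/replication/{server_id}/stop").raise_for_status()
        requests.delete(f"{INFRA_API}/storage/volumes", params={"server": server_id}).raise_for_status()

    decommission("web-042")

The point isn't the specific calls; it's that decommissioning becomes one routine the infrastructure runs for you, instead of four tickets to four different teams.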

That's a data center future I look forward to.

ABOUT THE AUTHOR: Bob Plankers is a virtualization and cloud architect at a major Midwestern university and the author of The Lone Sysadmin blog. Write to him at moderninfrastructure@techtarget.com.

NOTE: This article first appeared in the October issue of Modern Infrastructure.

