
SDS products present new performance, budgeting hurdles

Software-defined storage presents data center managers with new options, but it isn't free of challenges, particularly around hardware limitations and budgeting cycles.

A universal definition for software-defined storage doesn't exist yet; vendors often shape the definition to fit their own offerings. But experts agree that software-defined storage emphasizes storage-related services rather than storage hardware, and the use of programming and policies to automate data center management. The technology's benefits range from flexibility to cost, but it doesn't come without challenges -- especially since it's still new.

With software-defined storage (SDS), IT teams can provision and manage storage through software and APIs, a more agile approach than making changes manually. For businesses where change is inherently slow and infrequent, however, that agility adds little value. Financial institutions and government agencies, for example, tend to run relatively static, tightly controlled environments. The ability to provision storage in seconds with SDS products loses its value when change approval takes three weeks.
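The kind of policy-driven, API-style provisioning described above can be sketched as follows. This is a hypothetical illustration, not any real product's API; the StoragePool class, policy names and replica counts are all invented for the example.

```python
# Hypothetical sketch of policy-driven SDS provisioning.
# The class, policies and numbers are illustrative, not a real product API.

class StoragePool:
    """Tracks free capacity and provisions volumes according to a named policy."""

    POLICIES = {
        "gold": {"replicas": 3, "tier": "flash"},
        "bronze": {"replicas": 2, "tier": "disk"},
    }

    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.volumes = {}

    def provision(self, name, size_gb, policy="bronze"):
        """Create a volume in one call -- no manual zoning or LUN masking."""
        spec = self.POLICIES[policy]
        raw = size_gb * spec["replicas"]  # replication multiplies raw consumption
        if raw > self.free_gb:
            raise ValueError("insufficient capacity")
        self.free_gb -= raw
        self.volumes[name] = {"size_gb": size_gb, **spec}
        return self.volumes[name]

pool = StoragePool(capacity_gb=1000)
vol = pool.provision("db01", size_gb=100, policy="gold")
```

The point of the sketch is the contrast with manual workflows: the provisioning decision (tier, replica count) lives in a policy, so creating a volume is a single call rather than a change ticket.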

SDS products can suffer from hardware limitations


Many consider the biggest value of SDS to be its software-defined nature. However, software must run on hardware, and the limitations of that hardware become limitations of the SDS. For example, most SDS products run on x86 servers with multicore CPUs. To maximize performance, the SDS product must be efficiently multithreaded -- an inherently difficult programming problem. When storage functions are tied to a single CPU core, performance is limited by that core, and faster cores are of limited help because clock speeds have been largely flat for years. Driving data at full speed from a 10 Gigabit Ethernet network through a CPU to a Non-Volatile Memory Express (NVMe) flash device requires careful tuning because of the throughput these components can sustain. They are also expensive, and that expense is built into the final cost of SDS products.
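Some back-of-the-envelope arithmetic shows why a single core becomes the bottleneck. The line rate below follows from the 10 GbE figure in the text; the 4 KB I/O size and the microseconds of CPU work per I/O are assumed round numbers, not measurements of any product.

```python
# Rough throughput math for the single-core bottleneck described above.
# I/O size and per-I/O CPU cost are assumptions chosen for illustration.

GBE10_BYTES_PER_SEC = 10e9 / 8   # 10 Gb/s line rate ~= 1.25 GB/s
IO_SIZE_BYTES = 4096             # a typical small-block I/O

ios_per_sec = GBE10_BYTES_PER_SEC / IO_SIZE_BYTES  # ~305,000 IOPS

# If storage functions are tied to one core, that core handles every I/O.
# Even a few microseconds of CPU work per I/O nearly saturates it:
cpu_us_per_io = 3
core_busy_fraction = ios_per_sec * cpu_us_per_io / 1e6

print(f"{ios_per_sec:,.0f} IOPS -> one core {core_busy_fraction:.0%} busy")
```

At roughly 305,000 4 KB I/Os per second, three microseconds of CPU time per I/O leaves the core over 90% busy, which is why efficient multithreading -- spreading that work across cores -- matters so much.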

In one category of SDS, a cluster of commodity x86 servers uses smart software to provide a shared storage array. Together, the group of servers provides enough capacity and performance for a workload, usually a group of virtualization hosts. Sometimes, the storage servers are VMs running on the hypervisor nodes -- a model known as hyperconverged infrastructure (HCI).

One of the challenges is that these scale-out storage systems are designed to be consumed by a scale-out workload. A cluster of 10 storage nodes might serve a cluster of 30 hypervisor nodes. In effect, the cluster has 10 small pools of resources, each handling an average of three hypervisor nodes. These scale-out SDS products may not be able to deliver extreme performance to a single, high-demand workload. A single, physical database server may need more performance than one node can deliver. It is difficult to make a scale-out storage system aggregate all of its nodes into a single performance pool for a single workload.
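The arithmetic behind that limitation is simple to sketch. The node counts match the example above; the per-node IOPS figure is a made-up round number, since the point is the ratio, not the absolute value.

```python
# Why a scale-out pool caps single-workload performance.
# Per-node IOPS is a hypothetical round number for illustration.

NODES = 10
PER_NODE_IOPS = 50_000
HYPERVISORS = 30

aggregate_iops = NODES * PER_NODE_IOPS           # whole-cluster performance

# A scale-out workload spreads across all nodes, so each hypervisor
# draws a modest share of the aggregate:
per_hypervisor_iops = aggregate_iops / HYPERVISORS

# But a single workload whose data lives on one node sees only that
# node's performance, not the cluster's aggregate:
single_workload_ceiling = PER_NODE_IOPS
```

The cluster looks like a 500,000-IOPS array on paper, yet one demanding database server is capped at a single node's 50,000 IOPS unless the product can stripe that workload across nodes -- which, as the text notes, is difficult.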

Budgeting challenges with SDS products

Another challenge with scale-out storage systems, including HCI, is that they deliver the greatest value if you buy in small increments. Many scale-out storage vendors will suggest that you buy only the storage capacity and performance that you need for the year. As your requirements grow, they'll sell you a few more nodes. This just-in-time delivery means you get immediate, maximum value from the money you spend. Before you buy the next few nodes, a new CPU or faster solid-state drive might become available. The price/performance ratio improves so you get even better value on the next purchase.

However, most IT budget cycles are based on bulk replacement every few years, not incremental purchases spread over years. Organizations end up buying enough scale-out SDS for five years in a single block, which reduces the value per dollar spent and may make scale-out storage less financially viable than a monolithic array.
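A small cost model makes the bulk-versus-incremental trade-off concrete. The node price, the annual price/performance improvement and the growth rate are all hypothetical figures chosen only to illustrate the mechanism the two paragraphs above describe.

```python
# Bulk purchase vs. incremental purchase when node price/performance
# improves each year. All figures are hypothetical.

NODE_PRICE_YEAR0 = 20_000   # assumed cost of one storage node today
ANNUAL_PRICE_DROP = 0.10    # assumed yearly price/performance improvement
NODES_PER_YEAR = 4          # assumed annual growth in nodes needed
YEARS = 5

# Bulk: buy all five years of capacity on day one at today's prices.
bulk_cost = NODE_PRICE_YEAR0 * NODES_PER_YEAR * YEARS

# Incremental: buy each year's nodes just in time, at that year's price.
incremental_cost = sum(
    NODE_PRICE_YEAR0 * (1 - ANNUAL_PRICE_DROP) ** year * NODES_PER_YEAR
    for year in range(YEARS)
)
```

Under these assumptions the incremental buyer spends about 18% less for the same node count, and also defers most of the spend -- which is exactly the value a bulk-replacement budget cycle forfeits.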

Because SDS is a recent phenomenon, SDS products are either newly developed or bolted onto existing products. Remember that newly developed products do not necessarily have the qualities of more mature ones, such as proven operational processes, stable feature sets and reliability.

Next Steps

How to implement SDS technology

Vendors offer choice when deploying SDS

Compare and contrast SDS vendors


Join the conversation




What have your biggest challenges been implementing SDS in your data center?
Alastair, I do not mean to nag, but most of the points in the article are not true in practice. You're either looking at weak implementations of the SDS architecture or misunderstanding where it fits and will increasingly fit.

As constructive discussion is welcome, I'll address a couple of points: 
- Hardware limitations: any storage solution is ultimately limited by the hardware. However, a good SDS solution is rarely CPU bound; it usually hits bottlenecks in the network. And of course it's a matter of proper system design and use-case match -- you'll hit bottlenecks somewhere. 
- Budgeting and cost: SDS is several times less expensive than comparable traditional SAN alternatives, depending on which segment and product you compare. So even if you buy all the storage you need on day one and amortize it over a five-year period, you'll still come out several times better off. On top of that, SDS is not only about storage -- it reduces total cost, as it heavily optimizes compute and networking as well.