Data centers implement solid-state storage for better storage performance, but can create a network bottleneck in the process.
A flexible and scalable data center network must be capable of handling the enormous IOPS potential of solid-state storage, whether deployed as disks or as server-based storage acceleration devices, which often means supporting software-defined networking (SDN). This is a persistent problem with data center architecture and design: New technologies improve performance in one place and prompt new bottlenecks elsewhere in the infrastructure.
Solid-state storage and SDN complement each other, boosting apparent application performance while maintaining optimum network performance. SDN separates a switch's control and decision-making capability from its physical traffic handling operation. The control plane typically relocates to a virtual machine application or virtual appliance. IT administrators define and configure traffic flows across the network rather than cobble them together by manually configuring individual switches and other network devices.
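To make the control-plane idea concrete, the sketch below shows an OpenFlow-style flow rule expressed as data. The field names and port numbers are illustrative assumptions, not any specific controller's API; the point is that an administrator defines the flow once, centrally, instead of configuring each switch by hand.

```python
import json

# Illustrative OpenFlow-style flow rule (field names are representative,
# not tied to a particular SDN controller): steer iSCSI storage traffic
# (TCP destination port 3260) onto a dedicated high-bandwidth uplink.
flow_rule = {
    "priority": 100,
    "match": {"eth_type": 0x0800, "ip_proto": 6, "tcp_dst": 3260},
    "actions": [{"type": "OUTPUT", "port": 4}],  # hypothetical storage uplink
}

# A controller would push this rule to every switch in the path;
# here we simply serialize it as a controller might.
print(json.dumps(flow_rule))
```

The same rule can be reused or versioned like any other configuration artifact, which is what makes centrally defined traffic flows easier to audit than per-switch settings.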
Solid-state storage builds network bottlenecks
Solid-state disks (SSDs), PCIe-based solid-state accelerators (SSAs), storage-on-DIMM and other high-performance storage devices offer low latency and high IOPS. Most enterprise workloads -- from simple page swapping to caching to VM load times to snapshot storage -- benefit significantly compared with running on traditional magnetic hard disk drives.
Solid-state storage can, however, create new bottlenecks elsewhere in the enterprise network. For example, an SSD-based storage array running Fibre Channel over Ethernet easily floods multiple Ethernet network links, moving the performance bottleneck rather than dissolving it. Network utilization also becomes more erratic when SSAs on the server side act as local caches or remote storage devices, causing periodic network traffic bursts when cache misses occur.
Software-defined networks respond programmatically to changes in workload transmission performance and traffic demands, a noteworthy benefit over manually configured network infrastructures.
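The toy sketch below (hypothetical link names and threshold, not a real controller API) illustrates the kind of programmatic reaction meant here: when a cache-miss burst pushes a storage link's utilization over a threshold, the controller steers new flows to a less-loaded path.

```python
# Current utilization of each candidate link, as a fraction of capacity.
# Values and link names are made up for illustration.
LINKS = {"link-a": 0.92, "link-b": 0.35}
BURST_THRESHOLD = 0.85  # assumed policy: avoid links above 85% utilization


def pick_path(links, threshold):
    """Return the least-utilized link, preferring links under the threshold.

    If every link is over the threshold, fall back to the least-loaded one
    rather than dropping traffic.
    """
    under = {name: util for name, util in links.items() if util < threshold}
    candidates = under or links
    return min(candidates, key=candidates.get)


print(pick_path(LINKS, BURST_THRESHOLD))  # → link-b
```

A manually configured network would need an administrator to notice the burst and reconfigure switches; an SDN controller can run logic like this continuously.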
In-network deduplication and SDN performance
Data deduplication is a form of compression that reduces storage demands. When combined with other technologies like thin provisioning, data deduplication reduces storage use and costs.
Data can be deduplicated at the source (the workload or server side) or at the target (the destination or storage subsystem). If data is deduplicated at the source, less data crosses the network to or from storage, conserving bandwidth. For example, a file deduplicated 10-to-1 at the server uses only one tenth the network traffic to move the same effective amount of data to storage. If deduplication occurs at the destination only, there is no network bottleneck improvement.
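The bandwidth arithmetic is simple enough to express directly. The helper below is a minimal sketch of the source-versus-target distinction: source-side deduplication sends only unique data across the wire, while target-side deduplication sends everything and reduces only what is stored.

```python
def bytes_on_wire(logical_bytes, dedup_ratio, dedup_at_source):
    """Bytes that actually cross the network for a given logical transfer.

    Source-side dedup transmits only the unique data; target-side dedup
    transmits the full payload and saves space only at the array.
    """
    if dedup_at_source:
        return logical_bytes / dedup_ratio
    return logical_bytes


GB = 10**9
# A 100 GB transfer deduplicated 10-to-1:
print(bytes_on_wire(100 * GB, 10, dedup_at_source=True))   # 10 GB on the wire
print(bytes_on_wire(100 * GB, 10, dedup_at_source=False))  # full 100 GB
```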
Recent experimentation suggests that deduplication can also be performed on data in flight across the network by adding the capability to the SDN control layer. Since the SDN control layer is usually deployed as a virtual machine or appliance, adding a deduplication service is not prohibitive. Although there are no commercial products yet to handle in-network deduplication via SDN, the principle has been demonstrated.
SSD storage is a natural fit for in-network deduplication tasks because its high IOPS and low latency suit storing and indexing the hash fingerprints of deduplicated storage blocks.
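The fingerprint index mentioned above can be sketched in a few lines. This is a minimal block-level deduplication example (fixed block size and SHA-256 fingerprints are assumptions for illustration): each block is hashed, and only blocks with unseen fingerprints are stored. The index is the hot, randomly accessed structure that benefits most from solid-state latency and IOPS.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this sketch
index = {}         # fingerprint -> stored block (the dedup index)


def write_block(data):
    """Store a block only if its fingerprint has not been seen before.

    Returns the fingerprint and whether the block was newly stored.
    Every write requires an index lookup, which is why the index is
    the latency-sensitive piece of a deduplication engine.
    """
    fp = hashlib.sha256(data).hexdigest()
    is_new = fp not in index
    if is_new:
        index[fp] = data  # store the unique block once
    return fp, is_new


# Writing the same block twice stores it only once.
block = b"x" * BLOCK_SIZE
print(write_block(block)[1], write_block(block)[1], len(index))  # → True False 1
```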