To keep today's high-core-count servers running at full speed, admins must vigilantly attend to storage efficiency.
Generally, the closer storage is to the server that's using it, the faster that server runs. That's just the beginning when it comes to optimizing servers and storage capacity. Even with networked storage, keeping storage within the rack reduces the traffic load on the backbone networks that span the data center.
More than one answer exists for local storage. We could put storage drives in servers (direct-attached storage, known as DAS), but if a virtualized server goes down, the "state" stored on that server is no longer available to its replacement, an unacceptable outage for business data and workloads.
Modern data center storage options range from the simple to the ultrasophisticated.
Local server storage in the rack
You can simply fill racks with a mixture of servers and storage nodes, so that all storage traffic goes through the top-of-rack switch and stays off the network backbone. This means just a single low-latency switch hop between the server and its networked data. All the needed redundancy is in place, exclusive of remote replication. It's inexpensive because there's no added hardware.
A major drawback of the shared-rack approach is that hard disk drives (HDDs) need a lower inlet air temperature than servers do to function properly without premature failures. Servers without HDDs can operate on ambient air as high as 70 degrees Celsius (158 degrees Fahrenheit), so cooling them with 45 degrees Celsius (113 degrees Fahrenheit) fresh air is a cheap, zero-chill setup that saves a lot of power. HDDs generally top out around 60 degrees Celsius, and that 10 degrees Celsius difference means different air management when HDDs are present.
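The temperature limits above are simple to check yourself. A minimal sketch (the limits are the illustrative figures cited in this article, not vendor specifications):

```python
def c_to_f(celsius):
    """Convert Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

# Inlet-air figures cited above (illustrative, not vendor specs)
server_max_c = 70   # diskless servers tolerate hotter ambient air
hdd_max_c = 60      # HDDs need cooler inlet air
fresh_air_c = 45    # "free cooling" supply-air temperature

print(c_to_f(server_max_c))      # 158.0
print(c_to_f(fresh_air_c))       # 113.0
print(server_max_c - hdd_max_c)  # the 10 C gap that forces separate air management
```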
This temperature requirement complicates the design for mixed racks. When data centers convert to all solid-state drives (SSDs) in a few years' time, the problem likely will disappear, since SSDs operate at higher temperatures than HDDs do.
Converged or hyper-converged infrastructures are a way to get all the gear sourced and integrated by a single vendor. It's essentially the same as the mixed-rack idea, with the same cooling issue. You'll probably pay a premium for the integration, and for support and spares in the future. When data centers consider converged infrastructure, they must balance convenience and integration against the realization that it is easy to trade one vendor lock-in for another.
Networked storage on a VSAN
We can go back to the original DAS approach and add fast networking to the equation. Varieties of virtual storage area networks (VSANs) exist, but most function around a remote direct memory access (RDMA) or fast Ethernet deployment. Local storage services most of the server's needs, while the network addresses replicas to maintain data availability in the case of a server failure.
With current RDMA links running at 56 gigabits per second over InfiniBand (and, though nonstandard, over Ethernet), the VSAN architecture delivers good performance. It has some unique drawbacks, however. First, servers must have storage. This may actually fit some cloud models, where local instance storage supports VM performance, but it forces the IT infrastructure to write a replica across the network to sustain availability. Writes are still slow; only reads speed up.
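The local-read, replicated-write pattern described above can be sketched as a toy in-memory model (hypothetical classes for illustration, not any real VSAN API):

```python
class VsanNode:
    """Toy model of a VSAN node: reads are served locally, while a
    write is only durable once a replica lands on a peer node."""

    def __init__(self, name):
        self.name = name
        self.store = {}   # local drive, modeled as a dict
        self.peer = None  # replication target on another server

    def write(self, key, value):
        self.store[key] = value           # fast: local drive
        if self.peer is not None:
            self.peer.store[key] = value  # slow: crosses the network

    def read(self, key):
        return self.store[key]            # always local, hence fast

a, b = VsanNode("a"), VsanNode("b")
a.peer = b
a.write("vm-image", "blocks...")
# If node a fails, its replacement can still read the replica on b:
print(b.read("vm-image"))  # blocks...
```

This is why only reads speed up: every write pays the network round trip to the replica regardless of how fast the local drive is.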
Speeding up reads could result in major gains, but one could add DRAM as a forward cache to achieve much of the same improvement without the complexity. There is a significant cost penalty when servers carry storage. There is also the issue of drive pricing, as server vendors charge more for storage components than distribution channels do. Without drives, servers are much more compact and run cooler.
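The DRAM forward cache mentioned above amounts to a read cache in front of a slower backing store. A minimal LRU sketch (illustrative only; a production cache also handles writes, invalidation and concurrency):

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache in DRAM fronting a slow backing store."""

    def __init__(self, backing, capacity=2):
        self.backing = backing      # the slow storage tier
        self.capacity = capacity    # how many entries fit in DRAM
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)     # mark as recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]           # slow path: go to disk
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

disk = {"a": 1, "b": 2, "c": 3}
cache = ReadCache(disk)
cache.read("a"); cache.read("a"); cache.read("b")
print(cache.hits, cache.misses)  # 1 2
```

Repeat reads of hot data are served from DRAM, which is much of the read-side benefit a VSAN promises, without putting drives in every server.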
VSANs may also suffer because servers and storage grow at different rates. Vendors tout adding nodes for uniform growth in storage and server power, but this often is not what a data center needs. It's generally better to decouple the capacity of servers and storage.
VSANs grew out of a desire to reduce storage costs relative to big storage arrays, where bulk SATA drives that cost $30 to manufacture end up at exorbitant prices for the IT shop. The need for in-server VSAN type architectures should fade away in favor of extra DRAM, alongside the trend to a common SSD replication model to simplify the storage architecture in the data center.
The ultrasophisticated approach to local server storage isn't widely available -- yet. Software-defined storage (SDS) abstracts data services from the storage nodes and runs them on virtual instances. The storage-specific hardware loses differentiation from one vendor to another, resulting in $30 per terabyte hard drives, for example.
The commoditized SDS approach, while a much lower-cost solution to the server-storage problem, isn't really available yet. SDS is still shaping up, but a concept where everything is a server except the drives themselves may be the end result, using iSCSI or another drive interface, such as non-volatile memory express over fabrics. With the transition to SSD well underway by the time we realize an SDS reality, the thermal issues of mixed servers and storage drives will likely be academic.