Part 2: Emerging all-SSD arrays deliver performance boost to data centers
In today's data center, storage technology can easily lag behind other, faster systems. A 35% storage utilization rate -- as measured by the storage consumed, not allocated -- isn't uncommon. IT managers faced with that number will likely conclude that they have a problem with underutilized capacity. And they're right. On a multimillion-dollar array, this can be a million-dollar problem.
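The arithmetic behind that "million-dollar problem" claim is straightforward. A minimal sketch, using an illustrative purchase price (the dollar figure below is an assumption, not from the article):

```python
# Quick arithmetic behind the "million-dollar problem": at 35% utilization,
# the unused share of a multimillion-dollar array's cost is itself measured
# in millions. The purchase price below is hypothetical.

array_cost = 2_000_000   # hypothetical array purchase price, USD
utilization = 0.35       # storage consumed / raw capacity, per the article

wasted_spend = array_cost * (1 - utilization)
print(f"${wasted_spend:,.0f} of capacity spend sits unused")
```

On a $2 million array, 65% idle capacity represents roughly $1.3 million of stranded spend.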
Should these organizations simply pack more data into the space available? Maybe not.
The root cause of this capacity inefficiency may be performance. In the last 10 years, while compute power and network speeds have shot up, hard disk drive (HDD) technology has lagged behind. The advent of the solid-state drive (SSD) has given storage performance a significant boost, reducing that bottleneck when an all-SSD array is deployed alongside HDD arrays (Table 1).
The all-SSD array marketplace
The all-SSD array market is just getting really interesting. Like most new market segments, this one features independent, emerging vendors going toe-to-toe with established vendors where the battle is over best-in-class functionality more than vendor name or reputation. Nimbus Data Systems and SolidFire are emerging vendors, for example, while Hewlett-Packard Co. and EMC Corp. are two of the established vendors competing in this rapidly expanding category.
Nimbus was one of the early evangelists for all-SSD arrays. Its E-class arrays can have up to 500 TB in total capacity. Nimbus claims to deliver 800,000 IOPS (I/O operations per second) in a 2U enclosure with an 80% reduction in power and cooling compared to the same HDD capacity.
SolidFire touts its guaranteed quality of service (QoS) as a key differentiator. In this case, QoS means allocating exactly the right capacity for a specific application, with data striped across all available SSDs in the system. Basically, the company recommends provisioning based on required IOPS, not capacity: given an application's required minimum, maximum and burst IOPS, the allocation is made to meet the minimum service-level agreement (SLA).
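The min/max/burst provisioning model described above can be sketched in a few lines. This is an illustrative admission check, not SolidFire's actual algorithm; the IOPS budget and application figures are assumptions:

```python
# Illustrative sketch of IOPS-based provisioning (not SolidFire's actual
# algorithm): admit a new workload only if the sum of guaranteed minimum
# IOPS across all applications stays within the array's IOPS budget.

ARRAY_IOPS_BUDGET = 250_000  # hypothetical total IOPS the array can sustain

# Each application declares (min, max, burst) IOPS, per the QoS model above.
apps = {
    "oltp-db":   {"min": 50_000, "max": 80_000, "burst": 100_000},
    "analytics": {"min": 20_000, "max": 60_000, "burst": 90_000},
}

def can_admit(apps, new_app_min, budget=ARRAY_IOPS_BUDGET):
    """Admit a new workload only if all minimum SLAs remain satisfiable."""
    committed = sum(a["min"] for a in apps.values())
    return committed + new_app_min <= budget

print(can_admit(apps, new_app_min=150_000))  # True: 70k committed + 150k fits
print(can_admit(apps, new_app_min=200_000))  # False: minimums overcommitted
```

The key design point is that minimum IOPS, not raw capacity, is the scarce resource being reserved; maximum and burst limits only shape how spare headroom is shared.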
Among established vendors, HP is taking a fully integrated approach to its SSD product strategy. That is, HP 3PAR StoreServ 10000 systems can be configured for all-SSD with up to about 100 TB. The 10000 can be further provisioned with HDDs up to a capacity of 2.2 petabytes (PB). The advantage that HP brings is that the all-SSD configuration is simply an extension of the 3PAR product line with all of the associated features, functionality and manageability.
EMC has taken a different route to market with its 2012 acquisition of XtremIO. The XtremIO array architecture was designed for solid-state from the ground up, including the controllers. The company claims that the storage can be deployed in minutes without tuning or striping. XtremIO arrays are currently in limited availability, with general availability scheduled for later in 2013.
Features and functions of all-SSD arrays
Organizations will find that adding all-SSD products to the data center can be done non-disruptively, since deployment follows much the same model as an all-HDD product: the same basic logical unit number (LUN), RAID and other considerations apply. Most vendors will support typical RAID levels for SSD.
Vendors will recommend reserving a certain amount of space for "garbage collection," a process needed after a certain number of write operations. This is because SSD does not overwrite blocks in place; rather, it writes to an available block and later erases the obsolete block for reuse. Each vendor will have different specific recommendations on RAID and garbage collection overhead, but figure 20% to 30% on most systems (though XtremIO claims that its method of garbage collection at the controller level reduces that overhead reserve).
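The practical effect of that 20% to 30% reserve on usable capacity is easy to work out. A back-of-the-envelope sketch, with an illustrative raw capacity:

```python
# Back-of-the-envelope usable-capacity estimate for an all-SSD array,
# assuming (per the article) a 20-30% reserve for RAID and garbage-
# collection overhead. The raw capacity figure is illustrative.

def usable_capacity_tb(raw_tb, overhead_fraction):
    """Raw capacity minus the reserve held back for GC and RAID."""
    return raw_tb * (1 - overhead_fraction)

raw = 30.0  # TB, at the high end of a typical initial deployment
for overhead in (0.20, 0.30):
    print(f"{overhead:.0%} reserve -> {usable_capacity_tb(raw, overhead):.1f} TB usable")
```

In other words, a 30 TB raw purchase delivers roughly 21 to 24 TB of usable capacity, which matters when sizing against an application's actual data footprint.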
Other common features in SSD arrays are ones that are currently "table stakes" in the larger array markets. These include thin provisioning, deduplication and the like.
Management varies with all-SSD products. HP 3PAR is managed by the 3PAR management software suite. Other products will be more system-specific, such as Nimbus's HALO software. HALO is a combination OS and management suite for monitoring and managing the arrays. HALO does not currently interface directly with higher-order management consoles such as Tivoli or HP OpenView, but the company does provide a REST application programming interface (API).
Deploying all-SSD arrays
Most organizations will get started in all-SSD at the point of new application deployment. A typical configuration is 20 to 30 TB, so in the scheme of a data center these are not enormous deployments. Most organizations can expect to have no more than 10% of their total capacity in SSD, as the fact remains that most data does not require flash-level performance.
One of the big dings to SSD technology is that it does predictably wear out. The individual storage cells are good for only so many writes before they are no longer usable. Gradually, the capacity of the drive diminishes to the point that it must be replaced. In the case of consumer-grade SSD, known as multi-level cell (MLC) technology and typically found in PCs, the cell life is only about 10,000 write operations. Enterprise-class SSD, known as single-level cell (SLC) technology, is good for about 100,000 write operations.
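Those per-cell write limits translate into a rough drive lifetime once a workload's write rate is known. A simplified model, treating total writable bytes as capacity times cycle count (wear leveling spreads writes across cells, which is what makes this approximation reasonable; the drive size and write rate below are hypothetical):

```python
# Rough SSD endurance estimate from per-cell write limits (10,000 cycles
# for MLC, 100,000 for SLC, per the article). Simplified model: wear
# leveling spreads writes evenly, so total writable data ~ capacity x cycles.

def drive_lifetime_years(capacity_tb, cycles, daily_writes_tb,
                         write_amplification=1.0):
    """Years until the cell write budget is exhausted, under steady writes."""
    total_writes_tb = capacity_tb * cycles
    return total_writes_tb / (daily_writes_tb * write_amplification * 365)

# Hypothetical 1 TB drive absorbing 5 TB of writes per day:
print(f"MLC: {drive_lifetime_years(1, 10_000, 5):.1f} years")
print(f"SLC: {drive_lifetime_years(1, 100_000, 5):.1f} years")
```

Under these assumptions the MLC drive lasts about 5.5 years and the SLC drive roughly ten times longer, which is why enterprise arrays of this era favored SLC despite its higher cost per gigabyte. Real-world write amplification from garbage collection shortens both figures.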
Application data demands are only increasing, which points to a future where all-SSD is a standard part of the data center architecture.
This was first published in June 2013