

HCI storage advancements require network technology upgrades

Flash storage options make HCI scalability easier, but they also require newer network hardware. Data center admins are looking at NVMe-based fabrics as a possible fix.

When flash storage technologies such as 3D NAND are combined with new types of non-volatile memory, admins can scale hyper-converged infrastructure cluster capacity, improve performance and lower capital costs.

Flash technology supports the creation of large-capacity solid-state drives (SSDs), which replace spinning hard disk drives in many HCI storage setups.

Large-capacity SSDs aren't built for speed; they're built to store and delete data in bulk. Common use cases include media production, big data processing and storage of remotely collected data.

3D NAND flash storage sacrifices some performance per gigabyte in favor of significantly larger capacities, and it often acts as a read cache within HCI setups. Stacking memory cells along the vertical axis yields high storage densities at a lower cost per gigabyte, while still delivering lower latency and higher write performance than spinning disks.
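To make the read-cache role concrete, here is a minimal Python sketch of a least-recently-used read-through cache, assuming a fast flash tier fronting a slower backing store. The block IDs, capacity and backing store are illustrative, not any vendor's implementation.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal read-through cache: hot blocks are served from a fast
    tier (e.g., 3D NAND flash); misses fall back to slower bulk storage."""

    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.backing = backing_store          # dict-like slow tier
        self.cache = OrderedDict()            # LRU order: oldest first

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark as recently used
            return self.cache[block_id]
        data = self.backing[block_id]         # slow-tier read on a miss
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

# Usage: a two-block cache in front of a four-block backing store.
store = {n: f"block-{n}" for n in range(4)}
cache = ReadCache(capacity_blocks=2, backing_store=store)
cache.read(0); cache.read(1); cache.read(0)   # block 0 is now most recent
cache.read(2)                                  # miss; evicts block 1
```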

There are some flaws with 3D NAND, however. Along with a high manufacturing cost, it requires more error correction and wear leveling, has potentially lower data retention rates and performs garbage collection more slowly.

High-speed storage requires new networking hardware

HCI cannot independently scale compute and storage components. Compute-only and storage-only nodes can solve this problem, but they can also lead to more data transfer between nodes, taxing the storage networking components.

This increase in traffic means that HCI storage networking components are likely to become processing bottlenecks without the appropriate hardware upgrades.


Software-defined storage and HCI already share data across cluster nodes, but one proposed method pools dense storage nodes so they can span HCI storage clusters using software-defined data center applications, virtual LANs and high-speed networking hardware. This requires admins to evaluate their current hardware, because legacy equipment is not built for these network demands.

The additional storage traffic on the network, and the speed at which non-volatile memory can transfer data, will likely outpace standard networking offerings. As traffic grows, the network saturates, which results in slower data transfer and decreased reliability. Most HCI storage configurations use TCP/IP to connect to networks, but older networks aren't equipped to handle the data rates of high-capacity SSDs, such as 3D NAND drives.
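A back-of-the-envelope calculation shows why. Assuming roughly 3 GB/s of sustained reads per NVMe SSD (a typical figure for PCIe 3.0 x4 drives; the drive count and link speeds below are illustrative), a single storage-heavy node can swamp multiple 10 Gigabit Ethernet links:

```python
# Rough bandwidth check: how many Ethernet links does one node's
# flash storage need? All figures here are illustrative assumptions.
ssd_throughput_gbps = 3 * 8          # 3 GB/s per drive ~= 24 Gbit/s
drives_per_node = 4
node_storage_gbps = ssd_throughput_gbps * drives_per_node  # 96 Gbit/s

for link_gbps in (10, 25, 100):      # common Ethernet speeds
    links_needed = -(-node_storage_gbps // link_gbps)  # ceiling division
    print(f"{link_gbps} GbE: {links_needed} link(s) for {node_storage_gbps} Gbit/s")
```

At 10 GbE, ten links per node would be needed just for the storage traffic, which is why faster fabrics come into play.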

To overcome these network limitations, HCI setups need a new high-speed network specification, such as NVMe over Fabrics (NVMe-oF). This approach lets admins connect to storage over multiple transports, such as Fibre Channel, Ethernet and InfiniBand.
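On Linux, the nvme-cli utility handles the discovery and connection steps. The sketch below wraps those commands in Python; the transport, target address and subsystem NQN are hypothetical placeholders, and the commands require root privileges and the nvme-cli package.

```python
import subprocess

# Hypothetical NVMe-oF target details -- replace with your fabric's
# transport (tcp, rdma or fc), address and subsystem NQN.
TRANSPORT = "tcp"
TARGET_ADDR = "192.0.2.10"
TARGET_PORT = "4420"                  # IANA-registered NVMe-oF port
SUBSYS_NQN = "nqn.2014-08.org.example:storage-pool-1"

# Discover the subsystems the target exposes.
subprocess.run(["nvme", "discover", "-t", TRANSPORT,
                "-a", TARGET_ADDR, "-s", TARGET_PORT], check=True)

# Attach the remote namespace; it then appears as a local /dev/nvmeXnY
# block device that HCI software can consume like a local drive.
subprocess.run(["nvme", "connect", "-t", TRANSPORT, "-n", SUBSYS_NQN,
                "-a", TARGET_ADDR, "-s", TARGET_PORT], check=True)
```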

HCI needs NVMe's high throughput to move into workload-intensive sectors. Admins can achieve more scalability with NVMe because the protocol does not need a centralized controller to transfer data, supports far more input/output queues and was designed for flash architecture. With NVMe-oF, admins hope to gain these benefits within HCI.
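The queue difference alone illustrates the parallelism gap. Per the AHCI and NVMe specifications, AHCI offers a single queue of 32 commands, while NVMe allows up to 65,535 I/O queues of up to 65,536 commands each:

```python
# Queue capacity comparison, using the limits from the AHCI and NVMe
# specifications.
ahci_slots = 1 * 32                  # one queue, 32 outstanding commands
nvme_slots = 65_535 * 65_536         # up to 65,535 queues x 65,536 commands

print(f"AHCI outstanding commands: {ahci_slots}")
print(f"NVMe outstanding commands: {nvme_slots:,}")
print(f"Ratio: ~{nvme_slots // ahci_slots:,}x more parallelism")
```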

Combining HCI and NVMe isn't new, but more vendors are working to develop products for HCI and NVMe-oF integration. Admins who want to implement NVMe-oF must consider whether they need third-party software to optimize write endurance and data processes. They must also evaluate how NVMe's reputation for poor shared storage performance will affect overall HCI processing speeds when placed in a fabric.


Join the conversation


How do you address your hyper-converged storage needs and expansion?
I think one of the best ways is to use an external accelerated bridge that connects SAS SSDs to many Ethernet- or Fibre Channel-connected nodes. Using off-the-shelf JBOD storage with SSDs installed, connected to this accelerated bridge, you can do today what NVMe promises in the future!

Take multiple HCI nodes, each with a high-speed Ethernet NIC or Fibre Channel HBA inside, connect them to the bridge through a fabric or directly, and the bridge connects through to mass JBOD storage (note that JBOD is cheap and doesn't carry the enterprise storage cost or feature set, which is taken care of by the HCI software). The bridge acts as a storage controller that assigns raw drives to individual nodes, where they are seen as local drives; in this case, you are using Ethernet or Fibre Channel as a connection medium, not a SAN. The added benefit here is that the drives can be reassigned to a new server in the event of a node failure, so you have no downtime or risk while the new node is brought up.

Now the accelerated bridge gives you the ability to scale storage efficiently and inexpensively without buying new storage nodes with their own CPU, memory, storage and licensing.

And now blade servers (which have little onboard storage) can effectively be used as mass compute for large-scale HCI node installs with JBOD/JBOF storage.
