
E-Handbook: The latest on emerging memory technology


Future data center systems await memory innovation

The search is on to replace DRAM in the future data center, one of many evolutions to come for data center hardware, one industry expert says.

Software steals the show when it comes to tech innovation today, overshadowing improvements to hardware.

But get ready for the future data center: Hardware transformation is coming to dynamic random access memory, or DRAM, a data center staple for more than 20 years. Looking ahead, several candidate technologies promise greater efficiency, persistence and lower cost.

To address some of these innovations, SearchDataCenter sat down with Danny Cobb, vice president for global technology strategy at Dell Technologies. Cobb has witnessed a lot of change through the years -- in his current role; at EMC, where he served as CTO; and as a longtime technologist at Digital Equipment Corp. Cobb outlined the infrastructure technologies that will vie for IT pros' attention in future data center plans.

You have spoken publicly about the single-level cell to multi-level cell memory evolution in the data center. What technologies will be essential to the future data center?


Danny Cobb: There is this notion of using artificial intelligence (AI) and machine learning techniques to optimize the infrastructure in real time. We are actively involved in work that thinks of these new compute models -- graphics processing units, tensor processing units, field-programmable gate arrays (FPGAs), etc. -- fundamentally as a service available on the fabric. You use machine learning and AI techniques to schedule workloads against the available resources in your data center. Three or four years ago, every single workload ran on row upon row of homogeneous virtualized x86 servers. That's the homogeneous computing world.

This new world is heterogeneous computing. It is offload engines, it is accelerated AI, FPGAs being dynamically programmed in the data center. The infrastructure itself has to take on more knowledge, and we see the progression of that style of infrastructure and that style of computing and workloads in our platforms as they evolve.
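The scheduling idea Cobb describes -- matching workloads to heterogeneous resources such as GPUs and FPGAs exposed as services on the fabric -- can be sketched as a simple greedy assignment. Everything here (the workload fields, node names and scoring rule) is hypothetical illustration, not any vendor's API; a production scheduler would layer machine learning over observed utilization rather than a static score.

```python
# Illustrative sketch only: greedy placement of workloads onto
# heterogeneous resources. All names and fields are hypothetical.

def schedule(workloads, resources):
    """Assign each workload (highest priority first) to the free
    resource that offers its preferred accelerator, breaking ties
    by picking the least-loaded node."""
    free = list(resources)
    plan = {}
    for wl in sorted(workloads, key=lambda w: -w["priority"]):
        best = max(
            free,
            # Prefer nodes offering the wanted accelerator; among
            # those, prefer lower current load (tuple comparison).
            key=lambda r: (wl["wants"] in r["offers"], -r["load"]),
            default=None,
        )
        if best is None:
            break  # no capacity left
        plan[wl["name"]] = best["name"]
        free.remove(best)
    return plan

workloads = [
    {"name": "train-cnn",     "wants": "gpu",  "priority": 9},
    {"name": "packet-filter", "wants": "fpga", "priority": 7},
    {"name": "batch-etl",     "wants": "cpu",  "priority": 3},
]
resources = [
    {"name": "node-a", "offers": {"cpu", "gpu"},  "load": 0.2},
    {"name": "node-b", "offers": {"cpu", "fpga"}, "load": 0.1},
    {"name": "node-c", "offers": {"cpu"},         "load": 0.0},
]
print(schedule(workloads, resources))
```

With these inputs, the GPU-hungry training job lands on the GPU node and the packet filter on the FPGA node, leaving the plain x86 node for batch work.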

Disaggregation and composable infrastructure seem to be the on-premises answer to cloud computing. What is their future in the data center?

Cobb: As an IT professional, the idea is to run the most jobs, deliver the most value and process the most data per unit time and per unit cost on that infrastructure.

The very first problem that converged infrastructure solved was that I could now buy an entire stack of IT that works together ... I can predict the performance of [that], and I understand the cost of [it] and my guys don't have to do that for me.

Now, I want to deploy these things in finer-grained, more consumable chunks of capacity. That took us to hyper-converged. Now, I can buy smaller units -- a single 1U server's worth of stuff -- put some management and orchestration capability around that to make the hardware manageable, put a shared storage software stack on it, and have a single, consolidated storage footprint that scales out.

Today, whether it is from Intel or AMD or other architectures, fundamentally, we have tightly coupled memory to processing via DDR [double data rate] -- that's a tough interface to break into if you want to pool and disaggregate memory. But there are examples in the industry and the technology roadmap that are getting us there. There is bus technology such as Gen-Z, OpenCAPI and CCIX. That is one area where we have begun to separate the traditional memory hierarchy from the processing model, which will enable flexibility.

Technology like PCIe [PCI Express] has fundamentally been the I/O bus for so long and done such a great job at doubling bandwidth every two years and [cutting] latency [in half]. That's a great single system bus, but a terrible multisystem bus. It is not truly a fabric, and it does not have the ability to configure itself and tolerate devices coming and going in real time like other fabric technologies. In the space of new buses, that is where RDMA over Ethernet and the capabilities of using that as a new intersystem fabric come into play. That also bleeds into some of those memory buses I mentioned before, whether it is CCIX or Gen-Z.

Those areas -- Remote Direct Memory Access over Ethernet networks following Ethernet technology from 25 Gbps to 100 Gbps, and the new memory bus technologies -- represent an entirely new innovation surface for systems.
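Cobb's point about PCIe doubling bandwidth each generation is easy to see in rough per-lane numbers. The figures below are approximate usable per-lane throughputs after encoding overhead; the function and table names are my own for illustration.

```python
# Approximate one-direction usable bandwidth per PCIe lane, in GB/s,
# after encoding overhead (8b/10b for Gen 1-2, 128b/130b afterward).
GBPS_PER_LANE = {
    1: 0.25,   # PCIe 1.x: 2.5 GT/s
    2: 0.5,    # PCIe 2.x: 5 GT/s
    3: 0.985,  # PCIe 3.x: 8 GT/s
    4: 1.969,  # PCIe 4.0: 16 GT/s
    5: 3.938,  # PCIe 5.0: 32 GT/s
}

def link_bandwidth(gen, lanes=16):
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

for gen in sorted(GBPS_PER_LANE):
    print(f"PCIe {gen}.x x16: ~{link_bandwidth(gen):.1f} GB/s")
```

Each generation roughly doubles the previous one -- an x16 link goes from about 4 GB/s at Gen 1 to about 63 GB/s at Gen 5 -- which is the cadence Cobb credits PCIe with sustaining.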

What emerging technology has everyone's attention?


Cobb: One that is top of mind is emerging memories. Imagine you have a cost-effective, very high-performance DRAM-class memory that is persistent. How does that change every place you have an IoT [internet of things] sensor out there? If I can start to buffer that in a very low-cost, persistent device, now I have elements of persistent storage out on the edge, which today I really can't do. If I put flash out there, that's too slow. If I put DRAM out there, then I have to put a battery with it to keep it from losing state. Persistent memory living in all these dirt-cheap, fingernail-sized processing solutions in IoT devices will enable a whole new class of architecture.

A true DRAM-replacement persistent memory -- that is the disruptive step. If we make it persistent, we start to change the way we write software. We don't write software to do POSIX reads and writes to a file system with a volume manager. Instead, I do loads and stores from a processor into memory, and that is my application. These memory-native or memory-centric workloads will start to accelerate in their adoption. We already see pieces of that today with the move to SAP HANA and in-memory data management applications that come from the transactional world into this new world.

Those are largely evolutionary steps. The revolutionary step -- at least one as revolutionary as the move to multithreaded programming 20 years ago -- is this persistent memory model for applications. New software and a new programming language will be written for that.
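The load/store model Cobb contrasts with POSIX reads and writes can be sketched with Python's mmap as a stand-in: the application updates state with plain memory stores into a mapped region instead of issuing file-system write() calls in the hot path. A real persistent-memory stack (a DAX-mapped device plus explicit CPU cache flushes, e.g. via PMDK) is considerably more involved; an ordinary temp file stands in for the persistent region here, so this is only an analogy.

```python
import mmap
import os
import struct
import tempfile

# Sketch of memory-centric persistence: an ordinary file stands in
# for a persistent-memory region mapped into the address space.
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.write(b"\0" * 4096)          # carve out a 4 KiB "persistent" region

fd = os.open(path, os.O_RDWR)
region = mmap.mmap(fd, 4096)       # map the region into memory

# Update application state with a plain store into the mapping --
# no read()/write() syscalls, no volume manager in the data path.
struct.pack_into("<Q", region, 0, 42)   # store a counter at offset 0

region.flush()                     # analogous to flushing CPU caches
value, = struct.unpack_from("<Q", region, 0)
print(value)
region.close()
os.close(fd)
```

The point of the model is that the store itself is the update: after the flush, the counter survives a restart without the program ever having spoken to a file system API in its inner loop.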

