Software steals the show when it comes to tech innovation today, and it overshadows any improvements to hardware. But get ready for the future data center: Hardware transformation will happen with dynamic random access memory, or DRAM, a data center staple for more than 20 years. Looking ahead, there are various options that promise increased efficiency, persistence and lower cost.
To address some of these innovations, SearchDataCenter sat down with Danny Cobb, vice president for global technology strategy at Dell Technologies. Cobb has witnessed a lot of change through the years -- in his current role; at EMC, where he was CTO; and as a longtime technologist at Digital Equipment Corp. Cobb outlined various infrastructure technologies that will vie for IT pros' attention in future data center plans.
You have spoken publicly about the single-level cell to multi-level cell evolution in the data center. What technologies will be essential to the future data center?
Danny Cobb: There is this notion of using artificial intelligence (AI) and machine learning techniques to optimize the infrastructure in real time. We are actively involved in work that thinks of these compute models -- graphics processing units, tensor processing units, field-programmable gate arrays (FPGAs), etc. -- fundamentally as a service available on the fabric. You use machine learning and AI techniques to schedule workloads against the available resources in your data center. Three or four years ago, every single workload ran on row upon row of homogeneous, virtualized x86 servers. That's the homogeneous computing world.
This world is heterogeneous computing. It is offload engines, it is accelerated AI, FPGAs being dynamically programmed in the data center. The infrastructure itself has to take on more knowledge, and we see the progression of that style of infrastructure and that style of computing and workloads in our platforms as they evolve.
Disaggregation and composable infrastructure seem to be the on-premises answer to cloud computing. What is its future in the data center?
Cobb: As an IT professional, the idea is to get the most jobs run and get the most value and process the most data per unit time and per unit cost on that infrastructure.
The very problem that converged infrastructure solved was that I could now buy an entire stack of IT that works together ... I can predict the performance of [that], and I understand the cost of [it] and my guys don't have to do that for me.
Now, I want to deploy these things in finer-grained, more consumable chunks of capacity. That took us to hyper-converged. Now, I can buy smaller units -- a single 1U server worth of stuff, put some management and orchestration capability around that to make the hardware manageable, and put a shared storage software stack on it and have a single, consolidated storage footprint that scales out.
Today, whether it is from Intel or AMD or other architectures, fundamentally, we have memory tightly coupled to processing via DDR [double data rate] -- that's a tough interface to break into if you want to pool and disaggregate memory. But there are examples in the industry and the technology roadmap that are getting us there. There is bus technology such as Gen-Z, OpenCAPI and CCIX. That is one where we have begun to separate the traditional memory hierarchy from the processing model, which will enable flexibility.
Technology like PCIe [PCI Express] has fundamentally been the I/O bus for so long and done such a great job at doubling bandwidth every two years and [cutting] latency [in half]. That's a great single system bus, but a terrible multisystem bus. It is not truly a fabric, and it does not have the ability to configure itself and tolerate devices coming and going in real time like other fabric technologies. In the space of buses, that is where RDMA over Ethernet and the capabilities of using that as an intersystem fabric come into play. That also bleeds into some of those buses I mentioned before, whether it is CCIX or Gen-Z.
Those areas -- Remote Direct Memory Access over Ethernet networks following Ethernet technology, 25 Gb to 100 Gb, and the bus technology -- represent an entirely new innovation surface for systems.
What emerging technology has everyone's attention?
Cobb: One that is top of mind is emerging memories. Imagine you have a cost-effective, very high-performance DRAM class that is persistent. How does that change every place you have an IoT [internet of things] sensor out there? If I can start to buffer that in a very low-cost, persistent device, now I have elements of persistent storage out on the edge, which today I really can't do. If I put flash out there, that's too slow. If I put DRAM out there, then I have to put a battery with it to keep it from losing state. This will enable a whole class of architectures built on persistent memory living in all these dirt-cheap, fingernail-sized processing solutions that go out in all these IoT devices.
A true DRAM-replacement persistent memory -- that is the disruptive step. If we make memory persistent, we start to change the way we write software. We don't write software to do POSIX reads and writes to a file system with a volume manager. Instead, I do loads and stores from a processor into memory, and that is my application. These memory-native or memory-centric workloads will start to accelerate in their adoption. We already see pieces of that today with the move to SAP HANA and in-memory data management applications that come from the transactional world into this world.
Those are largely evolutionary steps. The revolutionary step -- at least one as revolutionary as the move to multithreaded programming 20 years ago -- is this persistent memory model for applications. Software and programming languages will be written for that.
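The load/store programming model Cobb describes, persisting data with plain memory operations instead of file-system read/write calls, can be sketched today with ordinary memory mapping. This is a hypothetical illustration: a temporary file stands in for the memory region, since true persistent memory would typically be exposed as a DAX-mounted device (e.g. via a path like /mnt/pmem) rather than a temp file.

```python
import mmap
import os
import tempfile

# Hypothetical stand-in for a persistent memory region: a plain temp file.
# On real persistent memory hardware this would be a file on a DAX mount.
backing = tempfile.NamedTemporaryFile(delete=False)
backing.write(b"\x00" * 4096)  # size the region
backing.close()

fd = os.open(backing.name, os.O_RDWR)
region = mmap.mmap(fd, 4096)

# "Store": the application writes directly into the mapped region --
# no read()/write() syscalls, no volume manager in the data path.
region[0:5] = b"hello"
region.flush()  # make the stores durable (analogous to msync / cache flush)

# "Load": read back by memory address, not by file-offset APIs.
data = bytes(region[0:5])

region.close()
os.close(fd)
os.remove(backing.name)

print(data)  # the stored bytes survive in the backing store
```

The point of the sketch is the shape of the code path: once the region is mapped, persistence is achieved with ordinary assignments plus an explicit flush, which is the model persistent memory programming libraries formalize.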