Memristor technology brings about an analog revolution

Are we ready for memristor-based artificially intelligent infrastructure in the enterprise data center?

We are always driven to try to do smarter things faster. It's human nature. In our data centers, we layer machine learning algorithms over big and fast data streams to create that special competitive business edge (or greater social benefit!).

Yet for all its processing power, performance and capacity, today's digital-based computing and storage can't compare to what goes on inside each of our very own, very analog brains, which outstrip digital architectures in efficiency by six, seven or even eight orders of magnitude. If we want to compute at biological scales and speeds, we must take advantage of new forms of hardware that transcend the strictly digital.

Many applications of machine learning are based on examining data's inherent patterns and behavior, and then using that intelligence to classify what we know, predict what comes next, and identify abnormalities. This isn't terribly different from our own neurons and synapses, which learn from incoming streams of signals, store that learning, and allow it to be used "forward" to make more intelligent decisions (or take actions). In the last 30 years, AI practitioners have built practical neural nets and other types of machine learning algorithms for various applications, but they are all bound today by the limitations of digital scale (an exponentially growing Web of interconnections is but one facet of scale) and speed.
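To make that learn-store-use-forward loop concrete, here is a purely illustrative Python sketch of an online perceptron-style learner -- a textbook toy, not any vendor's algorithm. Each incoming signal nudges the stored weights, and those same weights drive the next prediction:

```python
import random

# Toy online learner: the weights are the "stored learning"; each incoming
# sample adjusts them, and the same weights drive the next prediction.
class OnlinePerceptron:
    def __init__(self, n_inputs, lr=0.1):
        self.w = [0.0] * n_inputs
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        # Use the stored learning "forward" to classify a new signal.
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def learn(self, x, label):
        # Learn from the incoming signal: shift the weights by the error.
        err = label - self.predict(x)
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b += self.lr * err

# A stream of (signal, label) pairs: learn the boundary x0 + x1 > 0.
model = OnlinePerceptron(n_inputs=2)
for _ in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    model.learn(x, 1 if x[0] + x[1] > 0 else 0)

print(model.predict([0.5, 0.4]))    # expected: 1
print(model.predict([-0.6, -0.3]))  # expected: 0
```

The point of the toy is the loop itself: observe, update the stored state, use that state on the next observation -- the same pattern a synapse follows, just executed as digital instructions.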

Today's digital computing infrastructure, based on switching digital bits, faces some big hurdles in keeping up with Moore's Law. Even if a couple of orders of magnitude of improvement can still be squeezed out of the traditional digital design paradigm, there are inherent limits in power consumption, scale and speed. Whether we're evolving artificial intelligence into humanoid robots or, more practically, scaling machine learning to ever-larger big data sets to better target the advertising budget, there simply isn't enough raw power available to reach biological scale and density with traditional computing infrastructure.

Ultimately, power is the real shortcoming. "Message passing" -- communicating a signal (data) back and forth between components -- is one of the biggest wastes. At the fundamental level of digital design, an awful lot of IO between CPUs and everything else must happen for even the smallest data processing task. Even as we increase densities, forge smaller chips or add flash closer to the CPU, it still takes significant energy and time to move bits around a digital architecture. In our brains, memory, storage and processing are all intimately converged.
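Some rough, back-of-the-envelope arithmetic shows why. Using ballpark, order-of-magnitude figures of the kind often quoted in computer-architecture circles -- assumed here purely for illustration, not measured from any particular chip -- fetching an operand from off-chip memory can cost on the order of a hundred times more energy than the arithmetic performed on it:

```python
# Back-of-the-envelope energy comparison: moving a 32-bit word in from
# off-chip DRAM vs. performing one 32-bit arithmetic operation on it.
# These are assumed, order-of-magnitude figures for illustration only,
# not measurements of any particular processor.
ENERGY_ALU_OP_PJ = 5.0       # ~a few picojoules per 32-bit arithmetic op
ENERGY_DRAM_READ_PJ = 600.0  # ~hundreds of picojoules per off-chip access

ratio = ENERGY_DRAM_READ_PJ / ENERGY_ALU_OP_PJ
print(f"Fetching the operand costs roughly {ratio:.0f}x the compute itself")
# Under assumptions like these, the data movement -- not the math --
# dominates the energy bill for small processing tasks.
```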

Unlike digital systems, we don't need megawatts of power to get out of bed in the morning, because our brains run a low-power, analog-based architecture. Analog circuitry, if custom built for the problem at hand, gets to the answer directly rather than grinding through a large number of instruction cycles. And because its outputs are continuously valued, it isn't limited to fixed-width digital steps. Further, if persistent storage is inherent in the circuit, rather than stored digitally as bits on some remote device, there are no staggering IO waits either.

Say hello to memristor tech

Of course, silicon devices are fundamentally analog; we've simply built them up into complexly connected digital logic gates and bit storage. But what if we could go "back to the future" and design analog computing circuitry at today's chip-level densities? The breakthrough lies in exploiting the analog properties of an emerging class of devices: memristors.

A memristor is a device that can change its internal resistance based on the electrical signals fed into it -- and that persistent resistance can be measured and used as non-volatile memory. A memristor is a fast silicon device like DRAM -- at least 10 times faster than NAND-based NVRAM (flash) -- so it can be used as main memory. HP, for one, has been researching memristor technologies for persistent digital memory, but has not quite been able to bring them to market yet. If someone can, it could usher in a whole next generation of digital computing architectures that converge storage and memory.
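For a feel of how such a device behaves, here is a toy simulation loosely based on the widely cited linear ion-drift memristor model. The parameter values and simplifications are illustrative only -- this is not a description of HP's or any vendor's actual device physics. Current flowing through the device shifts its internal state, the resistance changes accordingly, and that state simply stays put when the drive is removed:

```python
# Toy linear ion-drift memristor model: resistance depends on how much
# charge has flowed through the device, and the state persists when the
# drive stops. Parameter values are illustrative, not a real device.
R_ON = 100.0      # ohms, fully "doped" (low-resistance) state
R_OFF = 16_000.0  # ohms, fully "undoped" (high-resistance) state
D = 10e-9         # device thickness, meters
MU_V = 1e-14      # dopant mobility, m^2 / (V*s)

w = 1e-9          # doped-region width: the internal state, meters
dt = 1e-3         # simulation time step, seconds

def resistance(w):
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)

def step(w, voltage):
    # Current through the device at its present resistance...
    i = voltage / resistance(w)
    # ...moves the doped/undoped boundary, changing the stored state.
    w = w + MU_V * (R_ON / D) * i * dt
    return min(max(w, 0.0), D)  # the state is physically bounded

print(f"start: {resistance(w):.0f} ohms")
for _ in range(1000):           # apply a positive write voltage for a while
    w = step(w, 1.0)
print(f"after write: {resistance(w):.0f} ohms")
# With no voltage applied, w -- and hence the resistance -- just persists,
# which is what makes the device usable as non-volatile memory.
```

In this toy run the resistance drops by roughly two orders of magnitude and then holds there: write by driving current, read by sensing resistance.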

But now we've seen at least one startup, Knowm Inc., pioneering a brilliant new form of computing that leverages memristor technology to not only persist data in fast memory, but to inherently calculate serious compute functions in one operation that would otherwise require the stored data to be offloaded into CPUs, processed, and written back. Knowm claims to leverage the analog properties of small memristor circuits -- a "synapse" that comes with an inherent adaptive learning capability. Feed it a signal and it can directly learn -- and at the same time persistently store -- the pattern it finds in that signal.
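One way to picture computing where the data lives is the analog multiply-accumulate. If stored values are held as conductances in a crossbar and the input signal arrives as voltages, Ohm's law does the multiplies and Kirchhoff's current law does the sums, so each output current is already a dot product -- no fetch-compute-writeback loop. The sketch below simulates that generic idea in Python; it is not a description of Knowm's actual circuits or APIs:

```python
# Generic illustration of an analog in-memory multiply-accumulate:
# weights live in the array as conductances (1/resistance), inputs arrive
# as voltages, and each output column's total current is the dot product
# (Ohm's law for the multiplies, Kirchhoff's current law for the sums).
# This simulates the idea; it is not any vendor's actual hardware.

def crossbar_mac(conductances, voltages):
    """conductances: rows x cols, in siemens; voltages: one per row, in volts.
    Returns one summed current per column, in amps."""
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for v, row in zip(voltages, conductances):
        for j, g in enumerate(row):
            currents[j] += v * g  # I = V * G, accumulated down the column
    return currents

# "Stored" weight matrix expressed as conductances (micro-siemens scale).
G = [[1e-6, 4e-6],
     [2e-6, 5e-6],
     [3e-6, 6e-6]]
V = [0.1, 0.2, 0.3]  # input signal applied across the rows

print(crossbar_mac(G, V))
# Each column current equals the dot product of the inputs with that
# column's stored conductances -- computed in place, in one step.
```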

Theoretically, by building up from such basic functional units, pretty much any machine learning algorithm could be tremendously accelerated. While Knowm is in its early days, it already offers a full stack of technologies: discrete working synapse chips to play with, scalable simulators, defined low-level APIs, higher-level machine learning libraries, plus a service that can help layer large quantities of its synapses directly onto existing CMOS (back-end-of-line) designs.

With apologies to AI buffs and Terminator aficionados, the Taneja Group's team thinks the opportunity for disruption is much larger than machine learning acceleration. A new hardware design -- what Knowm has termed a Neural Processing Unit -- that intelligently harnesses analog hardware functions for extremely fast, low-power, dense and storage-converged computing would represent a true turning point for the whole computing industry. Whoever takes advantage of this type of computing first could trigger a massively disruptive shift in not just machine learning, but in how all computing is done.

Mike Matchett is a senior analyst and consultant at Taneja Group.

This was last published in September 2015
