We describe a hybrid analog-digital computing approach that leverages memristors (two-terminal nonvolatile memories) to solve important combinatorial optimization problems. While previous memristor accelerators have had to minimize analog noise effects, we show that our optimization solver harnesses such noise as a computing resource. Specifically, we describe a memristor Hopfield neural network (mem-HNN) with massively parallel operations performed in a dense crossbar array. We provide experimental demonstrations solving NP-hard max-cut problems directly in analog crossbar arrays, and supplement these with experimentally grounded simulations that explore scalability with problem size, reporting success probabilities, time and energy to solution, and interactions with intrinsic analog noise. Compared with fully digital approaches and present-day quantum and optical accelerators, we forecast the mem-HNN to deliver over four orders of magnitude higher solution throughput per unit of power consumed. This suggests substantially improved performance and scalability relative to current quantum annealing approaches, while operating at room temperature and taking advantage of existing CMOS technology augmented with emerging analog non-volatile memristors.
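The Hopfield dynamics described above can be imitated in software by replacing the crossbar's intrinsic analog noise with an explicit, annealed noise term. The following is a minimal sketch of that idea only, not the mem-HNN hardware implementation; the toy graph, the linear annealing schedule, and the asynchronous update rule are illustrative assumptions.

import numpy as np

def maxcut_hopfield(W, n_steps=3000, noise_start=2.0, seed=0):
    # Approximate max-cut of a graph given as a symmetric 0/1 adjacency matrix W.
    # With Hopfield weights J = -W, minimizing the energy E = -0.5 * s.T @ J @ s
    # maximizes the number of edges cut by the +1/-1 partition encoded in s.
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    J = -W.astype(float)
    s = rng.choice([-1.0, 1.0], size=n)
    for t in range(n_steps):
        sigma = noise_start * (1.0 - t / n_steps)         # annealed "analog" noise
        i = rng.integers(n)                               # asynchronous single-neuron update
        s[i] = 1.0 if J[i] @ s + rng.normal(0.0, sigma) >= 0 else -1.0
    cut = 0.25 * np.sum(W * (1.0 - np.outer(s, s)))       # edges crossing the partition
    return s, cut

# Toy usage: a 6-node ring graph, whose optimal cut value is 6.
W = np.roll(np.eye(6), 1, axis=1)
W = W + W.T
print(maxcut_hopfield(W))

In the mem-HNN itself, the matrix-vector product J @ s is performed in a single parallel step by the analog crossbar, and the role of the annealed Gaussian term above is played by the intrinsic device noise that the work harnesses as a computing resource.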
Memristors have recently received significant attention as ubiquitous device-level components for building a novel generation of computing systems. These devices have many promising features, such as non-volatility, low power consumption, high densit
Machine learning (ML) can help solve combinatorial optimization (CO) problems more effectively. A popular approach is to use a neural network to compute on the parameters of a given CO problem and extract useful information that guides the search for good solutions.
Recent breakthroughs in recurrent deep neural networks with long short-term memory (LSTM) units have led to major advances in artificial intelligence. State-of-the-art LSTM models with significantly increased complexity and a large number of parameters
Adding noise to artificial neural networks (ANNs) has been shown in previous work to improve robustness. In this work, we propose a new technique to compute the pathwise stochastic gradient estimate with respect to the standard deviation of the noise.
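The pathwise estimator referred to above can be sketched with the reparameterization trick: writing the injected noise as exp(log_sigma) * eps with eps ~ N(0, 1) makes the loss a differentiable function of the noise scale, so reverse-mode autodiff returns a pathwise gradient estimate with respect to it. The layer shapes, the loss, and the use of PyTorch below are illustrative assumptions, not necessarily the estimator proposed in that work.

import torch

class NoisyLinear(torch.nn.Module):
    # Linear layer whose activations receive Gaussian noise of learnable scale.
    # Because the noise is written as exp(log_sigma) * eps (reparameterization),
    # autograd propagates gradients of the loss into log_sigma.
    def __init__(self, d_in, d_out, sigma_init=0.1):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, d_out)
        self.log_sigma = torch.nn.Parameter(torch.tensor(float(sigma_init)).log())

    def forward(self, x):
        h = self.linear(x)
        eps = torch.randn_like(h)              # noise sample, independent of log_sigma
        return h + self.log_sigma.exp() * eps

model = torch.nn.Sequential(NoisyLinear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
x, y = torch.randn(32, 4), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
print(model[0].log_sigma.grad)                 # pathwise gradient w.r.t. the noise scale

Updating log_sigma with this gradient is one way to let training itself choose how much noise to inject.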
We study and analyze the fundamental aspects of noise propagation in recurrent as well as deep, multi-layer networks. The main focus of our study is neural networks in analogue hardware, yet the methodology provides insight for networks in general.
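One way to make this kind of noise-propagation question concrete is a Monte Carlo experiment: inject additive noise after every layer of a deep network and measure how far the output drifts from the noiseless forward pass as depth grows. The random tanh network, noise level, and distance metric below are illustrative assumptions, not the methodology of that study.

import numpy as np

def output_deviation(depth, width=128, noise_std=0.05, n_samples=200, seed=0):
    # Monte Carlo estimate of the output deviation caused by additive noise
    # injected after every layer of a random tanh network of the given depth.
    rng = np.random.default_rng(seed)
    Ws = [rng.normal(0.0, 1.0 / np.sqrt(width), (width, width)) for _ in range(depth)]
    x = rng.normal(0.0, 1.0, width)

    def forward(noisy):
        h = x
        for W in Ws:
            h = np.tanh(W @ h)
            if noisy:
                h = h + rng.normal(0.0, noise_std, width)   # per-layer "analog" noise
        return h

    clean = forward(False)
    return np.mean([np.linalg.norm(forward(True) - clean) for _ in range(n_samples)])

for depth in (2, 8, 32):
    print(depth, output_deviation(depth))

Sweeping the depth in this way gives a rough picture of how quickly injected noise accumulates, which is the kind of question the analysis above addresses for analogue networks.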