Memristor-based neural networks hold great potential for on-chip neuromorphic computing systems owing to their fast computation and low energy consumption. However, the imprecise properties of existing memristor devices generally cause catastrophic failures during in-situ network training, which significantly impedes their engineering application. In this work, we design a novel learning scheme that integrates stochastic sparse updating with momentum adaptation (SSM) to efficiently train imprecise memristor networks with high classification accuracy. The SSM scheme consists of: (1) a stochastic and discrete learning method that makes weight updates sparse; (2) a momentum-based gradient algorithm that filters out training noise and distills robust updates; (3) a network re-initialization method that mitigates device-to-device variation; and (4) an update compensation strategy that further stabilizes the weight-programming process. With the SSM scheme, experiments show that classification accuracy on a multilayer perceptron (MLP) and a convolutional neural network (CNN) improves from 26.12% to 90.07% and from 65.98% to 92.38%, respectively. Meanwhile, the total number of weight-updating pulses decreases by 90% for the MLP and by 40% for the CNN, and both networks converge 3x faster. The SSM scheme provides a high-accuracy, low-power, and fast-convergence solution for the in-situ training of imprecise memristor networks, which is crucial for future neuromorphic intelligence systems.
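As a rough illustration of how a momentum-filtered, stochastic, discrete update of this kind can be expressed in software, the following is a minimal NumPy sketch; it is not the paper's exact algorithm, and the hyperparameter names (`momentum`, `delta`) and the pulse-probability rule are illustrative assumptions.

```python
import numpy as np

def ssm_style_update(weights, grad, velocity, momentum=0.9, delta=0.01, rng=None):
    """Sketch of a momentum-filtered, stochastic, discrete weight update.

    `momentum`, `delta`, and the pulse-probability rule below are
    illustrative assumptions, not the exact SSM formulation.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Momentum accumulates gradients across iterations so that only
    # consistent update directions survive the per-batch noise.
    velocity = momentum * velocity + (1.0 - momentum) * grad

    # Stochastic, discrete selection: each weight is pulsed with a
    # probability proportional to the magnitude of its momentum term,
    # keeping updates sparse and reducing the number of programming pulses.
    prob = np.abs(velocity) / (np.abs(velocity).max() + 1e-12)
    mask = rng.random(size=weights.shape) < prob

    # Each selected device receives one fixed-size pulse (+/- delta) in the
    # direction opposite to its accumulated gradient.
    weights = weights - delta * np.sign(velocity) * mask
    return weights, velocity
```

In this sketch, sparsity comes from the random mask, so most devices receive no programming pulse in a given iteration, which is consistent with the reduction in update pulses reported above.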
We describe a hybrid analog-digital computing approach to solve important combinatorial optimization problems that leverages memristors (two-terminal nonvolatile memories). While previous memristor accelerators have had to minimize analog noise effects
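A common way memristor hardware is applied to combinatorial optimization is to encode the problem in a Hopfield-style energy function whose matrix-vector products are evaluated by a crossbar; the sketch below illustrates that idea in NumPy as an assumption for context, not necessarily this paper's method. The weight matrix `W`, the annealing schedule, and the noise model are all hypothetical.

```python
import numpy as np

def hopfield_anneal(W, steps=1000, noise0=0.5, rng=None):
    """Noise-assisted Hopfield-style energy minimization (illustrative sketch).

    Assumes the combinatorial problem is encoded in a symmetric weight
    matrix W (zero diagonal) whose energy minima correspond to good
    solutions; in hardware, W @ s would be computed by a memristor crossbar.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = W.shape[0]
    s = rng.choice([-1.0, 1.0], size=n)       # random initial spin state
    for t in range(steps):
        noise = noise0 * (1.0 - t / steps)    # gradually reduce injected noise
        field = W @ s + noise * rng.standard_normal(n)
        i = rng.integers(n)                   # asynchronous single-spin update
        s[i] = 1.0 if field[i] >= 0 else -1.0
    energy = -0.5 * s @ W @ s
    return s, energy
```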
Recent breakthroughs in recurrent deep neural networks with long short-term memory (LSTM) units have led to major advances in artificial intelligence. State-of-the-art LSTM models with significantly increased complexity and a large number of parameters
Memristor crossbars are circuits capable of performing analog matrix-vector multiplications, overcoming the fundamental energy efficiency limitations of digital logic. They have been shown to be effective in special-purpose accelerators for a limited
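For concreteness, an idealized model of the analog matrix-vector multiplication performed by a crossbar is sketched below; the relative noise term used to mimic device imprecision is an illustrative assumption, not a calibrated device model.

```python
import numpy as np

def crossbar_mvm(G, v, g_noise=0.0, rng=None):
    """Idealized memristor-crossbar matrix-vector multiply.

    Input voltages `v` are applied to the rows; by Ohm's and Kirchhoff's
    laws the column currents are I = G.T @ v, where G holds the programmed
    conductances. `g_noise` is a hypothetical relative perturbation that
    stands in for device imprecision.
    """
    rng = np.random.default_rng() if rng is None else rng
    G_eff = G * (1.0 + g_noise * rng.standard_normal(G.shape))
    return G_eff.T @ v   # one analog step yields the whole product
```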
In this reply, we provide our impartial, point-by-point responses to the major criticisms (in bold and underlined) in arXiv:1909.12464. Firstly, we identify a number of (imperceptibly hidden) mistakes in the Comment in understanding/interpreting
Memristors have recently received significant attention as ubiquitous device-level components for building a novel generation of computing systems. These devices have many promising features, such as non-volatility, low power consumption, high density