Conventional neuro-computing architectures and artificial neural networks have often been developed with no or only loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low power consumption and their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nanotechnology are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the capabilities of existing CMOS technology. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element, and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic-computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and in the proposed hybrid memristor-CMOS circuit, and argue that this circuit is an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault-tolerant by design.
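To make the notion of a memristive synaptic element concrete, here is a minimal sketch of the classic linear ion-drift memristor model (after Strukov et al., 2008). This is a generic textbook model, not the hybrid memristor-CMOS circuit proposed in the paper above, and all device parameters and pulse settings are illustrative assumptions.

```python
# Illustrative device parameters (not values from any specific device).
R_ON, R_OFF = 100.0, 16e3      # low / high resistance limits (ohm)
D, MU_V = 10e-9, 1e-14         # film thickness (m), ion mobility (m^2/(V*s))
DT = 1e-5                      # integration time step (s)

def apply_pulse(x, v, duration):
    """Integrate the normalized state x under a constant voltage pulse."""
    for _ in range(int(duration / DT)):
        r = R_ON * x + R_OFF * (1.0 - x)   # memristance for the current state
        i = v / r                          # current through the device
        x += MU_V * R_ON / D**2 * i * DT   # linear ion drift moves the boundary
        x = min(max(x, 0.0), 1.0)          # clamp to physical bounds
    return x

x = 0.1  # normalized state: doped-region width divided by D
for n in range(5):  # potentiation: positive pulses increase conductance
    x = apply_pulse(x, +1.5, 20e-3)
    print(f"after potentiation pulse {n + 1}: x = {x:.3f}")
for n in range(5):  # depression: negative pulses decrease conductance
    x = apply_pulse(x, -1.5, 20e-3)
    print(f"after depression pulse {n + 1}: x = {x:.3f}")
```

The analog, non-volatile state x is what lets a single element play the role of a synaptic weight updated in place by voltage pulses.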
Neurons in the brain behave as non-linear oscillators, which develop rhythmic activity and interact to process information. Taking inspiration from this behavior to realize high-density, low-power neuromorphic computing will require huge numbers of nanoscale non-linear oscillators. Indeed, a simple estimation indicates that, in order to fit a hundred million oscillators organized in a two-dimensional array inside a chip the size of a thumb, their lateral dimensions must be smaller than one micrometer. However, despite multiple theoretical proposals, there is no proof of concept today of neuromorphic computing with nano-oscillators. Indeed, nanoscale devices tend to be noisy and to lack the stability required to process data reliably. Here, we show experimentally that a nanoscale spintronic oscillator can achieve spoken-digit recognition with accuracies similar to those of state-of-the-art neural networks. We pinpoint the regime of magnetization dynamics leading to the highest performance. These results, combined with the exceptional ability of these spintronic oscillators to interact with each other, their long lifetime, and their low energy consumption, open the path to fast, parallel, on-chip computation based on networks of oscillators.
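The sub-micrometer figure quoted above follows from simple arithmetic; below is a quick back-of-the-envelope check, assuming a thumb-sized chip of roughly 1 cm x 1 cm (an illustrative figure).

```python
chip_side_um = 1e4                    # a thumb-sized chip: 1 cm = 1e4 um per side
n_oscillators = 1e8                   # one hundred million devices
area_per_osc = chip_side_um**2 / n_oscillators
pitch_um = area_per_osc**0.5          # lateral pitch in a square 2D array
print(f"area per oscillator: {area_per_osc:.2f} um^2")   # -> 1.00 um^2
print(f"maximum lateral pitch: {pitch_um:.2f} um")       # -> 1.00 um
```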
Brain-inspired neuromorphic computing, built from neurons and synapses with the ability to perform complex information processing, has opened a new computing paradigm to overcome the von Neumann bottleneck. Electronic memristor devices that can rival biological synapses are therefore of great significance for neuromorphic computing. In this work, we present our efforts to develop and realize a graphene oxide (GO) based memristor as a synaptic device that mimics a biological synapse. The device exhibits the essential synaptic learning behaviors, including analog memory characteristics, potentiation, and depression. Furthermore, the spike-timing-dependent plasticity (STDP) learning rule is reproduced by engineering the pre- and post-synaptic spikes. In addition, non-volatile properties of the device, such as endurance, retention, and multilevel switching, are explored. These results suggest that the Ag/GO/FTO memristor device would indeed be a potential candidate for future neuromorphic computing applications.
Keywords: RRAM, graphene oxide, neuromorphic computing, synaptic device, potentiation, depression
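For reference, pairwise STDP of the kind mimicked above is commonly modeled with a double-exponential learning window; the sketch below uses this standard form, with illustrative amplitudes and time constants rather than values measured from the Ag/GO/FTO device.

```python
import math

# Illustrative potentiation/depression amplitudes and time constants.
A_PLUS, A_MINUS = 0.010, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # milliseconds

def stdp_dw(dt_ms):
    """Weight change for a spike pair with dt = t_post - t_pre (ms)."""
    if dt_ms > 0:    # pre fires before post: potentiation
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    if dt_ms < 0:    # post fires before pre: depression
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> dw = {stdp_dw(dt):+.5f}")
```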
Developing mixed-signal analog-digital neuromorphic circuits in advanced scaled processes poses significant design challenges. We present compact and energy-efficient sub-threshold analog synapse and neuron circuits, optimized for a 28 nm FD-SOI process, for implementing massively parallel large-scale neuromorphic computing systems. We describe the techniques used for maximizing density with mixed-mode analog/digital synaptic weight configurations, and the methods adopted for minimizing the effect of channel leakage current, in order to implement efficient analog computation based on small pA-nA currents. We present circuit simulation results, based on a new chip that was recently taped out, to demonstrate how the circuits are useful both for low-frequency operation in systems that need to interact with the environment in real time, and for high-frequency operation for fast data processing in different types of spiking neural network architectures.
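As a behavioral illustration only (not a transistor-level model of the circuits above), the sketch below simulates the first-order low-pass response to an input spike train that subthreshold analog synapse circuits of this kind typically implement; the time constant, spike rate, and current levels are illustrative values in the pA-nA regime mentioned above.

```python
DT = 1e-4                                   # simulation time step (s)
TAU = 20e-3                                 # synaptic time constant (s)
I_JUMP = 100e-12                            # current step per input spike (100 pA)

# A regular 100 Hz input burst between 50 ms and 250 ms (illustrative).
spike_steps = {round((0.05 + k * 0.01) / DT) for k in range(20)}

i_syn, trace = 0.0, []
for step in range(int(0.5 / DT)):
    if step in spike_steps:
        i_syn += I_JUMP                     # each input spike injects charge
    i_syn -= i_syn / TAU * DT               # exponential decay between spikes
    trace.append(i_syn)

print(f"peak synaptic current: {max(trace) * 1e9:.2f} nA")  # stays in the pA-nA range
```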
Neuromorphic computing takes inspiration from the brain to create energy-efficient hardware for information processing, capable of highly sophisticated tasks. In this article, we make the case that building this new hardware necessitates reinventing electronics. We show that research in physics and materials science will be key to creating artificial nano-neurons and synapses, connecting them together in huge numbers, organizing them in complex systems, and computing with them efficiently. We describe how some researchers choose to take inspiration from artificial intelligence to move forward in this direction, whereas others prefer taking inspiration from neuroscience, and we highlight recent striking results obtained with these two approaches. Finally, we discuss the challenges and perspectives of neuromorphic physics, which include developing the algorithms and the hardware hand in hand, making significant advances with small toy systems, and building large-scale networks.
Reservoir computing (RC) offers efficient temporal data processing with a low training cost by separating recurrent neural networks into a fixed network with recurrent connections and a trainable linear network. The quality of the fixed network, called the reservoir, is the most important factor determining the performance of the RC system. In this paper, we investigate the influence of a hierarchical reservoir structure on the properties of the reservoir and the performance of the RC system. Analogous to deep neural networks, stacking sub-reservoirs in series is an efficient way to enhance the nonlinearity of the transformation of data into a high-dimensional space and to expand the diversity of temporal information captured by the reservoir. These deep reservoir systems offer better performance than simply increasing the size of the reservoir or the number of sub-reservoirs. Low-frequency components are mainly captured by the sub-reservoirs in the later stages of the deep reservoir structure, analogous to the observation that more abstract information is extracted by the later layers of deep neural networks. When the total size of the reservoir is fixed, the tradeoff between the number of sub-reservoirs and the size of each sub-reservoir must be considered carefully, since the capability of individual sub-reservoirs degrades at small sizes. The improved performance of the deep reservoir structure also alleviates the difficulty of implementing RC systems in hardware.
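To illustrate the deep reservoir idea, here is a minimal sketch of a two-stage echo state network with a ridge-regression readout on a toy delayed-signal task; the sizes, spectral radii, leak rate, and task are illustrative choices, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9, input_scale=0.5):
    """Random input and recurrent weights, rescaled to a target spectral radius."""
    W_in = rng.uniform(-input_scale, input_scale, (n_res, n_in))
    W = rng.uniform(-1.0, 1.0, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, inputs, leak=0.3):
    """Collect leaky-integrator tanh states for an input sequence."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: reconstruct a delayed copy of a random input signal.
T, delay = 2000, 5
u = rng.uniform(-0.5, 0.5, (T, 1))
y = np.roll(u[:, 0], delay)

# Two sub-reservoirs stacked in series: the second is driven by the
# states of the first, deepening the nonlinear transformation.
W_in1, W1 = make_reservoir(1, 100)
s1 = run_reservoir(W_in1, W1, u)
W_in2, W2 = make_reservoir(100, 100)
s2 = run_reservoir(W_in2, W2, s1)

# Linear readout trained by ridge regression on the concatenated states.
X = np.hstack([s1, s2])[delay:]
target = y[delay:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ target)
pred = X @ W_out
print("NRMSE:", np.sqrt(np.mean((pred - target) ** 2)) / np.std(target))
```

Training only the linear readout is what keeps the training cost low; the stacked sub-reservoirs stay fixed and only enrich the feature space the readout sees.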