Conventional neuro-computing architectures and artificial neural networks have often been developed with no or only loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low power consumption and their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nanotechnologies are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the capabilities of existing CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element, and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic-computing approaches that can best exploit the properties of memristive nanoscale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit that represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and in the proposed hybrid memristor-CMOS circuit, and argue that this circuit is an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault-tolerant by design.
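To make the contrast with conventional weight-storage uses of memristors concrete, the following minimal Python sketch models a single memristive synapse whose conductance both stores state and exhibits short-term temporal dynamics, which is the role the hybrid approach assigns to the device. The state equation, time constants, and parameter values are illustrative assumptions, not the circuit model proposed in the paper.

# Hypothetical first-order memristive synapse (illustrative sketch only):
# the conductance g relaxes toward a resting value and is potentiated by
# each pre-synaptic spike, emulating synaptic temporal dynamics rather
# than acting as a static stored weight.

dt = 1e-4          # simulation time step in seconds (assumed)
tau = 20e-3        # conductance relaxation time constant in seconds (assumed)
g_rest = 1e-6      # resting conductance in siemens (assumed)
dg_spike = 5e-7    # conductance increment per pre-synaptic spike (assumed)
v_read = 0.1       # read voltage across the device in volts (assumed)

spike_times = [0.010, 0.012, 0.014, 0.050]   # pre-synaptic spike times (s)
g = g_rest
for step in range(int(0.1 / dt)):
    t = step * dt
    g += (g_rest - g) * dt / tau                         # exponential relaxation
    if any(abs(t - ts) < dt / 2 for ts in spike_times):
        g += dg_spike                                     # spike-driven potentiation
    i_syn = g * v_read                                    # current delivered to the CMOS neuron

print(f"final conductance: {g:.3e} S")

In the hybrid circuit described above, an analogous state variable would be realized directly by the device physics rather than computed digitally.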
Neurons in the brain behave as non-linear oscillators, which develop rhythmic activity and interact to process information. Taking inspiration from this behavior to realize high-density, low-power neuromorphic computing will require huge numbers of nanoscale non-linear oscillators.
Brain-inspired neuromorphic computing, in which networks of neurons and synapses perform complex information processing, has opened a new computing paradigm for overcoming the von Neumann bottleneck. Electronic synaptic memristor devices are regarded as promising building blocks for emulating biological synapses in such hardware.
Developing mixed-signal analog-digital neuromorphic circuits in advanced scaled processes poses significant design challenges. We present compact and energy-efficient sub-threshold analog synapse and neuron circuits, optimized for a 28 nm FD-SOI process.
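The energy-efficiency argument rests on weak-inversion (sub-threshold) operation, where the drain current depends exponentially on the gate voltage and can be kept at picoampere-to-nanoampere levels. As a rough reference, a commonly used simplified saturation-region model is

$$ I_D \approx I_0 \, e^{\kappa V_{GS}/U_T}, \qquad V_{DS} \gtrsim 4\,U_T, \qquad U_T = \frac{kT}{q} \approx 26\ \mathrm{mV}\ \text{at } 300\ \mathrm{K}, $$

where the slope factor \kappa and the scaling current I_0 are process dependent; their values for the 28 nm FD-SOI process used here are not stated in this summary.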
Neuromorphic computing takes inspiration from the brain to create energy-efficient hardware for information processing, capable of highly sophisticated tasks. In this article, we make the case that building this new hardware necessitates reinventing electronics.
Reservoir computing (RC) offers efficient temporal data processing with a low training cost by separating a recurrent neural network into a fixed network with recurrent connections and a trainable linear readout. The quality of the fixed network, called the reservoir, largely determines the computing performance of an RC system.
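To make the fixed/trainable split concrete, the following is a minimal echo-state-network sketch in Python with NumPy: the recurrent reservoir weights are generated once and never trained, and only the linear readout is fitted by ridge regression. The network sizes, spectral-radius scaling, and toy prediction task are assumptions for illustration, not the reservoir substrate discussed here.

import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict the next sample of a sine wave (illustrative only)
T = 500
u = np.sin(0.2 * np.arange(T + 1))
inputs, targets = u[:-1], u[1:]

N = 100                                      # reservoir size (assumed)
W_in = rng.uniform(-0.5, 0.5, size=(N, 1))   # fixed input weights
W = rng.uniform(-0.5, 0.5, size=(N, N))      # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

# Run the fixed reservoir and collect its states
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * inputs[t])
    states[t] = x

# Train only the linear readout (ridge regression)
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                        states.T @ targets)

pred = states @ W_out
print("train MSE:", np.mean((pred - targets) ** 2))

In physical RC implementations, the simulated reservoir in this sketch is replaced by the intrinsic dynamics of the hardware, while only the readout layer is trained.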