
Analog circuits for mixed-signal neuromorphic computing architectures in 28 nm FD-SOI technology

Added by Ning Qiao
Publication date: 2019
Language: English





Developing mixed-signal analog-digital neuromorphic circuits in advanced scaled processes poses significant design challenges. We present compact and energy-efficient sub-threshold analog synapse and neuron circuits, optimized for a 28 nm FD-SOI process, for implementing massively parallel, large-scale neuromorphic computing systems. We describe the techniques used to maximize density with mixed-mode analog/digital synaptic weight configurations, and the methods adopted to minimize the effect of channel leakage currents, in order to implement efficient analog computation based on small currents in the pA-nA range. We present circuit simulation results, based on a recently taped-out chip, demonstrating how these circuits support both low-frequency operation in systems that need to interact with the environment in real time, and high-frequency operation for fast data processing in different types of spiking neural network architectures.
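To make the operating regime concrete, the Python sketch below evaluates the standard weak-inversion (sub-threshold) current law and a first-order (DPI-style) synapse time constant, showing how pA-nA bias currents combined with few-hundred-fF capacitors yield biologically plausible dynamics. All device parameters (I_TH, V_TH, KAPPA, capacitor and bias values) are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope model of weak-inversion operation; all parameters
# (I_TH, V_TH, KAPPA, capacitor and bias values) are illustrative assumptions.
import numpy as np

UT = 0.0258      # thermal voltage at ~300 K [V]
KAPPA = 0.7      # sub-threshold slope factor (assumed)
I_TH = 1e-6      # drain current at Vgs = Vth [A] (assumed)
V_TH = 0.4       # threshold voltage [V] (assumed)

def subthreshold_current(vgs):
    """Saturation drain current of an nMOS in weak inversion (source grounded)."""
    return I_TH * np.exp(KAPPA * (vgs - V_TH) / UT)

def dpi_time_constant(cap, i_tau):
    """First-order (DPI-like) synapse/neuron time constant: tau = C*UT/(kappa*I_tau)."""
    return cap * UT / (KAPPA * i_tau)

if __name__ == "__main__":
    # Gate voltages well below threshold already span the pA-nA range.
    for vgs in (0.0, 0.1, 0.2):
        print(f"Vgs = {vgs:.1f} V -> I = {subthreshold_current(vgs):.2e} A")
    # A few-hundred-fF capacitor biased with a sub-pA current gives time
    # constants of tens of milliseconds, i.e. biologically plausible dynamics.
    tau = dpi_time_constant(cap=500e-15, i_tau=0.5e-12)
    print(f"tau = {tau*1e3:.1f} ms for C = 500 fF, I_tau = 0.5 pA")
```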



Related research

As processes continue to scale aggressively, deep sub-micron mixed-signal design is becoming increasingly challenging. In this paper we present an analysis of scaling multi-core mixed-signal neuromorphic processors to an advanced 28 nm FD-SOI node. We address analog design issues that arise from the use of advanced processes, including large leakage currents and device mismatch, as well as asynchronous digital design issues. We present the outcome of Monte Carlo analysis and circuit simulations of sub-threshold analog/digital neuron circuits which reproduce biologically plausible responses. We describe the Address-Event Representation (AER) circuits used to implement PCHB-based asynchronous quasi-delay-insensitive (QDI) routing processes in multi-core neuromorphic architectures and validate their operation via circuit simulation results. Finally, we describe the implementation of custom 28 nm CAM-based memory resources used in these multi-core neuromorphic processors and discuss the possibility of increasing density by using advanced RRAM devices integrated in the 28 nm Fully-Depleted Silicon-on-Insulator (FD-SOI) process.
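As a hedged illustration of the kind of Monte Carlo mismatch analysis mentioned above, the sketch below propagates a Gaussian threshold-voltage mismatch through the exponential sub-threshold current law across a population of neuron bias transistors; the mismatch figure and nominal bias are placeholders, not numbers from the paper.

```python
# Monte Carlo sketch of sub-threshold mismatch: Vth variation translates
# exponentially into bias-current spread. SIGMA_VTH and the nominal bias
# are illustrative placeholders, not figures from the paper.
import numpy as np

rng = np.random.default_rng(0)

UT, KAPPA = 0.0258, 0.7        # thermal voltage [V], slope factor (assumed)
SIGMA_VTH = 5e-3               # assumed 1-sigma threshold mismatch [V]
I_NOMINAL = 50e-12             # nominal sub-threshold bias current [A]
N_NEURONS = 10_000             # size of the simulated neuron array

dvth = rng.normal(0.0, SIGMA_VTH, N_NEURONS)
i_bias = I_NOMINAL * np.exp(-KAPPA * dvth / UT)   # exponential sensitivity to mismatch

print(f"mean bias current  : {i_bias.mean() * 1e12:.1f} pA")
print(f"relative spread    : {i_bias.std() / i_bias.mean() * 100:.1f} % (sigma/mean)")
print(f"5th-95th percentile: {np.percentile(i_bias, 5) * 1e12:.1f} - "
      f"{np.percentile(i_bias, 95) * 1e12:.1f} pA")
```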
The progress in neuromorphic computing is fueled by the development of novel nonvolatile memories capable of storing analog information and implementing neural computation efficiently. However, like most other analog circuits, these devices and circuits are prone to imperfections, such as temperature dependency, noise, tuning error, etc., often leading to considerable performance degradation in neural network implementations. Indeed, imperfections are major obstacles in the path of further progress and ultimate commercialization of these technologies. Hence, a practically viable approach should be developed to deal with these non-idealities and unleash the full potential of nonvolatile memories in neuromorphic systems. Here, for the first time, we report a comprehensive characterization of critical imperfections in two analog-grade memories, namely passively-integrated memristors and redesigned eFlash memories, which both feature long-term retention, high endurance, analog storage, low-power operation, and compact nano-scale footprint. Then, we propose a holistic approach that includes modifications in the training, tuning algorithm, memory state optimization, and circuit design to mitigate these imperfections. Our proposed methodology is corroborated on a hybrid software/experimental framework using two benchmarks: a moderate-size convolutional neural network and ResNet-18 trained on CIFAR-10 and ImageNet datasets, respectively. Our proposed approaches allow 2.5x to 9x improvements in the energy consumption of memory arrays during inference and sub-percent accuracy drop across the 25-100 °C temperature range. The defect tolerance is improved by >100x, and a sub-percent accuracy drop is demonstrated in deep neural networks built with 64x64 passive memristive crossbars featuring 25% normalized switching threshold variations.
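One of the mitigation ideas described above, injecting the expected device non-idealities into the network's forward pass, can be sketched as follows. The noise model (a per-device relative tuning error plus a global drift term standing in for temperature effects) and its magnitudes are assumptions for illustration, not the paper's characterization data.

```python
# Hedged sketch of non-ideality-aware evaluation: apply relative tuning error
# and a global drift term (a stand-in for temperature effects) to crossbar
# conductances before the vector-matrix multiply. Magnitudes are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def noisy_vmm(x, G, tuning_sigma=0.03, drift=0.0):
    """Crossbar vector-matrix multiply with per-device tuning error and drift."""
    G_eff = G * (1.0 + rng.normal(0.0, tuning_sigma, G.shape)) * (1.0 + drift)
    return x @ G_eff

# Toy layer: 64 inputs -> 16 outputs, conductances normalized to [0, 1].
G = rng.uniform(0.0, 1.0, (64, 16))
x = rng.uniform(0.0, 1.0, 64)
ideal = x @ G

for drift in (0.0, 0.05, 0.10):          # 0 %, 5 % and 10 % conductance drift
    out = noisy_vmm(x, G, drift=drift)
    err = np.abs(out - ideal).mean() / np.abs(ideal).mean()
    print(f"drift {drift:4.0%}: mean relative output error {err:.2%}")
```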
A switched-capacitor (SC) neuromorphic system for closed-loop neural coupling in 28 nm CMOS is presented, occupying 600 µm × 600 µm. It offers 128 input channels (i.e. presynaptic terminals), 8192 synapses and 64 output channels (i.e. neurons). Biologically realistic neuron and synapse dynamics are achieved via a faithful translation of the behavioural equations to SC circuits. As leakage currents significantly affect circuit behaviour at this technology node, dedicated compensation techniques are employed to achieve biological real-time operation, with faithful reproduction of time constants of several hundred milliseconds at room temperature. The power draw of the overall system is 1.9 mW.
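For readers unfamiliar with SC dynamics, the sketch below shows why a switched capacitor acts as a very large equivalent resistor, R_eq = 1/(f_sw*C_sw), and how that yields membrane time constants of the order of 100 ms; the capacitor sizes, switching rate, input and leakage currents are illustrative assumptions, not the implemented design values.

```python
# Switched-capacitor leaky integrator: a capacitor C_SW switched at F_SW acts
# as an equivalent resistor R_eq = 1/(F_SW*C_SW). All values are illustrative
# assumptions, not the implemented design parameters.
C_MEM = 1e-12      # membrane capacitor [F]
C_SW = 10e-15      # switched "leak" capacitor [F]
F_SW = 1000.0      # switching frequency [Hz]
I_LEAK = 10e-15    # residual uncompensated leakage current [A] (assumed)
I_SYN = 5e-12      # constant synaptic input current [A] (assumed)

r_eq = 1.0 / (F_SW * C_SW)
tau = C_MEM * r_eq
print(f"R_eq = {r_eq / 1e9:.0f} GOhm, tau = {tau * 1e3:.0f} ms")

# Discrete-time step response of the membrane voltage, one update per period.
v, dt = 0.0, 1.0 / F_SW
for step in range(int(0.5 / dt)):        # simulate 500 ms
    v += dt * (I_SYN - v / r_eq - I_LEAK) / C_MEM
    if (step + 1) % 100 == 0:
        print(f"t = {(step + 1) * dt * 1e3:3.0f} ms  V_mem = {v:.3f} V")
```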
Conventional neuro-computing architectures and artificial neural networks have often been developed with no or only loose connections to neuroscience. As a consequence, they have largely ignored key features of biological neural processing systems, such as their extremely low power consumption or their ability to carry out robust and efficient computation using massively parallel arrays of limited-precision, highly variable, and unreliable components. Recent developments in nano-technologies are making available extremely compact and low-power, but also variable and unreliable, solid-state devices that can potentially extend the offerings of available CMOS technologies. In particular, memristors are regarded as a promising solution for modeling key features of biological synapses due to their nanoscale dimensions, their capacity to store multiple bits of information per element, and the low energy required to write distinct states. In this paper, we first review the neuro- and neuromorphic-computing approaches that can best exploit the properties of memristive nano-scale devices, and then propose a novel hybrid memristor-CMOS neuromorphic circuit which represents a radical departure from conventional neuro-computing approaches, as it uses memristors to directly emulate the biophysics and temporal dynamics of real synapses. We point out the differences between the use of memristors in conventional neuro-computing architectures and in the hybrid memristor-CMOS circuit proposed here, and argue that this circuit represents an ideal building block for implementing brain-inspired probabilistic computing paradigms that are robust to variability and fault-tolerant by design.
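As a purely illustrative counterpart to the hybrid memristor-CMOS idea sketched above, the toy model below uses a bounded, pulse-programmable conductance as the stored synaptic weight and gives the resulting synaptic current simple first-order temporal dynamics; the device parameters are invented and do not describe any particular memristor or the proposed circuit.

```python
# Toy bounded memristive synapse: programming pulses move the conductance
# between hard bounds, a presynaptic spike injects current proportional to it,
# and the synaptic current then decays with a first-order time constant.
# All device parameters are invented for illustration.
import numpy as np

class MemristiveSynapse:
    def __init__(self, g_min=1e-6, g_max=1e-4, tau=0.020):
        self.g_min, self.g_max, self.tau = g_min, g_max, tau   # [S], [S], [s]
        self.g = g_min        # stored conductance state (the "weight")
        self.i_syn = 0.0      # synaptic current [A]

    def program(self, n_pulses, step=5e-6):
        """Potentiate with positive pulses (negative n depresses), with bounds."""
        self.g = float(np.clip(self.g + n_pulses * step, self.g_min, self.g_max))

    def spike(self, v_pre=0.1):
        """A presynaptic spike adds current proportional to the conductance."""
        self.i_syn += self.g * v_pre

    def decay(self, dt):
        """Exponential decay of the synaptic current over a time step dt."""
        self.i_syn *= np.exp(-dt / self.tau)
        return self.i_syn

syn = MemristiveSynapse()
syn.program(+10)                  # strengthen the device with 10 pulses
syn.spike()                       # one presynaptic spike
for step in range(1, 41):         # 40 ms of decay in 1 ms steps
    i = syn.decay(dt=0.001)
    if step % 10 == 0:
        print(f"t = {step:2d} ms  I_syn = {i * 1e6:.2f} uA")
```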
Neuromorphic computing, inspired by the brain, promises extreme efficiency for certain classes of learning tasks, such as classification and pattern recognition. The performance and power consumption of neuromorphic computing depend heavily on the choice of neuron architecture. Digital neurons (Dig-N) are conventionally known to be accurate and efficient at high speed, while suffering from high leakage currents due to the large number of transistors in a large design. Analog/mixed-signal neurons, on the other hand, are prone to noise, variability and mismatch, but can lead to extremely low-power designs. In this work, we analyze, compare and contrast existing neuron architectures with a proposed mixed-signal neuron (MS-N) in terms of performance, power and noise, thereby demonstrating the applicability of the proposed mixed-signal neuron for achieving extreme energy efficiency in neuromorphic computing. The proposed MS-N is implemented in 65 nm CMOS technology and exhibits >100x better energy efficiency across all frequencies than two traditional digital neurons synthesized in the same technology node. We also demonstrate that the inherent error-resiliency of a fully connected or even convolutional neural network (CNN) can handle the noise as well as the manufacturing non-idealities of the MS-N up to a certain degree. Notably, a system-level implementation on the MNIST dataset exhibits a worst-case increase in classification error of 2.1% when the integrated noise power in the bandwidth is ~0.1 µV², along with ±3σ variation and mismatch introduced in the transistor parameters, for the proposed neuron with 8-bit precision.
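The error-resiliency argument above can be illustrated with a small, hedged experiment: add Gaussian noise to 8-bit-quantized neuron activations and count how often the winning class of a random linear readout changes. The noise level and the toy network are assumptions and are not meant to reproduce the paper's 65 nm MS-N results.

```python
# Toy noise-resilience experiment: Gaussian noise on 8-bit-quantized neuron
# activations versus the argmax decision of a random linear readout. The
# noise level and network size are invented for illustration only.
import numpy as np

rng = np.random.default_rng(7)

def quantize8(x, lo=0.0, hi=1.0):
    """Quantize activations to 256 levels (8 bits) in [lo, hi]."""
    q = np.round((np.clip(x, lo, hi) - lo) / (hi - lo) * 255)
    return lo + q / 255 * (hi - lo)

W = rng.normal(0.0, 1.0, (128, 10))   # random readout: 128 neurons -> 10 classes
N_TRIALS = 2000
flips = 0
for _ in range(N_TRIALS):
    a = rng.uniform(0.0, 1.0, 128)                        # clean activations
    noisy = quantize8(a + rng.normal(0.0, 0.01, 128))     # 1% rms noise (assumed)
    if np.argmax(a @ W) != np.argmax(noisy @ W):
        flips += 1
print(f"decision changed in {100 * flips / N_TRIALS:.1f}% of trials at 1% rms noise")
```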
