
Scaling mixed-signal neuromorphic processors to 28 nm FD-SOI technologies

Published by: Ning Qiao
Publication date: 2019
Research field: Computer Engineering
Paper language: English





As CMOS processes continue to scale aggressively, deep sub-micron mixed-signal design is becoming increasingly challenging. In this paper we present an analysis of scaling multi-core mixed-signal neuromorphic processors to advanced 28 nm FD-SOI nodes. We address the analog design issues that arise from the use of such advanced processes, including large leakage currents and device mismatch, as well as asynchronous digital design issues. We present the outcome of Monte Carlo analyses and circuit simulations of neuromorphic subthreshold analog/digital neuron circuits which reproduce biologically plausible responses. We describe the Address-Event Representation (AER) circuits used to implement Pre-Charge Half-Buffer (PCHB) based asynchronous quasi-delay-insensitive (QDI) routing processes in multi-core neuromorphic architectures and validate their operation via circuit simulation results. Finally, we describe the implementation of custom 28 nm CAM-based memory resources used in these multi-core neuromorphic processors and discuss the possibility of increasing density by using advanced RRAM devices integrated in the 28 nm Fully-Depleted Silicon-on-Insulator (FD-SOI) process.
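To make the leakage and mismatch concerns above concrete, the relations below are the standard weak-inversion current and Pelgrom mismatch models, not expressions taken from the paper: the subthreshold channel current depends exponentially on the gate voltage, so small threshold-voltage variations produce large current errors, and mismatch shrinks only with the square root of device area, which is exactly what becomes hard to manage at 28 nm.

```latex
% Subthreshold (weak-inversion) drain current of an nMOS device:
%   U_T = kT/q ~ 25.8 mV at room temperature, kappa ~ 0.6-0.9 is the slope factor.
I_D \;=\; I_0 \,\frac{W}{L}\, e^{\kappa V_{GS}/U_T}\left(1 - e^{-V_{DS}/U_T}\right)

% Pelgrom mismatch model: threshold-voltage mismatch between two nominally
% identical devices scales inversely with the square root of gate area.
\sigma(\Delta V_T) \;=\; \frac{A_{V_T}}{\sqrt{W L}}

% Because I_D is exponential in V_GS, a threshold mismatch of only a few
% millivolts translates into tens of percent of current mismatch in weak inversion.
```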




Read also

Developing mixed-signal analog-digital neuromorphic circuits in advanced scaled processes poses significant design challenges. We present compact and energy-efficient subthreshold analog synapse and neuron circuits, optimized for a 28 nm FD-SOI process, to implement massively parallel large-scale neuromorphic computing systems. We describe the techniques used for maximizing density with mixed-mode analog/digital synaptic weight configurations, and the methods adopted for minimizing the effect of channel leakage current, in order to implement efficient analog computation based on pA-nA small currents. We present circuit simulation results, based on a new chip that has been recently taped out, to demonstrate how the circuits can be useful both for low-frequency operation in systems that need to interact with the environment in real time, and for high-frequency operation for fast data processing in different types of spiking neural network architectures.
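As a rough illustration of the "pA-nA analog computation" mentioned above, the following is a minimal behavioural sketch of a first-order subthreshold (DPI-style) synapse, where a picoampere-scale bias current and a small capacitor set a biologically plausible time constant. All parameter values are illustrative assumptions, not figures from the abstract.

```python
import numpy as np

# First-order subthreshold synapse sketch: tau = C * U_T / (kappa * I_tau),
# so pA bias currents and pF capacitors yield time constants of several ms.
U_T   = 0.0258        # thermal voltage at room temperature [V]
kappa = 0.7           # subthreshold slope factor (illustrative)
C     = 1e-12         # integration capacitor, 1 pF (illustrative)
I_tau = 5e-12         # leak/bias current, 5 pA (illustrative)
I_w   = 50e-12        # weight current injected per input spike, 50 pA

tau = C * U_T / (kappa * I_tau)              # ~7.4 ms with the values above

dt = 1e-4                                     # 0.1 ms simulation time step
t = np.arange(0.0, 0.5, dt)
spikes = np.random.rand(t.size) < 50 * dt     # ~50 Hz Poisson input train

I_syn = np.zeros(t.size)                      # synaptic output current [A]
for k in range(1, t.size):
    drive = I_w if spikes[k] else 0.0
    I_syn[k] = I_syn[k - 1] + dt * (-I_syn[k - 1] + drive) / tau

print(f"tau = {tau*1e3:.1f} ms, peak I_syn = {I_syn.max()*1e12:.1f} pA")
```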
The progress in neuromorphic computing is fueled by the development of novel nonvolatile memories capable of storing analog information and implementing neural computation efficiently. However, like most other analog circuits, these devices and circuits are prone to imperfections, such as temperature dependency, noise, tuning error, etc., often leading to considerable performance degradation in neural network implementations. Indeed, imperfections are major obstacles in the path of further progress and ultimate commercialization of these technologies. Hence, a practically viable approach should be developed to deal with these nonidealities and unleash the full potential of nonvolatile memories in neuromorphic systems. Here, for the first time, we report a comprehensive characterization of critical imperfections in two analog-grade memories, namely passively-integrated memristors and redesigned eFlash memories, which both feature long-term retention, high endurance, analog storage, low-power operation, and compact nano-scale footprint. Then, we propose a holistic approach that includes modifications in the training, tuning algorithm, memory state optimization, and circuit design to mitigate these imperfections. Our proposed methodology is corroborated on a hybrid software/experimental framework using two benchmarks: a moderate-size convolutional neural network and ResNet-18, trained on the CIFAR-10 and ImageNet datasets, respectively. Our proposed approaches allow 2.5x to 9x improvements in the energy consumption of memory arrays during inference and a sub-percent accuracy drop across the 25-100 °C temperature range. The defect tolerance is improved by >100x, and a sub-percent accuracy drop is demonstrated in deep neural networks built with 64x64 passive memristive crossbars featuring 25% normalized switching threshold variations.
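The "modifications in the training" mentioned above are often realized, in the broader literature, as variation-aware training in which device imperfections are injected into the forward pass so the learned weights stay accurate once programmed onto imperfect memories. The sketch below shows that generic idea on a toy model; it is not the specific algorithm from the paper, and the perturbation model and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(weights, rel_sigma=0.05, drift=0.0):
    """Apply a multiplicative device-variation term plus an additive drift/offset."""
    return weights * rng.normal(1.0, rel_sigma, size=weights.shape) + drift

def forward(x, W):
    return np.tanh(x @ W)

# Toy single-layer regression to show the structure of variation-aware training.
X = rng.normal(size=(256, 16))
y = np.tanh(X @ rng.normal(size=(16, 4)))
W = rng.normal(scale=0.1, size=(16, 4))

lr = 0.05
for epoch in range(200):
    W_dev = perturb(W)                     # weights as the imperfect device would realize them
    pred = forward(X, W_dev)
    err = pred - y
    grad = X.T @ (err * (1 - pred**2)) / X.shape[0]
    W -= lr * grad                         # update the nominal (software) weights

print("final MSE with perturbed weights:",
      float(np.mean((forward(X, perturb(W)) - y) ** 2)))
```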
A switched-capacitor (SC) neuromorphic system for closed-loop neural coupling in 28 nm CMOS is presented, occupying 600 µm by 600 µm. It offers 128 input channels (i.e. presynaptic terminals), 8192 synapses and 64 output channels (i.e. neurons). Biologically realistic neuron and synapse dynamics are achieved via a faithful translation of the behavioural equations to SC circuits. As leakage currents significantly affect circuit behaviour at this technology node, dedicated compensation techniques are employed to achieve biological real-time operation, with faithful reproduction of time constants of several hundred milliseconds at room temperature. Power draw of the overall system is 1.9 mW.
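The long time constants quoted above follow from the standard switched-capacitor equivalent-resistance relation; the worked numbers below are illustrative assumptions, not values reported in the abstract.

```latex
% A capacitor C_s switched at frequency f_s emulates a resistor R_eq = 1/(f_s C_s),
% so a leaky integrator with membrane capacitor C_mem has time constant
R_{eq} = \frac{1}{f_s C_s}, \qquad
\tau = R_{eq}\, C_{mem} = \frac{C_{mem}}{f_s C_s}

% Example (assumed values): C_mem = 1 pF, C_s = 1 fF, f_s = 10 kHz gives
%   tau = 1e-12 / (1e4 * 1e-15) = 0.1 s,
% i.e. a 100 ms biological-scale time constant from on-chip capacitances,
% which is why leakage compensation at 28 nm is essential to preserve it.
```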
This work presents the design and analysis of a mixed-signal neuron (MS-N) for convolutional neural networks (CNN) and compares its performance with a digital neuron (Dig-N) in terms of operating frequency, power and noise. The circuit-level implementation of the MS-N in 65 nm CMOS technology exhibits 2-3 orders of magnitude better energy-efficiency over the Dig-N for neuromorphic computing applications, especially at low frequencies, due to the high leakage currents from the many transistors in the Dig-N. The inherent error-resiliency of the CNN is exploited to handle the thermal and flicker noise of the MS-N. A system-level analysis using a cohesive circuit-algorithmic framework on the MNIST and CIFAR-10 datasets demonstrates an increase of 3% in worst-case classification error for MNIST when the integrated noise power in the bandwidth is ~1 µV².
Neuromorphic computing, inspired by the brain, promises extreme efficiency for certain classes of learning tasks, such as classification and pattern recognition. The performance and power consumption of neuromorphic computing depend heavily on the choice of the neuron architecture. Digital neurons (Dig-N) are conventionally known to be accurate and efficient at high speed, while suffering from high leakage currents due to the large number of transistors in a large design. On the other hand, analog/mixed-signal neurons are prone to noise, variability and mismatch, but can lead to extremely low-power designs. In this work, we analyze, compare and contrast existing neuron architectures with a proposed mixed-signal neuron (MS-N) in terms of performance, power and noise, thereby demonstrating the applicability of the proposed mixed-signal neuron for achieving extreme energy-efficiency in neuromorphic computing. The proposed MS-N is implemented in 65 nm CMOS technology and exhibits >100x better energy-efficiency across all frequencies over two traditional digital neurons synthesized in the same technology node. We also demonstrate that the inherent error-resiliency of a fully connected or even convolutional neural network (CNN) can handle the noise as well as the manufacturing non-idealities of the MS-N up to certain degrees. Notably, a system-level implementation on the MNIST dataset exhibits a worst-case increase in classification error of 2.1% when the integrated noise power in the bandwidth is ~0.1 µV², along with ±3σ variation and mismatch introduced in the transistor parameters for the proposed neuron with 8-bit precision.
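The noise-resiliency studies in the two abstracts above boil down to re-evaluating a trained classifier while Gaussian noise is injected at the neuron outputs and sweeping the noise power. The sketch below shows that experimental structure on a toy linear classifier with synthetic data; the actual studies use CNNs on MNIST/CIFAR-10, and all names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class data and a quickly fitted linear classifier.
X = rng.normal(size=(2000, 32))
w_true = rng.normal(size=32)
y = (X @ w_true > 0).astype(int)
w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]   # least-squares fit to +/-1 labels

def error_rate(noise_sigma):
    """Classification error when the pre-activation outputs carry added Gaussian noise,
    mimicking thermal/flicker noise at a mixed-signal neuron output."""
    z = X @ w + rng.normal(0.0, noise_sigma, size=X.shape[0])
    return float(np.mean((z > 0).astype(int) != y))

# Sweep the injected noise level and report the resulting error.
for sigma in [0.0, 0.1, 0.3, 1.0]:
    print(f"noise sigma = {sigma:4.1f}  ->  error = {error_rate(sigma):.3f}")
```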
