An analog synapse circuit based on ferroelectric-metal field-effect transistors (FeMFETs) is proposed that offers 6-bit weight precision. The circuit combines volatile least significant bits (LSBs), used solely during training, with non-volatile most significant bits (MSBs), used for both training and inference. The design operates at a logic-compatible 1.8 V, provides 10^10 endurance cycles, and requires update pulses of only 250 ps. A variant of LeNet trained with the proposed synapse achieves 98.2% accuracy on MNIST, only 0.4% lower than an ideal implementation of the same network at the same bit precision. Furthermore, compared to state-of-the-art hybrid synaptic circuits, the proposed synapse offers improvements of up to 26% in area, 44.8% in leakage power, and 16.7% in LSB update pulse duration, along with two orders of magnitude more endurance cycles. Our proposed synapse can be extended to an 8-bit design, enabling a VGG-like network to achieve 88.8% accuracy on CIFAR-10 (only 0.8% lower than an ideal implementation of the same network).
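As a rough illustration of the hybrid weight scheme described in this abstract, the following Python sketch models a 6-bit weight whose volatile LSBs absorb frequent training updates while the non-volatile MSBs hold the inference weight. The 3 MSB + 3 LSB split, the integer update granularity, and the carry behavior are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical 3 MSB + 3 LSB split of a 6-bit synaptic weight (assumption).
N_MSB, N_LSB = 3, 3
LSB_LEVELS = 2 ** N_LSB                 # 8 volatile levels
MAX_CODE = 2 ** (N_MSB + N_LSB) - 1     # full 6-bit range

class HybridSynapse:
    """6-bit weight: non-volatile MSBs plus volatile LSBs (training only)."""

    def __init__(self, code=0):
        self.msb = code // LSB_LEVELS   # modeled as non-volatile FeMFET state
        self.lsb = code % LSB_LEVELS    # modeled as a volatile element

    def update(self, delta):
        """Apply a small training update to the LSBs, carrying into the MSBs."""
        total = self.msb * LSB_LEVELS + self.lsb + delta
        total = int(np.clip(total, 0, MAX_CODE))
        self.msb, self.lsb = divmod(total, LSB_LEVELS)

    def training_weight(self):
        return self.msb * LSB_LEVELS + self.lsb

    def inference_weight(self):
        """At inference the volatile LSBs are gone; only the MSBs remain."""
        return self.msb * LSB_LEVELS

# Example: frequent cheap LSB updates, with an occasional carry into the MSBs.
w = HybridSynapse(code=20)
for _ in range(5):
    w.update(+3)
print(w.training_weight(), w.inference_weight())  # 35 32
```

In this model, discarding the volatile LSBs at inference simply truncates the trained weight to its MSB value, which is one plausible reading of why the reported accuracy loss versus an ideal fixed-point network stays small.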
This paper presents an implementation of multilayer feed-forward neural networks (NNs) to optimize CMOS analog circuits. Neural-network computational modules have recently gained acceptance as an unorthodox yet useful tool for modeling and design. To ach…
Memristive crossbars aim to implement analog weighted neural networks; however, realistic implementation of such crossbar arrays is hampered by the limited switching states of memristive devices. In this work, we propose the design of an a…
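The "limited switching states" constraint mentioned above can be made concrete with a short sketch that snaps a real-valued weight matrix onto a small set of discrete conductance levels; the 16-level device with linearly spaced states is a hypothetical model, not a detail from the paper.

```python
import numpy as np

def quantize_to_levels(w, n_levels=16, g_min=-1.0, g_max=1.0):
    """Map real-valued weights onto n_levels equally spaced conductance
    states, mimicking a memristive device with limited switching states.
    The 16-level linear device model is an illustrative assumption."""
    levels = np.linspace(g_min, g_max, n_levels)
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)  # nearest level
    return levels[idx]

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=(4, 4))      # ideal analog weights
wq = quantize_to_levels(w)               # what the device can actually store
print("max quantization error:", np.abs(w - wq).max())
```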
The development of memristive device technologies has reached a level of maturity that enables the design of complex, large-scale hybrid memristive-CMOS neural processing systems. These systems offer promising solutions for implementing novel in-memo…
The superior density of passive analog-grade memristive crossbars may enable storing large synaptic weight matrices directly on specialized neuromorphic chips, thus avoiding costly off-chip communication. To ensure efficient use of such crossbars in…
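The appeal of storing weight matrices directly in a crossbar comes from analog matrix-vector multiplication: with voltages applied to the rows and weights stored as conductances, each column current is a weighted sum by Ohm's and Kirchhoff's laws. A minimal sketch, assuming an ideal crossbar with no wire resistance, sneak paths, or device non-idealities:

```python
import numpy as np

def crossbar_mvm(G, v):
    """Ideal crossbar matrix-vector product: column currents I = G^T v,
    where G[i, j] is the conductance (in S) at row i, column j and v[i]
    is the voltage (in V) applied to row i. Ignores wire resistance and
    device non-idealities (illustrative assumption)."""
    return G.T @ v

G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.3, 0.1]])     # 3 rows x 2 columns of conductances
v = np.array([0.1, 0.2, 0.3])  # row voltages
print(crossbar_mvm(G, v))      # column currents, computed in one analog step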
Training of deep neural networks (DNNs) is a computationally intensive task that requires massive volumes of data transfer. Performing these operations on conventional von Neumann architectures incurs unmanageable time and power costs. Recent s…