
SHE-MTJ Circuits for Convolutional Neural Networks

Published by Andrew Stephan
Publication date: 2020
Research language: English





We report the performance characteristics of a notional Convolutional Neural Network based on the previously proposed Multiply-Accumulate-Activate-Pool (MAAP) set, an MTJ-based spintronic circuit designed to compute multiple neural functionalities in parallel. A simulation study of image classification on the MNIST handwritten-digit dataset using this network is provided. The effects of varying the weight-representation precision, the severity of device process variation within the MAAP sets, and the degree of computational redundancy are reported. The emulated network achieves between 90% and 95% image classification accuracy at a cost of ~100 nJ per image.
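As a rough illustration of the kind of sweep described above, here is a minimal NumPy sketch that quantizes a toy convolution kernel to a given weight precision and perturbs it with a multiplicative-Gaussian device-variation model; the function names and the noise model are assumptions for illustration, not the paper's actual simulation setup.

```python
import numpy as np

def quantize(weights, bits):
    # Symmetric uniform quantization to the given bit precision.
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / levels
    return np.round(weights / scale) * scale

def apply_variation(weights, sigma, rng):
    # Hypothetical process-variation model: multiplicative Gaussian
    # noise on each device weight (an assumption, not the paper's model).
    return weights * rng.normal(1.0, sigma, size=weights.shape)

rng = np.random.default_rng(0)
kernel = rng.normal(0.0, 0.5, size=(3, 3))  # toy 3x3 convolution kernel
for bits in (2, 4, 8):
    perturbed = apply_variation(quantize(kernel, bits), sigma=0.05, rng=rng)
    print(f"{bits}-bit weights, mean |error| = {np.abs(perturbed - kernel).mean():.4f}")
```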




Read also

We propose a new network architecture for standard spin-Hall magnetic tunnel junction-based spintronic neurons that allows them to compute multiple critical convolutional neural network functionalities simultaneously and in parallel, saving space and time. An approximation to the Rectified Linear Unit transfer function and the local pooling function are computed simultaneously with the convolution operation itself. A proof-of-concept simulation is performed on the MNIST dataset, achieving up to 98% accuracy at a cost of less than 1 nJ for all convolution, activation and pooling operations combined. The simulations are remarkably robust to thermal noise, performing well even with very small magnetic layers.
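For reference, the fused functionality described above (convolution, a ReLU-style activation, and local max pooling) corresponds to the following behavioral sketch in NumPy; it models the operation the circuit computes, not the spintronic hardware itself.

```python
import numpy as np

def conv_relu_maxpool(image, kernel):
    # Software reference for the fused operation: 'valid' convolution,
    # ReLU-style activation, then 2x2 non-overlapping max pooling.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    conv = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            conv[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    act = np.maximum(conv, 0.0)              # ReLU approximation
    act = act[: oh // 2 * 2, : ow // 2 * 2]  # trim to even dimensions
    return act.reshape(oh // 2, 2, ow // 2, 2).max(axis=(1, 3))
```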
Convolutional Neural Networks (CNNs) are a class of Artificial Neural Networks (ANNs) that employ the method of convolving input images with filter kernels for object recognition and classification purposes. In this paper, we propose a photonics circuit architecture which could consume a fraction of the energy per inference compared with state-of-the-art electronics.
Neural networks are currently transforming the field of computer algorithms, yet their emulation on current computing substrates is highly inefficient. Reservoir computing was successfully implemented on a large variety of substrates and gave new insight into overcoming this implementation bottleneck. Despite its success, the approach lags behind the state of the art in deep learning. We therefore extend time-delay reservoirs to deep networks and demonstrate that these conceptually correspond to deep convolutional neural networks. Convolution is intrinsically realized on a substrate level by generic drive-response properties of dynamical systems. The resulting novelty is the avoidance of vector-matrix products between layers, which cause low efficiency in today's substrates. Compared to singleton time-delay reservoirs, our deep network achieves accuracy improvements of at least an order of magnitude in Mackey-Glass and Lorenz time-series prediction.
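A minimal sketch of the single-node time-delay reservoir that serves as the building block here; the tanh nonlinearity, the random input mask, and the parameter names are illustrative assumptions rather than the authors' exact model.

```python
import numpy as np

def delay_reservoir(inputs, n_virtual=50, feedback=0.5, gain=0.1, seed=0):
    # One nonlinear node with delayed feedback: each of the n_virtual
    # 'virtual nodes' mixes the masked input with its own delayed state.
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1.0, 1.0, n_virtual)  # fixed random input mask
    state = np.zeros(n_virtual)
    history = []
    for u in inputs:
        state = np.tanh(gain * mask * u + feedback * state)
        history.append(state.copy())
    return np.array(history)  # features for a trained linear readout
```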
Undersampling the k-space data is widely adopted for acceleration of Magnetic Resonance Imaging (MRI). Current deep learning based approaches for supervised learning of MRI image reconstruction employ real-valued operations and representations by treating complex-valued k-space/spatial-space data as real values. In this paper, we propose a complex dense fully convolutional neural network ($\mathbb{C}$DFNet) for learning to de-alias the reconstruction artifacts within undersampled MRI images. We fashioned a densely-connected fully convolutional block tailored for complex-valued inputs by introducing dedicated layers such as complex convolution, batch normalization, non-linearities, etc. $\mathbb{C}$DFNet leverages the inherently complex-valued nature of input k-space and learns richer representations. We demonstrate improved perceptual quality and recovery of anatomical structures through $\mathbb{C}$DFNet in contrast to its real-valued counterparts.
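The complex convolution layer mentioned above is commonly built from four real convolutions via (x_r + i x_i)(w_r + i w_i) = (x_r w_r - x_i w_i) + i(x_r w_i + x_i w_r); here is a sketch of that decomposition using SciPy (the paper's own implementation may differ).

```python
import numpy as np
from scipy.signal import convolve2d

def complex_conv2d(x, w):
    # Complex 2-D convolution expressed as four real convolutions.
    real = convolve2d(x.real, w.real, "valid") - convolve2d(x.imag, w.imag, "valid")
    imag = convolve2d(x.real, w.imag, "valid") + convolve2d(x.imag, w.real, "valid")
    return real + 1j * imag

# Toy usage on random complex-valued data:
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
w = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
print(complex_conv2d(x, w).shape)  # (6, 6)
```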
This paper presents a physics-based modeling framework for the analysis and transient simulation of circuits containing Spin-Transfer Torque (STT) Magnetic Tunnel Junction (MTJ) devices. The framework provides the tools to analyze the stochastic behavior of MTJs and to generate Verilog-A compact models for their simulation in large VLSI designs, addressing the need for an industry-ready model accounting for real-world reliability and scalability requirements. Device dynamics are described by the stochastic Landau-Lifshitz-Gilbert-Slonczewski (s-LLGS) magnetization equation, considering Voltage-Controlled Magnetic Anisotropy (VCMA) and the non-negligible statistical effects caused by thermal noise. Model behavior is validated against the OOMMF magnetic simulator, and its performance is characterized on a 1-Mb 28 nm Magnetoresistive RAM (MRAM) memory product.
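For orientation, here is a minimal explicit-Euler sketch of stochastic LLG magnetization dynamics in Landau-Lifshitz form; the Slonczewski spin-torque and VCMA terms the framework models are omitted, and quantitative work would need a proper stochastic integrator (e.g. Heun) with sqrt(dt) noise scaling.

```python
import numpy as np

GAMMA = 1.76e11  # electron gyromagnetic ratio, rad/(s*T)
ALPHA = 0.01     # Gilbert damping constant (illustrative value)

def llg_step(m, h_eff, dt, sigma_th, rng):
    # One explicit-Euler step of
    #   dm/dt = -g/(1+a^2) * [m x H + a * m x (m x H)],
    # with thermal agitation modeled as a random field added to H_eff.
    h = h_eff + sigma_th * rng.normal(size=3)  # thermal field (simplified)
    prec = np.cross(m, h)        # precession term, m x H
    damp = np.cross(m, prec)     # damping term, m x (m x H)
    m = m + (-GAMMA / (1 + ALPHA**2)) * (prec + ALPHA * damp) * dt
    return m / np.linalg.norm(m)  # renormalize: |m| = 1
```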