We propose a new network architecture for standard spin-Hall magnetic tunnel junction-based spintronic neurons that allows them to compute multiple critical convolutional neural network functionalities simultaneously and in parallel, saving space and time. An approximation to the Rectified Linear Unit transfer function and the local pooling function are computed simultaneously with the convolution operation itself. A proof-of-concept simulation is performed on the MNIST dataset, achieving up to 98% accuracy at a cost of less than 1 nJ for all convolution, activation and pooling operations combined. The simulations are remarkably robust to thermal noise, performing well even with very small magnetic layers.
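To make the fused operation concrete, here is a minimal NumPy sketch of a MAAP-style layer in which the multiply-accumulate (convolution), the activation, and the local pooling are evaluated in a single pass. The function name `maap_layer`, the hard-clipped linear transfer used as the ReLU approximation, and the 2×2 non-overlapping max-pooling window are illustrative assumptions, not details taken from the paper; the sketch models only the layer's input/output behavior, not the spintronic circuit itself.

```python
import numpy as np

def maap_layer(x, kernel, sat=1.0, pool=2):
    """Fused multiply-accumulate / activate / pool pass (hypothetical model).

    x      : 2-D input map (H, W)
    kernel : 2-D convolution kernel (kH, kW)
    sat    : saturation level of the clipped-linear (ReLU-like) transfer
    pool   : side length of the square max-pooling window
    """
    kh, kw = kernel.shape
    h = x.shape[0] - kh + 1
    w = x.shape[1] - kw + 1

    # Multiply-accumulate: valid 2-D cross-correlation.
    conv = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            conv[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)

    # Activate: hard-clipped linear transfer, a common circuit-level
    # approximation to ReLU (an assumption, not the paper's exact curve).
    act = np.clip(conv, 0.0, sat)

    # Pool: non-overlapping max pooling over pool x pool windows.
    h2, w2 = h // pool, w // pool
    act = act[:h2 * pool, :w2 * pool]
    return act.reshape(h2, pool, w2, pool).max(axis=(1, 3))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((28, 28))   # MNIST-sized input
    k = rng.standard_normal((3, 3)) * 0.1
    print(maap_layer(img, k).shape)       # (13, 13): 26x26 conv map, 2x2 pool
```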
We report the performance characteristics of a notional convolutional neural network based on the previously proposed Multiply-Accumulate-Activate-Pool set, an MTJ-based spintronic circuit designed to compute multiple neural functionalities in parallel.
Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their ability to relearn and adapt to new data. Memory-augmented neural networks enhance neural networks …
We propose a dedicated winner-take-all circuit to efficiently implement the intra-column competition between cells in Hierarchical Temporal Memory, a crucial part of the algorithm. All inputs and outputs are charge-based for compatibility with …
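As a behavioral reference for the competition the circuit implements, the following is a small NumPy sketch of intra-column winner-take-all. The function name and the k-winner generalization are assumptions for illustration; the sketch models only the input/output behavior of the competition, not the charge-based circuit.

```python
import numpy as np

def intra_column_wta(activations, k=1):
    """Winner-take-all over the cells of one HTM column (illustrative model).

    activations : 1-D array of cell input strengths for the column
    k           : number of winners allowed per column (k=1 is strict WTA)
    Returns a binary vector with ones at the k strongest cells.
    """
    winners = np.argsort(activations)[-k:]   # indices of the k largest inputs
    out = np.zeros_like(activations, dtype=np.uint8)
    out[winners] = 1
    return out

# Example: competition among 4 cells in each of 3 columns.
rng = np.random.default_rng(1)
columns = rng.random((3, 4))
print(np.stack([intra_column_wta(c) for c in columns]))
```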
In the last few years, deep learning has led to strong performance on a variety of problems, such as visual recognition, speech recognition, and natural language processing. Among the different types of deep neural networks, convolutional neural networks …
Convolutional neural networks trained without supervision come close to matching the performance of supervised pre-training, but sometimes at the cost of an even higher number of parameters. Extracting subnetworks from these large unsupervised convnets …
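Since the abstract is cut off before describing the extraction procedure, the following NumPy sketch shows only a generic, commonly used baseline for extracting a subnetwork from a trained convnet: global magnitude pruning. The function name `extract_subnetwork`, the `keep` ratio, and the thresholding rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def extract_subnetwork(weights, keep=0.1):
    """Generic magnitude-based subnetwork extraction (illustrative only).

    weights : list of weight arrays from a trained (e.g. unsupervised) convnet
    keep    : fraction of weights to retain, ranked globally by magnitude
    Returns binary masks selecting the retained subnetwork.
    """
    flat = np.concatenate([np.abs(w).ravel() for w in weights])
    thresh = np.quantile(flat, 1.0 - keep)   # global magnitude threshold
    return [(np.abs(w) >= thresh).astype(np.uint8) for w in weights]

# Example: keep the strongest 10% of weights across two conv layers.
rng = np.random.default_rng(2)
layers = [rng.standard_normal((8, 3, 3, 3)), rng.standard_normal((16, 8, 3, 3))]
masks = extract_subnetwork(layers)
print([m.mean() for m in masks])  # per-layer fraction of surviving weights
```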