The advent of deep learning has considerably accelerated machine learning development, but the deployment of deep neural networks at the edge is limited by their high memory and energy requirements. With new memory technologies becoming available, emerging Binarized Neural Networks (BNNs) promise to reduce the energy impact of the forthcoming machine learning hardware generation, enabling machine learning on edge devices and avoiding data transfer over the network. In this work, after presenting our implementation employing a hybrid CMOS/hafnium oxide resistive memory technology, we suggest strategies to apply BNNs to biomedical signals such as electrocardiography and electroencephalography, maintaining accuracy while reducing memory requirements. We investigate the memory-accuracy trade-off of binarizing the whole network versus binarizing only the classifier part. We also discuss how these results translate to the edge-oriented MobileNet V1 neural network on the ImageNet task. The final goal of this research is to enable smart autonomous healthcare devices.
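To make the whole-network versus classifier-only trade-off concrete, here is a minimal numpy sketch (the toy two-stage network, layer sizes, and the `binarize` helper are illustrative assumptions, not the implementation described above): binarized weights cost 1 bit each instead of 32, so binarizing only the classifier trades a smaller memory saving for a smaller accuracy risk.

```python
import numpy as np

def binarize(w):
    """Map values to {-1, +1} with the sign function (ties at 0 go to +1)."""
    return np.where(w >= 0, 1.0, -1.0).astype(np.float32)

rng = np.random.default_rng(0)
w_feat = rng.standard_normal((64, 16)).astype(np.float32)  # feature-extractor weights
w_clf = rng.standard_normal((16, 4)).astype(np.float32)    # classifier weights

def forward(x, binarize_features, binarize_classifier):
    """Toy two-stage forward pass; each stage can be full precision or binary."""
    wf = binarize(w_feat) if binarize_features else w_feat
    wc = binarize(w_clf) if binarize_classifier else w_clf
    h = x @ wf
    if binarize_classifier:   # a binary classifier stage expects +/-1 inputs
        h = binarize(h)
    return h @ wc             # classifier logits

x = rng.standard_normal((1, 64)).astype(np.float32)
logits_full = forward(x, False, False)  # full-precision baseline (32 bits/weight)
logits_clf = forward(x, False, True)    # classifier-only binarization
logits_bnn = forward(x, True, True)     # fully binarized network (1 bit/weight)
```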
One of the most exciting applications of Spin Torque Magnetoresistive Random Access Memory (ST-MRAM) is the in-memory implementation of deep neural networks, which could improve the energy efficiency of Artificial Intelligence by orders of magnitude with regard to its implementation on computers and graphics cards. In particular, ST-MRAM could be ideal for implementing Binarized Neural Networks (BNNs), a type of deep neural network introduced in 2016, which can achieve state-of-the-art performance with a highly reduced memory footprint compared to conventional artificial intelligence approaches. The challenge of ST-MRAM, however, is that it is prone to write errors and usually requires error correction. In this work, we show that these bit errors can be tolerated by BNNs to an outstanding level, based on examples of image recognition tasks (MNIST, CIFAR-10 and ImageNet): ST-MRAM bit error rates of up to 0.1% have little impact on recognition accuracy. The requirements for ST-MRAM are therefore considerably relaxed for BNNs with regard to traditional applications. Consequently, we show that for BNNs, ST-MRAMs can be programmed with weak (low-energy) programming conditions, without error correcting codes. We show that this allows the use of low-energy and low-area ST-MRAM cells, and that the energy savings at the system level can reach a factor of two.
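The kind of bit-error tolerance studied here can be emulated in simulation by flipping binarized weights at a given rate before running inference. Below is a minimal sketch, assuming numpy and a hypothetical `inject_bit_errors` helper (not the paper's code):

```python
import numpy as np

def inject_bit_errors(w_bin, ber, rng):
    """Flip each binary weight (+1 <-> -1) independently with probability `ber`,
    emulating ST-MRAM write errors under weak programming conditions."""
    flips = rng.random(w_bin.shape) < ber
    return np.where(flips, -w_bin, w_bin)

rng = np.random.default_rng(0)
w = np.where(rng.standard_normal((1000, 100)) >= 0, 1.0, -1.0)  # binarized weights
w_noisy = inject_bit_errors(w, ber=1e-3, rng=rng)               # 0.1% bit error rate
print("fraction flipped:", np.mean(w != w_noisy))               # ~0.001
```

Running inference with `w_noisy` in place of `w` over many random draws gives the accuracy-versus-bit-error-rate curves that motivate the relaxed programming conditions.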
The brain performs intelligent tasks with extremely low energy consumption. This work takes inspiration from two strategies used by the brain to achieve this energy efficiency: the absence of separation between computing and memory functions, and reliance on low-precision computation. The emergence of resistive memory technologies provides an opportunity to tightly co-integrate logic and memory in hardware. In parallel, the recently proposed concept of the Binarized Neural Network, where multiplications are replaced by exclusive NOR (XNOR) logic gates, offers a way to implement artificial intelligence using very low precision computation. In this work, we therefore propose a strategy to implement low-energy Binarized Neural Networks that employs brain-inspired concepts while retaining the energy benefits of digital electronics. We design, fabricate and test a memory array, including periphery and sensing circuits, optimized for this in-memory computing scheme. Our circuit employs hafnium oxide resistive memory integrated in the back end of line of a 130 nanometer CMOS process, in a two-transistor, two-resistor cell, which allows the exclusive NOR operations of the neural network to be performed directly within the sense amplifiers. We show, based on extensive electrical measurements, that our design reduces the number of bit errors on the synaptic weights without the use of formal error correcting codes. We design a whole system using this memory array. We show on standard machine learning tasks (MNIST, CIFAR-10, ImageNet and an ECG task) that the system has an inherent resilience to bit errors. We demonstrate that its energy consumption is attractive compared to more standard approaches, and that it can use the memory devices in regimes where they exhibit particularly low programming energy and high endurance.
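The substitution of multiplications by XNOR gates rests on a standard identity for +/-1 dot products: encoding +1 as bit 1 and -1 as bit 0, sum_i a_i * w_i = n - 2 * popcount(a XOR w). The snippet below verifies this numerically; it is a generic illustration of the identity, not the fabricated circuit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
a_bits = rng.integers(0, 2, n)   # activation bits (1 -> +1, 0 -> -1)
w_bits = rng.integers(0, 2, n)   # weight bits

# Arithmetic +/-1 dot product versus its XNOR/popcount equivalent:
# each pair contributes +1 when the bits agree (XNOR true) and -1 otherwise,
# so the sum is (n - d) - d = n - 2d, where d counts disagreements.
dot_pm1 = np.sum((2 * a_bits - 1) * (2 * w_bits - 1))
dot_xnor = n - 2 * np.sum(a_bits ^ w_bits)
assert dot_pm1 == dot_xnor
```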
Binarized Neural Networks, a recently introduced class of neural networks with minimal memory requirements and no reliance on multiplication, are a fantastic opportunity for the realization of compact and energy-efficient inference hardware. However, such neural networks are generally not entirely binarized: their first layer still takes fixed-point inputs. In this work, we propose a stochastic computing version of Binarized Neural Networks, where the input is also binarized. Simulations on the Fashion-MNIST and CIFAR-10 datasets show that such networks can approach the performance of conventional Binarized Neural Networks. We show that the training procedure should be adapted for use with stochastic computing. Finally, we investigate the ASIC implementation of our scheme, in a system that closely associates logic and memory, implemented by Spin Torque Magnetoresistive Random Access Memory. This analysis shows that the stochastic computing approach can allow considerable area savings with regard to conventional Binarized Neural Networks (62% area reduction on the Fashion-MNIST task). It can also allow important energy savings if a modest reduction in accuracy is accepted: for example, a factor of 2.1 can be saved at the cost of 1.4% in Fashion-MNIST test accuracy. These results highlight the high potential of Binarized Neural Networks for hardware implementation, and show that adapting them to hardware constraints can provide important benefits.
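A common way to binarize first-layer inputs in a stochastic computing setting is bipolar bitstream encoding, where a value in [-1, 1] becomes a +/-1 stream whose time average approximates it. The sketch below is a generic illustration of this idea (the function name and stream length are assumptions, not the paper's scheme):

```python
import numpy as np

def stochastic_encode(x, length, rng):
    """Encode values x in [-1, 1] as +/-1 bitstreams whose mean approximates x
    (bipolar stochastic computing)."""
    p = (x + 1.0) / 2.0                          # probability of emitting +1
    bits = rng.random((length,) + x.shape) < p   # one Bernoulli draw per time step
    return np.where(bits, 1.0, -1.0)

rng = np.random.default_rng(0)
x = np.array([0.5, -0.25, 0.0])
stream = stochastic_encode(x, length=1024, rng=rng)
print(stream.mean(axis=0))   # approaches [0.5, -0.25, 0.0] as length grows
```

Longer streams reduce encoding noise but cost more cycles and energy, which is the accuracy-energy trade-off quantified above.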
Neural networks are currently transforming the field of computer algorithms, yet their emulation on current computing substrates is highly inefficient. Reservoir computing has been successfully implemented on a large variety of substrates and has given new insight into overcoming this implementation bottleneck. Despite its success, the approach lags behind the state of the art in deep learning. We therefore extend time-delay reservoirs to deep networks and demonstrate that these conceptually correspond to deep convolutional neural networks. Convolution is intrinsically realized at the substrate level by generic drive-response properties of dynamical systems. The resulting novelty is the avoidance of vector-matrix products between layers, which cause low efficiency in today's substrates. Compared to singleton time-delay reservoirs, our deep network improves accuracy by at least an order of magnitude in Mackey-Glass and Lorenz time-series prediction.
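For context, a singleton time-delay reservoir can be sketched in a few lines: a single nonlinear node driven by a masked input, with virtual nodes read along the delay line and a ridge-regression readout. The code below is an illustrative baseline on Mackey-Glass one-step prediction; all parameters (number of virtual nodes, gains, mask values) are assumptions, and the deep extension described above is not implemented here:

```python
import numpy as np

def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, dt=1.0, seed=0):
    """Generate a Mackey-Glass time series by Euler integration."""
    rng = np.random.default_rng(seed)
    x = np.concatenate([1.2 + 0.1 * rng.standard_normal(tau), np.zeros(n)])
    for t in range(tau, tau + n):
        x[t] = x[t-1] + dt * (beta * x[t-tau] / (1 + x[t-tau]**10) - gamma * x[t-1])
    return x[tau:]

u = mackey_glass(3000)
J, eta, kappa = 50, 0.5, 0.8                    # virtual nodes, input/feedback gains
rng = np.random.default_rng(1)
mask = rng.choice([-0.1, 0.1], size=J)          # random binary input mask

states = np.zeros((len(u), J))
x_prev = np.zeros(J)
for k in range(len(u)):
    x = np.empty(J)
    for j in range(J):                          # sweep virtual nodes along the loop
        coupling = x[j-1] if j > 0 else x_prev[-1]  # neighbor via the delay line
        x[j] = np.tanh(eta * mask[j] * u[k] + kappa * coupling)
    states[k], x_prev = x, x

# One-step-ahead prediction via ridge regression on the reservoir states.
X, y = states[200:-1], u[201:]                  # discard a washout transient
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(J), X.T @ y)
print("train NRMSE:", np.sqrt(np.mean((X @ w - y)**2)) / np.std(y))
```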