Deep neural networks are a biologically inspired class of algorithms that have recently demonstrated state-of-the-art accuracy on large-scale classification and recognition tasks. Indeed, a major landmark enabling efficient hardware accelerators for deep networks is the recent demonstration, by the machine learning community, of aggressively scaled deep binary networks with state-of-the-art accuracies. In this paper, we demonstrate how deep binary networks can be accelerated in modified von Neumann machines by enabling binary convolutions within the SRAM array. In general, a binary convolution consists of a bit-wise XNOR followed by a population count (popcount). We present a charge-sharing XNOR and popcount operation in 10-transistor (10T) SRAM cells. We employ multiple circuit techniques, including dual read-wordlines (Dual-RWL) along with a dual-stage ADC that overcomes the inaccuracies of a low-precision ADC, to achieve a fairly accurate popcount. In addition, a key highlight of the present work is that we propose sectioning the SRAM array by adding switches onto the read-bitlines, thereby achieving improved parallelism. This is beneficial for deep networks, where the kernels grow large and must be stored in multiple sub-banks; one then needs to evaluate the partial popcounts from the sub-banks and sum them to obtain the final popcount. With n sections per sub-array, we can perform n convolutions within one particular sub-bank, improving both overall system throughput and energy efficiency. Our array-level results show an energy consumption and delay of 1.914 pJ and 45 ns per operation, respectively. Moreover, the proposed sectioned SRAM achieves a 2.5x energy improvement and a 4x performance improvement compared to a non-sectioned SRAM design.
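To make the XNOR-and-popcount arithmetic concrete, the following minimal Python sketch emulates a binary convolution in software, including the summation of partial popcounts across sections. It assumes +1/-1 values bit-packed into integers (bit 1 encodes +1, bit 0 encodes -1); all function names are illustrative, not taken from the paper.

    def popcount(x: int) -> int:
        # Count set bits; in the paper this count is accumulated by
        # charge sharing on the read-bitlines and digitized by the ADC.
        return bin(x).count("1")

    def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
        # XNOR marks the positions where activation and weight agree.
        # With m agreements out of n elements, the {-1,+1} dot product
        # is m - (n - m) = 2*m - n.
        mask = (1 << n) - 1                       # keep only n valid bits
        m = popcount(~(a_bits ^ w_bits) & mask)   # bit-wise XNOR + popcount
        return 2 * m - n

    def sectioned_binary_dot(a_bits: int, w_bits: int, n: int, sections: int) -> int:
        # Emulates the sectioned read-bitlines: each section yields a
        # partial popcount, and the partials are summed to obtain the
        # final popcount (n is assumed divisible by `sections`).
        width = n // sections
        total = 0
        for s in range(sections):
            sec_mask = ((1 << width) - 1) << (s * width)
            total += popcount(~(a_bits ^ w_bits) & sec_mask)
        return 2 * total - n

For example, with a_bits = 0b1011 and w_bits = 0b1001 (n = 4), the XNOR agrees in three positions, so both routines return 2*3 - 4 = 2, matching the {-1,+1} dot product computed directly.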
One of the most exciting applications of Spin Torque Magnetoresistive Random Access Memory (ST-MRAM) is the in-memory implementation of deep neural networks, which could allow improving the energy efficiency of Artificial Intelligence by orders of magnitude …
Neural networks span a wide range of applications of industrial and commercial significance. Binary neural networks (BNN) are particularly effective in trading accuracy for performance, energy efficiency, or hardware/software complexity. Here, we introduce …
Resistive random access memories (RRAM) are novel nonvolatile memory technologies, which can be embedded at the core of CMOS, and which could be ideal for the in-memory implementation of deep neural networks. A particularly exciting vision is using …
The design of systems implementing low-precision neural networks with emerging memories such as resistive random access memory (RRAM) is a promising avenue for reducing the energy consumption of artificial intelligence (AI). Multiple works have, for example, …
Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their abilities for relearning and adapting to new data. Memory-augmented neural networks enhance neural networks …