
SPARE: Spiking Networks Acceleration Using CMOS ROM-Embedded RAM as an In-Memory-Computation Primitive

Posted by: Amogh Agrawal
Publication date: 2017
Research field: Computer engineering
Paper language: English





Despite the huge success of artificial intelligence, hardware systems running these algorithms consume orders of magnitude more energy than the human brain, mainly due to heavy data movement between the memory unit and the computation cores. Spiking neural networks (SNNs) built using bio-plausible neuron and synaptic models have emerged as the power-efficient choice for designing cognitive applications. These algorithms involve several lookup-table (LUT) based function evaluations, such as high-order polynomials and transcendental functions for solving complex neuro-synaptic models, that typically require additional storage. To that end, we propose SPARE, an in-memory, distributed processing architecture built on ROM-embedded RAM technology for accelerating SNNs. ROM-embedded RAMs allow storage of LUTs, embedded within a typical memory array, without additional area overhead. Our proposed architecture consists of a 2-D array of processing elements (PEs). Since most of the computations are done locally within each PE, unnecessary data transfers are restricted, thereby alleviating the von-Neumann bottleneck. We evaluate SPARE for two different ROM-embedded RAM structures: CMOS-based ROM-embedded SRAMs (R-SRAMs) and STT-MRAM-based ROM-embedded MRAMs (R-MRAMs). Moreover, we analyze trade-offs in terms of energy, area and performance for using the two technologies on a range of image classification benchmarks. Furthermore, we leverage the additional storage density to implement complex neuro-synaptic functionalities, which enhances the utility of the proposed architecture by allowing implementation of any neuron/synaptic behavior required by the application. Our results show up to 1.75x, 1.95x and 1.95x improvements in energy, iso-storage area, and iso-area performance, respectively, using neural network accelerators built on ROM-embedded RAM primitives.
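
To make the LUT idea concrete, the following is a minimal software analogue (illustrative only; the names, constants and neuron model are assumptions, not the paper's hardware design). A transcendental synaptic-decay function is precomputed into a table, standing in for the ROM half of a ROM-embedded RAM, and a leaky integrate-and-fire update reads from the table instead of recomputing the exponential:

```python
import numpy as np

# Hypothetical software analogue of LUT-based function evaluation:
# exp(-t/tau) is precomputed once (the "ROM" contents) and neuron
# updates index into it rather than evaluating the transcendental.

LUT_BITS = 8                       # table resolution (assumed)
T_MAX = 100.0                      # time span covered by the table (ms)
TAU = 20.0                         # time constant (ms, assumed)
_lut = np.exp(-np.linspace(0.0, T_MAX, 2**LUT_BITS) / TAU)

def exp_decay_lut(t):
    """Approximate exp(-t/TAU) by nearest-entry table lookup."""
    idx = min(int(t / T_MAX * (2**LUT_BITS - 1)), 2**LUT_BITS - 1)
    return _lut[idx]

def lif_step(v, pre_spike_times, t, dt=1.0, v_rest=-65.0, v_th=-50.0, w=5.0):
    """One leaky integrate-and-fire update; both decays come from the LUT."""
    syn = sum(w * exp_decay_lut(t - ts) for ts in pre_spike_times if ts <= t)
    v = v_rest + (v - v_rest) * exp_decay_lut(dt) + syn   # leak + input
    fired = v >= v_th
    return (v_rest if fired else v), fired

print(lif_step(-60.0, [1.0, 3.0], t=5.0))
```

The point of the architecture is that such tables sit inside each PE's own memory array, so these evaluations stay local instead of requiring fetches from a separate coefficient store.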




Read also

Spiking Neural Networks (SNNs) offer an event-driven and more biologically realistic alternative to standard Artificial Neural Networks based on analog information processing. This can potentially enable energy-efficient hardware implementations of neuromorphic systems which emulate the functional units of the brain, namely, neurons and synapses. Recent demonstrations of ultra-fast photonic computing devices based on phase-change materials (PCMs) show promise for addressing the limitations of electrically driven neuromorphic systems. However, scaling these standalone computing devices to a parallel in-memory computing primitive is a challenge. In this work, we utilize the optical properties of the PCM Ge2Sb2Te5 (GST) to propose a photonic spiking neural network computing primitive, comprising a non-volatile synaptic array integrated seamlessly with previously explored 'integrate-and-fire' neurons. The proposed design realizes an 'in-memory' computing platform that leverages the inherent parallelism of wavelength-division multiplexing (WDM). We show that the proposed computing platform can be used to emulate an SNN inferencing engine for image classification tasks. The proposed design not only bridges the gap between isolated computing devices and parallel large-scale implementation, but also paves the way for ultra-fast computing and localized on-chip learning.
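
As a rough behavioural sketch of the WDM parallelism described above (my assumed dataflow, not the paper's device model): each input spike rides on its own wavelength, each GST cell attenuates one wavelength by its programmed transmission (the weight), and the photodetector sums the powers, making a synaptic row a dot product per time step:

```python
import numpy as np

# Toy behavioural model of a WDM synaptic row feeding an
# integrate-and-fire neuron. All parameter values are invented.

def wdm_row_output(spikes, transmissions):
    """Detector power: sum over wavelengths of spike * transmission."""
    return float(np.dot(spikes, transmissions))

def integrate_and_fire(spike_train, transmissions, v_th=1.5, leak=0.9):
    """Accumulate detector power over time; fire and reset at threshold."""
    v, out = 0.0, []
    for spikes in spike_train:
        v = leak * v + wdm_row_output(spikes, transmissions)
        out.append(v >= v_th)
        if out[-1]:
            v = 0.0
    return out

weights = np.array([0.8, 0.3, 0.6])          # GST transmissions in [0, 1]
train = [np.array(s) for s in ([1, 0, 1], [0, 1, 0], [1, 1, 1])]
print(integrate_and_fire(train, weights))    # [False, True, True]
```
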
The design of systems implementing low-precision neural networks with emerging memories such as resistive random access memory (RRAM) is a major lead for reducing the energy consumption of artificial intelligence (AI). Multiple works have, for example, proposed in-memory architectures to implement low-power binarized neural networks. These simple neural networks, where synaptic weights and neuronal activations assume binary values, can indeed approach state-of-the-art performance on vision tasks. In this work, we revisit one of these architectures, where synapses are implemented in a differential fashion to reduce bit errors and synaptic weights are read using precharge sense amplifiers. Based on experimental measurements on a hybrid 130 nm CMOS/RRAM chip and on circuit simulation, we show that the same memory array architecture can be used to implement ternary weights instead of binary weights, and that this technique is particularly appropriate if the sense amplifier is operated in the near-threshold regime. We also show, based on neural network simulation on the CIFAR-10 image recognition task, that going from binary to ternary neural networks significantly increases neural network performance. These results highlight that the function of AI circuits may sometimes be revisited when they are operated in low-power regimes.
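
A rough behavioural sketch of the differential read scheme (my reading of the idea; the margin parameter and conductance ranges are invented): each synapse is a pair of RRAM conductances, a binary read returns the sign of their difference, and the ternary extension maps near-equal pairs, which a near-threshold sense amplifier cannot reliably resolve, to a weight of 0:

```python
import numpy as np

# Differential 2-device synapse: weight read as sign(g_pos - g_neg).
# The ternary variant adds a dead zone around zero difference.

def read_binary(g_pos, g_neg):
    return 1 if g_pos > g_neg else -1

def read_ternary(g_pos, g_neg, margin=5e-6):
    """margin (siemens) is a hypothetical sense-amp resolution limit."""
    if abs(g_pos - g_neg) < margin:
        return 0                      # pair programmed as a zero weight
    return 1 if g_pos > g_neg else -1

rng = np.random.default_rng(0)
pairs = rng.uniform(1e-5, 1e-4, size=(5, 2))   # HRS..LRS conductances
for g_pos, g_neg in pairs:
    print(read_binary(g_pos, g_neg), read_ternary(g_pos, g_neg))
```
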
Brain-inspired computing and neuromorphic hardware are promising approaches that offer great potential to overcome limitations faced by current computing paradigms based on the traditional von-Neumann architecture. In this regard, interest in developing memristor crossbar arrays has increased due to their ability to natively perform in-memory computing and the fundamental synaptic operations required for neural network implementation. For optimal efficiency, crossbar-based circuits need to be compatible with the fabrication processes and materials of industrial CMOS technologies. Herein, we report a complete CMOS-compatible fabrication process of TiO2-based passive memristor crossbars with 700 nm wide electrodes. We show successful bottom electrode fabrication by a damascene process, resulting in an optimised topography and a surface roughness as low as 1.1 nm. DC sweeps and voltage pulse programming yield statistical results related to synaptic-like multilevel switching. Both cycle-to-cycle and device-to-device variability are investigated. Analogue programming of the conductance using sequences of 200 ns voltage pulses suggests that the fabricated memories have a multilevel capacity of at least 3 bits, owing to the cycle-to-cycle reproducibility.
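
The in-memory computation such crossbars perform natively is a vector-matrix multiply: Ohm's law multiplies each applied row voltage by a cell conductance, and Kirchhoff's current law sums the products down each column. A minimal sketch with conductances quantised to 8 levels, matching the reported at-least-3-bit capacity (all numeric ranges are illustrative assumptions):

```python
import numpy as np

# Idealised crossbar model: column currents I = G^T V, with cell
# conductances snapped to 8 evenly spaced levels (3 bits).

G_MIN, G_MAX, LEVELS = 1e-6, 1e-4, 8            # siemens (assumed range)

def quantise(g):
    """Snap target conductances to one of LEVELS evenly spaced values."""
    step = (G_MAX - G_MIN) / (LEVELS - 1)
    return G_MIN + np.round((g - G_MIN) / step) * step

def crossbar_mvm(G, v):
    """Column currents for row voltages v applied to conductance matrix G."""
    return G.T @ v                               # amps

rng = np.random.default_rng(1)
G = quantise(rng.uniform(G_MIN, G_MAX, size=(4, 3)))
v = np.array([0.2, 0.0, 0.2, 0.1])              # read voltages (volts)
print(crossbar_mvm(G, v))
```
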
Silicon-based Static Random Access Memories (SRAM) and digital Boolean logic have been the workhorses of state-of-the-art computing platforms. Despite tremendous strides in scaling the ubiquitous metal-oxide-semiconductor transistor, the underlying von-Neumann computing architecture has remained unchanged. The limited throughput and energy-efficiency of state-of-the-art computing systems, to a large extent, result from the well-known von-Neumann bottleneck. The energy and throughput inefficiency of von-Neumann machines has been accentuated in recent times due to the present emphasis on data-intensive applications like artificial intelligence, machine learning, etc. A possible approach towards mitigating the overhead associated with the von-Neumann bottleneck is to enable in-memory Boolean computations. In this manuscript, we present an augmented version of the conventional SRAM bit-cells, called the X-SRAM, with the ability to perform in-memory, vector Boolean computations in addition to the usual memory storage operations. We propose at least six different schemes for enabling in-memory vector computations, including NAND, NOR, IMP (implication) and XOR logic gates, with respect to different bit-cell topologies: the 8T cell and the 8+T differential cell. In addition, we also present a novel 'read-compute-store' scheme, wherein the computed Boolean function can be directly stored in the memory without the need of latching the data and carrying out a subsequent write operation. The feasibility of the proposed schemes has been verified using predictive transistor models and Monte-Carlo variation analysis.
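
A functional model of the operations involved (behaviour only; the bit-cell circuits, word width and memory layout here are assumptions): vector Boolean gates computed between two rows of the array, with 'read-compute-store' mimicked by writing the result directly to a destination row:

```python
# Functional sketch of in-memory vector Boolean operations, including
# the implication gate IMP(a, b) = NOT a OR b.

MASK = 0xFF          # 8-bit words for the example (assumed width)

ops = {
    "NAND": lambda a, b: ~(a & b) & MASK,
    "NOR":  lambda a, b: ~(a | b) & MASK,
    "XOR":  lambda a, b: (a ^ b) & MASK,
    "IMP":  lambda a, b: (~a | b) & MASK,
}

def read_compute_store(mem, src1, src2, dst, op):
    """Compute op over two rows and store the result in a third row,
    without a round trip through a separate latch-and-write step."""
    mem[dst] = ops[op](mem[src1], mem[src2])

mem = {0: 0b10101010, 1: 0b11001100, 2: 0}
read_compute_store(mem, 0, 1, 2, "NAND")
print(f"{mem[2]:08b}")    # 01110111
```
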
Neuromorphic hardware platforms implement biological neurons and synapses to execute spiking neural networks (SNNs) in an energy-efficient manner. We present SpiNeMap, a design methodology to map SNNs to crossbar-based neuromorphic hardware, minimizing spike latency and energy consumption. SpiNeMap operates in two steps: SpiNeCluster and SpiNePlacer. SpiNeCluster is a heuristic-based clustering technique to partition SNNs into clusters of synapses, where intra-cluster local synapses are mapped within crossbars of the hardware and inter-cluster global synapses are mapped to the shared interconnect. SpiNeCluster minimizes the number of spikes on global synapses, which reduces spike congestion on the shared interconnect, improving application performance. SpiNePlacer then finds the best placement of local and global synapses on the hardware using a meta-heuristic-based approach to minimize energy consumption and spike latency. We evaluate SpiNeMap using synthetic and realistic SNNs on the DynapSE neuromorphic hardware. We show that SpiNeMap reduces average energy consumption by 45% and average spike latency by 21%, compared to state-of-the-art techniques.
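
A toy greedy partitioner in the spirit of SpiNeCluster (not the published algorithm; the heuristic, capacity and spike counts are invented for illustration): neurons are assigned to fixed-capacity clusters so that edges carrying many spikes tend to stay intra-cluster, reducing traffic on the shared interconnect:

```python
# Greedy clustering sketch: place the endpoints of the heaviest
# (most spike-carrying) edges together whenever capacity allows.

def greedy_cluster(edges, n_neurons, capacity):
    """edges: (src, dst, spikes). Returns a neuron -> cluster map."""
    clusters, assign = [], {}
    for src, dst, _ in sorted(edges, key=lambda e: -e[2]):  # heaviest first
        for n in (src, dst):
            if n in assign:
                continue
            other = dst if n == src else src      # try to co-locate
            c = assign.get(other)
            if c is not None and len(clusters[c]) < capacity:
                clusters[c].add(n)
            else:
                clusters.append({n})
                c = len(clusters) - 1
            assign[n] = c
    for n in range(n_neurons):                    # isolated neurons
        assign.setdefault(n, len(clusters))
    return assign

edges = [(0, 1, 90), (1, 2, 80), (2, 3, 5), (3, 4, 70)]
assign = greedy_cluster(edges, 5, capacity=2)
spilled = sum(w for s, d, w in edges if assign[s] != assign[d])
print(assign, "inter-cluster spikes:", spilled)
```

The actual SpiNePlacer step would then decide where each cluster and each remaining global synapse lands on the hardware; that placement search is not modelled here.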