
Analysing digital in-memory computing for advanced finFET node

Published by Veerendra S Devaraddi
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Digital in-memory computing improves the energy efficiency and throughput of data-intensive workloads, which otherwise incur memory thrashing and repeated accesses to the same memory locations in a von Neumann architecture. Digital in-memory computing involves accessing multiple SRAM cells simultaneously, which can cause a bit flip if the access is not timed carefully. We therefore discuss the transient voltage characteristics of the bitlines during an in-SRAM compute operation. To improve packaging density and avoid MOSFET down-scaling issues, we use a 7-nm predictive PDK based on a finFET node. The finFET process has discrete fins and a lower supply voltage, which complicates the design of in-memory-compute SRAM. In this paper, we design a 6T SRAM cell in the 7-nm finFET node and compare its static noise margins (SNMs) with a UMC 28-nm implementation. We then design and simulate the remaining SRAM peripherals and the in-memory computation for the advanced finFET node.
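For intuition on why the multi-row access must be timed carefully, here is a minimal sketch using a lumped constant-current discharge model of the bitline. All device values (capacitance, cell current, margins) are illustrative assumptions, not figures from the paper or the 7-nm predictive PDK.

```python
C_BL = 20e-15     # assumed bitline capacitance (F)
I_CELL = 15e-6    # assumed per-cell read (discharge) current (A)
V_SENSE = 0.10    # assumed differential the sense amplifier needs (V)
V_DISTURB = 0.25  # assumed drop beyond which a stored '1' risks flipping (V)

def time_to_drop(delta_v, n_rows):
    """Time for n_rows cells, discharging one bitline in parallel, to pull
    it down by delta_v (simple constant-current discharge model)."""
    return delta_v * C_BL / (n_rows * I_CELL)

for n in (1, 2):  # normal read vs. two-row in-memory compute
    print(f"rows={n}: sense margin after {time_to_drop(V_SENSE, n) * 1e12:.0f} ps, "
          f"disturb limit after {time_to_drop(V_DISTURB, n) * 1e12:.0f} ps")
```

With two rows pulling the same bitline, both times halve, so the window in which the wordline pulse must be shut off shrinks accordingly; this is the timing criticality the transient analysis in the paper addresses.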




Read also

This paper describes how 3D XPoint memory arrays can be used as in-memory computing accelerators. We first show that thresholded matrix-vector multiplication (TMVM), the fundamental computational kernel in many applications including machine learning, can be implemented within a 3D XPoint array without requiring data to leave the array for processing. Building on this TMVM implementation, we then describe a binary neural inference engine. We discuss how the core concept scales to larger systems by connecting multiple 3D XPoint arrays, and, to assure power integrity within the array, we carefully analyze the parasitic effects of metal lines on noise margins and implementation accuracy. We quantify the impact of parasitics on limiting the size and configuration of a 3D XPoint array, and estimate the maximum acceptable size of a 3D XPoint subarray.
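As a software reference for the TMVM kernel described above, the following is a minimal sketch with binary weights and inputs; the sizes and threshold are illustrative assumptions, and in the 3D XPoint array the dot products would be formed by summing cell currents on shared lines rather than in software.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.integers(0, 2, size=(4, 8))  # binary weight matrix, one row per output
x = rng.integers(0, 2, size=8)       # binary input activation vector
theta = 3                            # assumed firing threshold

# Threshold the accumulated sums: this is one layer of a binary
# neural inference engine.
y = (W @ x >= theta).astype(int)
print("input :", x)
print("output:", y)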
A compact, accurate, and bitwidth-programmable in-memory computing (IMC) static random-access memory (SRAM) macro, named CAP-RAM, is presented for energy-efficient convolutional neural network (CNN) inference. It leverages a novel charge-domain multiply-and-accumulate (MAC) mechanism and circuitry to achieve superior linearity under process variations compared to conventional IMC designs. The adopted semi-parallel architecture efficiently stores filters from multiple CNN layers by sharing eight standard 6T SRAM cells with one charge-domain MAC circuit. Moreover, up to six levels of weight bit-width with two encoding schemes and eight levels of input activations are supported. A 7-bit charge-injection SAR (ciSAR) analog-to-digital converter (ADC) that eliminates sample-and-hold (S&H) and input/reference buffers further improves the overall energy efficiency and throughput. A 65-nm prototype validates the excellent linearity and computing accuracy of CAP-RAM. A single 512x128 macro stores a complete pruned and quantized CNN model to achieve 98.8% inference accuracy on the MNIST data set and 89.0% on the CIFAR-10 data set, with a 573.4 giga operations per second (GOPS) peak throughput and a 49.4 tera operations per second per watt (TOPS/W) energy efficiency.
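A minimal numerical sketch of the charge-domain MAC-plus-ADC idea follows; the column count, weight/activation widths, and ideal 7-bit quantizer are illustrative assumptions, not CAP-RAM's actual circuit behaviour or encoding schemes.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128                                # assumed cells sharing one MAC column
weights = rng.integers(-8, 8, size=N)  # assumed 4-bit signed weights in [-8, 7]
acts = rng.integers(0, 8, size=N)      # assumed 3-bit activations in [0, 7]

mac = int(np.dot(weights, acts))       # ideal charge-domain accumulation
FS = N * 8 * 7                         # assumed full-scale magnitude of the MAC

def adc_7bit(value, full_scale):
    """Ideal 7-bit ADC: map [-full_scale, +full_scale] onto codes 0..127."""
    code = round((value / full_scale + 1) / 2 * 127)
    return max(0, min(127, code))

print("ideal MAC value:", mac)
print("7-bit ADC code :", adc_7bit(mac, FS))
```

The quantization step set by the 7-bit resolution is what bounds the computing accuracy the prototype validates; any nonlinearity in the real charge-injection path would add to this error.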
The amount of CO$_2$ emitted per kilowatt-hour on an electricity grid varies by time of day and substantially varies by location due to the types of generation. Networked collections of warehouse-scale computers, sometimes called Hyperscale Computing, emit more carbon than needed if operated without regard to these variations in carbon intensity. This paper introduces Google's system for Carbon-Intelligent Compute Management, which actively minimizes electricity-based carbon footprint and power infrastructure costs by delaying temporally flexible workloads. The core component of the system is a suite of analytical pipelines used to gather the next day's carbon intensity forecasts, train day-ahead demand prediction models, and use risk-aware optimization to generate the next day's carbon-aware Virtual Capacity Curves (VCCs) for all datacenter clusters across Google's fleet. VCCs impose hourly limits on resources available to temporally flexible workloads while preserving overall daily capacity, enabling all such workloads to complete within a day. Operational data show that VCCs effectively limit hourly capacity when the grid's energy supply mix is carbon intensive and delay the execution of temporally flexible workloads to greener times.
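The following is a minimal sketch of the VCC idea: cap the hourly capacity available to flexible work when the forecast carbon intensity is high while keeping the daily total unchanged. The greedy allocation and all numbers are illustrative assumptions, a stand-in for Google's risk-aware optimizer rather than its actual method.

```python
def virtual_capacity_curve(carbon_forecast, daily_flexible_capacity, hourly_max):
    """Grant flexible capacity to the greenest hours first, keeping the
    daily total unchanged (greedy stand-in for the risk-aware optimizer)."""
    vcc = [0.0] * 24
    remaining = daily_flexible_capacity
    for h in sorted(range(24), key=lambda h: carbon_forecast[h]):
        grant = min(hourly_max, remaining)
        vcc[h] = grant
        remaining -= grant
        if remaining <= 0:
            break
    return vcc

# Illustrative forecast: carbon intensity peaks in the evening, so the VCC
# pushes the flexible work into the overnight and morning hours.
forecast = [300 if 18 <= h <= 22 else 120 + 5 * h for h in range(24)]
print(virtual_capacity_curve(forecast, daily_flexible_capacity=40, hourly_max=5))
```

Because the granted capacity sums to the daily requirement, every flexible workload can still complete within the day; only its placement in time changes.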
The inherent dynamics of the neuron membrane potential in Spiking Neural Networks (SNNs) allows processing of sequential learning tasks while avoiding the complexity of recurrent neural networks. The highly sparse spike-based computations on such spatio-temporal data can be leveraged for energy efficiency. However, the membrane potential incurs additional memory-access bottlenecks in current SNN hardware. To that effect, we propose a 10T-SRAM compute-in-memory (CIM) macro specifically designed for state-of-the-art SNN inference. It consists of a fused weight (WMEM) and membrane potential (VMEM) memory and inherently exploits sparsity in input spikes, leading to a 97.4% reduction in energy-delay product (EDP) at 85% sparsity (typical of the SNNs considered in this work) compared to the case of no sparsity. We propose staggered data mapping and reconfigurable peripherals for handling the different bit-precision requirements of WMEM and VMEM while supporting multiple neuron functionalities. The proposed macro was fabricated in 65-nm CMOS technology, achieving an energy efficiency of 0.99 TOPS/W at a 0.85-V supply and 200-MHz frequency for signed 11-bit operations. We evaluate the SNN for sentiment classification on the IMDB movie-review dataset and achieve accuracy within 1% of an LSTM network with 8.5x fewer parameters.
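A minimal sketch of the leaky-integrate-and-fire dynamics whose membrane potential (VMEM) the macro keeps alongside the weights (WMEM) is shown below; the leak factor, threshold, and sparsity level are illustrative assumptions. It also shows how input-spike sparsity directly skips MAC work, the effect the macro exploits in hardware.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 100, 64                          # timesteps and input synapses
weights = rng.normal(0.0, 0.3, N)       # assumed synaptic weights (WMEM)
spikes_in = rng.random((T, N)) < 0.15   # ~85%-sparse input spike trains

leak, v_th = 0.9, 1.0                   # assumed leak factor and threshold
v, macs, out_spikes = 0.0, 0, 0         # v is the membrane potential (VMEM)
for t in range(T):
    active = np.nonzero(spikes_in[t])[0]  # only spiking inputs touch WMEM
    macs += active.size                   # skipped inputs cost no MACs
    v = leak * v + weights[active].sum()  # leak, then integrate
    if v >= v_th:
        out_spikes += 1
        v = 0.0                           # reset after an output spike
print(f"output spikes: {out_spikes}, MACs performed: {macs} of {T * N}")
```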
3D integration, i.e., stacking of integrated-circuit layers using parallel or sequential processing, is gaining rapid industry adoption with the slowdown of Moore's law scaling. 3D stacking promises potential gains in performance, power, and cost, but the actual magnitude of the gains varies depending on the end application, technology choices, and design. In this talk, we will discuss some key challenges associated with 3D design and how design-for-3D will require us to break the traditional silos of micro-architecture, circuit/physical design, and manufacturing technology, working across abstractions to enable the gains promised by 3D technologies.