
In-Memory Nearest Neighbor Search with FeFET Multi-Bit Content-Addressable Memories

Published by Arman Kazemi
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Nearest neighbor (NN) search is an essential operation in many applications, such as one/few-shot learning and image classification. As such, fast and low-energy hardware support for accurate NN search is highly desirable. Ternary content-addressable memories (TCAMs) have been proposed to accelerate NN search for few-shot learning tasks by implementing $L_\infty$ and Hamming distance metrics, but they cannot achieve software-comparable accuracies. This paper proposes a novel distance function that can be natively evaluated with multi-bit content-addressable memories (MCAMs) based on ferroelectric FETs (FeFETs) to perform a single-step, in-memory NN search. Moreover, this approach achieves accuracies comparable to floating-point-precision software implementations for NN classification and one/few-shot learning tasks. For example, the proposed method achieves 98.34% accuracy on a 5-way, 5-shot classification task on the Omniglot dataset (only 0.8% lower than software-based implementations) with a 3-bit MCAM, a 13% accuracy improvement over state-of-the-art TCAM-based implementations at iso-energy and iso-delay. The presented distance function is resilient to the effects of FeFET device-to-device variations. Furthermore, this work experimentally demonstrates a 2-bit implementation of a FeFET MCAM using AND arrays from GLOBALFOUNDRIES as a proof of concept.
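To make the search scheme concrete, here is a minimal software sketch of MCAM-style single-step NN search. The paper's exact distance function is not spelled out in this abstract, so the per-cell absolute-difference metric below is an illustrative stand-in; the function names and the 3-bit quantization are assumptions chosen to mirror the setup described above.

```python
# Minimal software model of MCAM-style nearest-neighbor search.
# NOTE: the paper's exact distance function is not given in the abstract;
# the per-cell absolute-difference metric below is an illustrative stand-in.
import numpy as np

def quantize(x, bits=3):
    """Map values in [0, 1] to integer cell levels, e.g. 0..7 for 3 bits."""
    levels = (1 << bits) - 1
    return np.clip(np.round(x * levels), 0, levels).astype(np.int32)

def mcam_nn_search(stored, query, bits=3):
    """Single-step search: every stored row is compared to the query in
    parallel (vectorized here); the row with the smallest aggregate
    per-cell distance wins."""
    q = quantize(query, bits)
    s = quantize(stored, bits)
    dist = np.abs(s - q).sum(axis=1)  # per-cell mismatch, summed per row
    return int(np.argmin(dist))

rng = np.random.default_rng(0)
prototypes = rng.random((5, 16))  # e.g. 5-way few-shot class prototypes
query = prototypes[3] + 0.05 * rng.standard_normal(16)  # noisy view of class 3
print(mcam_nn_search(prototypes, query))  # expected: 3
```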




Read also

Ferroelectric field-effect transistors (FeFETs) are being actively investigated for in-memory computing (IMC), with potential advantages over other non-volatile memories (NVMs). Content-addressable memories (CAMs) are a form of IMC that performs parallel searches over a memory array for entries matching a given input query. CAMs are widely used in data-centric applications that involve pattern matching and search functionality. To accommodate ever-expanding data, it is attractive to resort to analog CAMs for improved memory density. However, digital CAM designs today, based on standard CMOS or emerging nonvolatile memories (e.g., resistive storage devices), already suffer from area, power, and cost penalties, so achieving analog CAM with those technologies can be extremely expensive due to the added cell components. As such, we propose, for the first time, a universal compact FeFET-based CAM design, FeCAM, with search and storage functionality enabled in the digital and analog domains simultaneously. By exploiting the multi-level-cell (MLC) states of FeFETs, FeCAM can store and search inputs in either the digital or the analog domain. We perform a device-circuit co-design of the proposed FeCAM and validate its functionality and performance using an experimentally calibrated FeFET model. Circuit-level simulation results demonstrate that FeCAM can either store continuous matching ranges or encode 3-bit data in a single CAM cell. Compared with existing digital CMOS-based CAM approaches, FeCAM improves memory density by 22.4X and energy savings by 8.6X/3.2X for analog/digital modes, respectively. In a CAM-related application, our evaluations show that FeCAM achieves 60.5X/23.1X savings in area/search energy compared with conventional CMOS-based CAMs.
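As an illustration of the analog matching mode described above, here is a small sketch of range-based CAM search. It assumes each cell stores a [low, high] interval and a row matches only when every query value falls inside its cell's interval; this is a behavioral simplification of FeCAM's circuit operation, not the authors' implementation.

```python
# Sketch of analog CAM ("match range") semantics: each cell stores a
# [low, high] interval; a row matches only if every cell of the query
# falls inside its interval. Device behavior is not modeled.
import numpy as np

def analog_cam_search(low, high, query):
    """Return indices of rows whose stored ranges all contain the query."""
    inside = (query >= low) & (query <= high)  # per-cell match
    return np.flatnonzero(inside.all(axis=1))  # row matches if all cells match

low  = np.array([[0.1, 0.4], [0.0, 0.0]])
high = np.array([[0.3, 0.6], [0.1, 1.0]])
print(analog_cam_search(low, high, np.array([0.15, 0.5])))  # -> [0]
```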
Ternary content-addressable memory (TCAM) has been a critical component in caches, routers, and similar systems, where density, speed, power efficiency, and reliability are the major design targets. There are conventional low-write-power but bulky SRAM-based TCAM designs, as well as denser but less reliable or higher-write-power TCAM designs using nonvolatile memory (NVM) devices. Meanwhile, some TCAM designs using dynamic memories have also been proposed. Although dynamic TCAM is denser than CMOS SRAM TCAM and more reliable than NVM TCAM, the conventional row-by-row refresh operations create a bottleneck by interfering with normal TCAM activities. This paper therefore proposes a custom low-power dynamic TCAM using nanoelectromechanical (NEM) relay devices that employs a one-shot refresh to solve the memory refresh problem. By harnessing the unique characteristics of NEM relays with a proposed novel cell structure, the TCAM occupies a small footprint of only 3 transistors (with two NEM relays integrated on top through the back-end-of-line process), which significantly outperforms the density of 16-transistor SRAM-based TCAM. In addition, evaluations show that the proposed TCAM improves write energy efficiency by 2.31x, 131x, and 13.5x over SRAM, RRAM, and FeFET TCAMs, respectively; the search energy-delay product is improved by 12.7x, 1.30x, and 2.83x over SRAM, RRAM, and FeFET TCAMs, respectively.
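For readers unfamiliar with the ternary match operation these designs accelerate, the following sketch models TCAM search semantics only (per-cell '0', '1', or don't-care 'X'); refresh behavior and NEM relay device details are deliberately not modeled.

```python
# Sketch of ternary match semantics: each stored cell is '0', '1', or
# 'X' (don't care); a row matches when every non-'X' cell equals the
# corresponding query bit.
def tcam_search(rows, query):
    """Return indices of stored ternary words that match the query bits."""
    return [i for i, word in enumerate(rows)
            if all(c == 'X' or c == q for c, q in zip(word, query))]

rows = ['10X1', '0XX0', '1111']
print(tcam_search(rows, '1011'))  # -> [0], since '10X1' matches '1011'
```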
In-memory computing is being widely explored as a novel computing paradigm to mitigate the well-known memory bottleneck. This emerging paradigm aims to embed some aspects of computation inside the memory array, thereby avoiding frequent and expensive movement of data between the compute unit and the storage memory. In-memory computing with silicon memories has been widely explored on various memory bit-cells. Embedding computation inside the 6-transistor (6T) SRAM array is of special interest since it is the most widely used on-chip memory. In this paper, we present a novel in-memory multiply-and-accumulate operation capable of performing parallel dot products within 6T SRAM without any changes to the standard bit-cell. We further study the effect of circuit non-idealities and process variations on the accuracy of the LeNet-5 and VGG neural network architectures on the MNIST and CIFAR-10 datasets, respectively. The proposed in-memory dot-product mechanism achieves 88.8% and 99% accuracy on CIFAR-10 and MNIST, respectively. Compared to a standard von Neumann system, the proposed system is 6.24x better in energy consumption and 9.42x better in delay.
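A functional model of the parallel dot-product idea may help: activating many wordlines at once makes each bitline's response proportional to the count of selected stored 1s, which is what the sketch below computes. This is a behavioral abstraction only; the circuit non-idealities the paper studies are ignored.

```python
# Functional model of in-SRAM multiply-and-accumulate: with multiple
# wordlines active at once, each bitline's analog discharge is
# proportional to the number of stored 1s selected by the input,
# i.e. a binary dot product per column. ADC quantization is ignored.
import numpy as np

def in_sram_dot(weight_bits, inputs):
    """weight_bits: (rows, cols) binary array stored in the SRAM array.
    inputs: (rows,) binary vector applied on the wordlines.
    Returns the per-column count of rows where input AND stored bit are 1."""
    return (weight_bits * inputs[:, None]).sum(axis=0)

w = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
x = np.array([1, 1, 0])
print(in_sram_dot(w, x))  # -> [2 1 1]
```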
Hyperdimensional computing (HDC) is an emerging computational framework that mimics important brain functions by operating over high-dimensional vectors, called hypervectors (HVs). In-memory computing implementations of HDC are desirable since they can significantly reduce data-transfer overheads. All existing in-memory HDC platforms consider binary HVs, where each dimension is represented with a single bit. However, multi-bit HVs allow HDC to achieve acceptable accuracies at lower dimensions, which in turn leads to higher energy efficiency. Thus, we propose a highly accurate and efficient multi-bit in-memory HDC inference platform called MIMHD. MIMHD supports multi-bit operations using ferroelectric field-effect transistor (FeFET) crossbar arrays for multiply-and-add and FeFET multi-bit content-addressable memories for associative search. We also introduce a novel hardware-aware retraining framework (HWART) that trains the HDC model to work with MIMHD. For six popular datasets and 4000-dimensional HVs, MIMHD using 3-bit (2-bit) precision HVs achieves (i) average accuracies of 92.6% (88.9%), which is 8.5% (4.8%) higher than binary implementations; (ii) 84.1x (78.6x) energy improvement over a GPU; and (iii) 38.4x (34.3x) speedup over a GPU, respectively. The 3-bit MIMHD is 4.3x faster and 13x more energy-efficient than binary HDC accelerators while achieving similar accuracies.
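The sketch below illustrates the associative-search step with multi-bit HVs, assuming an L1 distance between a query HV and stored class HVs; the actual MIMHD similarity metric and FeFET circuit behavior are not specified in this abstract, so this is a behavioral stand-in.

```python
# Sketch of HDC associative search with multi-bit hypervectors: class
# HVs live in an associative memory and a query is assigned to the
# class with the smallest distance. 3-bit cell levels (0..7) and an L1
# metric are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
D, classes, bits = 4000, 6, 3
levels = (1 << bits) - 1
class_hvs = rng.integers(0, levels + 1, size=(classes, D))  # stored AM rows

def hdc_classify(query_hv):
    """Associative search: nearest class HV under L1 distance."""
    return int(np.argmin(np.abs(class_hvs - query_hv).sum(axis=1)))

noisy = np.clip(class_hvs[2] + rng.integers(-1, 2, D), 0, levels)
print(hdc_classify(noisy))  # expected: 2
```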
Approximate nearest neighbor algorithms are used to speed up nearest neighbor search in a wide array of applications. However, current indexing methods feature several hyperparameters that need to be tuned to reach an acceptable accuracy-speed trade-off. A grid search in the parameter space is often impractically slow due to a time-consuming index-building procedure. Therefore, we propose an algorithm for automatically tuning the hyperparameters of indexing methods based on randomized space-partitioning trees. In particular, we present results using randomized k-d trees, random projection trees, and randomized PCA trees. The tuning algorithm adds minimal overhead to the index-building process but is able to find the optimal hyperparameters accurately. We demonstrate that the algorithm is significantly faster than existing approaches, and that the indexing methods used are competitive with the state-of-the-art methods in query time while being faster to build.
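As a sketch of the kind of index being tuned, here is a minimal random projection tree: the depth and leaf-size parameters below are exactly the kind of hyperparameters such a tuning algorithm targets. This is a generic illustration, not the authors' implementation.

```python
# Minimal random projection tree for approximate NN search: each node
# splits points at the median of their projection onto a random
# direction; a query descends to one leaf and is matched by brute
# force there. Depth and leaf size are the tunable hyperparameters.
import numpy as np

def build(points, idx, depth, rng):
    if depth == 0 or len(idx) <= 8:
        return ('leaf', idx)
    d = rng.standard_normal(points.shape[1])   # random split direction
    proj = points[idx] @ d
    median = np.median(proj)
    return ('node', d, median,
            build(points, idx[proj <= median], depth - 1, rng),
            build(points, idx[proj > median], depth - 1, rng))

def query(tree, points, q):
    while tree[0] == 'node':                   # descend to one leaf
        _, d, median, lo, hi = tree
        tree = lo if q @ d <= median else hi
    idx = tree[1]                              # brute force inside the leaf
    return idx[np.argmin(((points[idx] - q) ** 2).sum(axis=1))]

rng = np.random.default_rng(2)
pts = rng.standard_normal((1000, 10))
tree = build(pts, np.arange(1000), depth=5, rng=rng)
print(query(tree, pts, pts[42] + 0.01))  # likely 42
```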
