Hyperdimensional computing (HDC) is an emerging computational framework that mimics important brain functions by operating over high-dimensional vectors, called hypervectors (HVs). In-memory computing implementations of HDC are desirable since they can significantly reduce data-transfer overheads. All existing in-memory HDC platforms use binary HVs, where each dimension is represented with a single bit. However, multi-bit HVs allow HDC to achieve acceptable accuracies at lower dimensions, which in turn leads to higher energy efficiency. We therefore propose a highly accurate and efficient multi-bit in-memory HDC inference platform called MIMHD. MIMHD supports multi-bit operations using ferroelectric field-effect transistor (FeFET) crossbar arrays for multiply-and-add and FeFET multi-bit content-addressable memories for associative search. We also introduce a novel hardware-aware retraining framework (HWART) that trains the HDC model to learn to work with MIMHD. For six popular datasets and 4000-dimensional HVs, MIMHD using 3-bit (2-bit) precision HVs achieves (i) average accuracies of 92.6% (88.9%), which is 8.5% (4.8%) higher than binary implementations; (ii) 84.1x (78.6x) energy improvement over a GPU; and (iii) 38.4x (34.3x) speedup over a GPU. The 3-bit MIMHD is 4.3x faster and 13x more energy-efficient than binary HDC accelerators while achieving similar accuracies.
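To make the multi-bit HDC pipeline concrete, here is a minimal NumPy sketch of the two inference steps the abstract names: encoding via multiply-and-add (what the FeFET crossbar computes in analog) and associative search (what the multi-bit CAM resolves in one lookup). This is a toy software model under assumed conventions, not the MIMHD hardware design; the function names `encode` and `associative_search` and the bipolar item/level HV scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 4000      # hypervector dimensionality, as in the abstract
BITS = 3      # per-dimension precision for multi-bit HVs

def random_hv(d=D):
    """Random bipolar item hypervector."""
    return rng.choice([-1, 1], size=d)

def quantize(hv, bits=BITS):
    """Clip and round an accumulated HV to signed `bits`-bit levels."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.rint(hv), lo, hi).astype(np.int8)

def encode(feature_levels, item_hvs, level_hvs):
    """Bind each item HV with its level HV and bundle by summation:
    the multiply-and-add step a FeFET crossbar performs in analog."""
    acc = np.zeros(D)
    for i, lvl in enumerate(feature_levels):
        acc += item_hvs[i] * level_hvs[lvl]
    return quantize(acc)

def associative_search(query, class_hvs):
    """Pick the class whose HV maximizes dot-product similarity:
    the search a multi-bit CAM resolves in a single lookup."""
    sims = class_hvs.astype(np.int32) @ query.astype(np.int32)
    return int(np.argmax(sims))

# Toy usage: 10 features quantized to 4 levels, 3 classes.
item_hvs  = np.stack([random_hv() for _ in range(10)])
level_hvs = np.stack([random_hv() for _ in range(4)])
class_hvs = np.stack([quantize(rng.normal(0, 2, size=D)) for _ in range(3)])
sample    = rng.integers(0, 4, size=10)
print(associative_search(encode(sample, item_hvs, level_hvs), class_hvs))
```

In a trained model the class HVs would come from bundling encoded training samples (and, per the abstract, from HWART retraining); random class HVs are used here only to keep the sketch self-contained.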
One viable solution for continued reduction in energy per operation is to rethink functionality to cope with uncertainty by adopting computational approaches that are inherently robust to it. This requires a novel look at data representations…
The emerging brain-inspired computing paradigm known as hyperdimensional computing (HDC) has been shown to provide a lightweight learning framework for various cognitive tasks compared with the widely used deep-learning-based approaches. Spatio-temporal…
In-memory computing is a promising non-von Neumann approach to building energy-efficient deep-learning inference hardware. Crossbar arrays of resistive memory devices can be used to encode the network weights and perform efficient analog matrix-vector multiplications…
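The following is a small numerical sketch of how such a crossbar computes a matrix-vector product: each weight is mapped to a pair of discrete conductances, Ohm's law gives a current per cell, and Kirchhoff's current law sums the currents along each column. The mapping function and the 16-level quantization are illustrative assumptions, not a specific device model.

```python
import numpy as np

rng = np.random.default_rng(1)

def to_conductance(W, levels=16, g_max=1e-4):
    """Map signed weights onto a differential pair of conductance matrices,
    each quantized to `levels` discrete states up to g_max (siemens)."""
    scale = g_max / np.abs(W).max()
    step = g_max / (levels - 1)
    G_pos = np.rint(np.clip(W, 0, None) * scale / step) * step
    G_neg = np.rint(np.clip(-W, 0, None) * scale / step) * step
    return G_pos, G_neg

def crossbar_mvm(G_pos, G_neg, v):
    """Ohm's law per cell (I = G*V) plus Kirchhoff current summation per
    column yields the matrix-vector product as differential output currents."""
    return v @ G_pos - v @ G_neg

W = rng.normal(size=(8, 4))   # toy weight matrix (rows = inputs)
x = rng.normal(size=8)        # input voltage vector (arbitrary units)
i_out = crossbar_mvm(*to_conductance(W), x)   # proportional to x @ W
print(i_out)
```

The differential pair (G_pos, G_neg) is one common way to represent signed weights with strictly positive conductances; the quantization step models the finite number of programmable device states.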
Computing-in-memory (CIM) has been proposed to alleviate the processor-memory data-transfer bottleneck of traditional von Neumann architectures, and spintronics-based magnetic memory has demonstrated clear advantages for implementing the CIM paradigm. …
Bayesian inference is an effective approach for solving statistical learning problems, especially those involving uncertainty and incomplete data. However, Bayesian inference is a compute-intensive task whose efficiency is physically limited by the bottlenecks…
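For readers less familiar with the computation this abstract refers to, here is a minimal sketch of the core Bayesian update, posterior proportional to likelihood times prior, over a discrete hypothesis space. The multiply-and-normalize pattern shown here is what must be evaluated repeatedly (often over very large state spaces) and is what makes inference compute-bound; the numbers below are purely illustrative.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Posterior is proportional to likelihood times prior, renormalized."""
    unnorm = likelihood * prior
    return unnorm / unnorm.sum()

# Toy example: three hypotheses, one observation.
prior      = np.array([0.5, 0.3, 0.2])
likelihood = np.array([0.1, 0.7, 0.2])   # P(observation | hypothesis)
print(bayes_update(prior, likelihood))   # -> [0.1667, 0.7, 0.1333]
```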