
PIRM: Processing In Racetrack Memories

Added by Sebastien Ollivier
Publication date: 2021
Research language: English





The growth in data needs of modern applications has created significant challenges for modern systems, leading to a memory wall. Spintronic Domain Wall Memory (DWM), related to Spin-Transfer Torque Memory (STT-MRAM), provides near-SRAM read/write performance, energy savings, nonvolatility, potential for extremely high storage density, and no significant endurance limitations. However, DWM's benefits cannot address the data access latency and throughput limitations imposed by memory bus bandwidth. We propose PIRM, a DWM-based in-memory computing solution that leverages the properties of DWM nanowires and allows them to serve as polymorphic gates. While DWM is normally accessed by applying spin-polarized currents orthogonal to the nanowire at access points to read individual bits, transverse access along the DWM nanowire allows the aggregate resistance of multiple bits in the nanowire to be distinguished, akin to a multilevel cell. PIRM leverages this transverse reading to directly provide bulk-bitwise logic over multiple adjacent operands in the nanowire, simultaneously. Building on this in-memory logic, PIRM provides a technique to conduct multi-operand addition and two-operand multiplication using transverse access. PIRM provides a 1.6x speedup compared to the leading DRAM PIM technique for query applications that leverage bulk-bitwise operations. Compared to the leading PIM technique for DWM, PIRM improves performance by 6.9x and 2.3x and energy by 5.5x and 3.4x for 8-bit addition and multiplication, respectively. For arithmetic-heavy benchmarks, PIRM reduces access latency by 2.1x while decreasing energy consumption by 25.2x, for a reasonable 10% area overhead versus non-PIM DWM.
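As a rough illustration of the computing model described above, the sketch below (not the authors' implementation; the transverse_read() model, operand layout, and widths are illustrative assumptions) emulates a transverse read that resolves to the ones-count of aligned bits, then derives bulk-bitwise AND/OR/majority and multi-operand addition from that count.

```python
# Minimal sketch of transverse-read-based in-memory logic (assumptions noted above).

def transverse_read(column_bits):
    """Model of a transverse access: the aggregate resistance of the domains
    under the read head is assumed to resolve to the number of '1' domains,
    like a multilevel cell."""
    return sum(column_bits)

def bulk_bitwise(operands, width=8):
    """Derive AND/OR/MAJ for each bit position from one transverse read
    per column across all operands."""
    n = len(operands)
    and_out = or_out = maj_out = 0
    for pos in range(width):
        column = [(op >> pos) & 1 for op in operands]
        count = transverse_read(column)          # one transverse access
        and_out |= (count == n) << pos           # all bits are 1
        or_out  |= (count >= 1) << pos           # at least one bit is 1
        maj_out |= (count > n // 2) << pos       # majority of bits are 1
    return and_out, or_out, maj_out

def multi_operand_add(operands, width=8):
    """Multi-operand addition: the per-column ones-count is the column sum;
    carries ripple into higher columns."""
    total, carry = 0, 0
    for pos in range(width + len(operands).bit_length()):
        column = [(op >> pos) & 1 for op in operands]
        s = transverse_read(column) + carry
        total |= (s & 1) << pos
        carry = s >> 1
    return total

if __name__ == "__main__":
    ops = [0b1011, 0b0110, 0b1101]
    print(bulk_bitwise(ops, width=4))            # (AND, OR, MAJ)
    print(multi_operand_add(ops, width=4), sum(ops))
```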


Related research

Racetrack memories (RMs) have significantly evolved since their conception in 2008, making them a serious contender in the field of emerging memory technologies. Despite key technological advancements, the access latency and energy consumption of an RM-based system are still highly influenced by the number of shift operations. These operations are required to move bits to the right positions in the racetracks. This paper presents data placement techniques for RMs that maximize the likelihood that consecutive references access nearby memory locations at runtime, thereby minimizing the number of shifts. We present an integer linear programming (ILP) formulation for optimal data placement in RMs, and revisit existing offset assignment heuristics, originally proposed for random-access memories. We introduce a novel heuristic tailored to a realistic RM and combine it with a genetic search to further improve the solution. We show a reduction in the number of shifts of up to 52.5%, outperforming the state of the art by up to 16.1%.
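As a loose illustration of why placement matters, the sketch below (not the paper's ILP formulation or heuristics; the single-port cost model, trace format, and names are assumptions) counts the shifts a given placement incurs for an access trace and uses a naive random search over placements as a baseline.

```python
# Minimal sketch of a shift-cost model for racetrack data placement (assumptions above).
import random

def shift_cost(placement, trace):
    """placement: dict variable -> position on the racetrack.
    trace: sequence of variable references observed at runtime.
    Assumes a single access port starting at position 0; each access shifts
    the referenced position under the port, one shift per position moved."""
    pos, shifts = 0, 0
    for var in trace:
        target = placement[var]
        shifts += abs(target - pos)
        pos = target
    return shifts

def random_search(variables, trace, iters=1000, seed=0):
    """Naive baseline: try random placements and keep the cheapest one."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        perm = list(variables)
        rng.shuffle(perm)
        placement = {v: i for i, v in enumerate(perm)}
        cost = shift_cost(placement, trace)
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost

if __name__ == "__main__":
    trace = list("abacabadacab")
    variables = sorted(set(trace))
    print(random_search(variables, trace))
```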
Silicon-based Static Random Access Memories (SRAM) and digital Boolean logic have been the workhorse of state-of-the-art computing platforms. Despite tremendous strides in scaling the ubiquitous metal-oxide-semiconductor transistor, the underlying von Neumann computing architecture has remained unchanged. The limited throughput and energy efficiency of state-of-the-art computing systems result, to a large extent, from the well-known von Neumann bottleneck. The energy and throughput inefficiency of von Neumann machines has been accentuated in recent times by the present emphasis on data-intensive applications like artificial intelligence and machine learning. A possible approach towards mitigating the overhead associated with the von Neumann bottleneck is to enable in-memory Boolean computations. In this manuscript, we present an augmented version of the conventional SRAM bit-cell, called the X-SRAM, with the ability to perform in-memory, vector Boolean computations in addition to the usual memory storage operations. We propose at least six different schemes for enabling in-memory vector computations, including NAND, NOR, IMP (implication), and XOR logic gates, for different bit-cell topologies: the 8T cell and the 8+T differential cell. In addition, we present a novel 'read-compute-store' scheme, wherein the computed Boolean function can be directly stored in the memory without the need to latch the data and carry out a subsequent write operation. The feasibility of the proposed schemes has been verified using predictive transistor models and Monte Carlo variation analysis.
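The following functional sketch (a software emulation under assumptions, not the proposed bit-cell circuits, in which the logic emerges from sensing bit-lines while two word-lines are asserted) shows what the in-memory vector gates and the 'read-compute-store' flow compute on two stored rows.

```python
# Minimal functional model of in-memory vector Boolean operations (assumptions above).
WIDTH = 8
MASK = (1 << WIDTH) - 1

def vec_nand(a, b): return ~(a & b) & MASK
def vec_nor(a, b):  return ~(a | b) & MASK
def vec_xor(a, b):  return (a ^ b) & MASK
def vec_imp(a, b):  return (~a | b) & MASK   # material implication a -> b

def read_compute_store(array, row_a, row_b, row_out, op):
    """Sketch of the 'read-compute-store' idea: the computed word is written
    to a destination row without an explicit external latch-and-write step."""
    array[row_out] = op(array[row_a], array[row_b])
    return array[row_out]

if __name__ == "__main__":
    array = {0: 0b10110010, 1: 0b11001100, 2: 0}
    print(bin(read_compute_store(array, 0, 1, 2, vec_nand)))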
Nearest neighbor (NN) search is an essential operation in many applications, such as one/few-shot learning and image classification. As such, fast and low-energy hardware support for accurate NN search is highly desirable. Ternary content-addressable memories (TCAMs) have been proposed to accelerate NN search for few-shot learning tasks by implementing $L_\infty$ and Hamming distance metrics, but they cannot achieve software-comparable accuracies. This paper proposes a novel distance function that can be natively evaluated with multi-bit content-addressable memories (MCAMs) based on ferroelectric FETs (FeFETs) to perform a single-step, in-memory NN search. Moreover, this approach achieves accuracies comparable to floating-point precision implementations in software for NN classification and one/few-shot learning tasks. As an example, the proposed method achieves 98.34% accuracy for a 5-way, 5-shot classification task on the Omniglot dataset (only 0.8% lower than software-based implementations) with a 3-bit MCAM. This represents a 13% accuracy improvement over state-of-the-art TCAM-based implementations at iso-energy and iso-delay. The presented distance function is resilient to the effects of FeFET device-to-device variations. Furthermore, this work experimentally demonstrates a 2-bit implementation of an FeFET MCAM using AND arrays from GLOBALFOUNDRIES as a further proof of concept.
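Since the abstract does not define the new distance function, the sketch below uses an L-infinity metric over 3-bit quantized vectors purely as a stand-in to illustrate the single-step, in-memory NN search that an MCAM array accelerates; the quantization scheme and names are assumptions.

```python
# Minimal sketch of NN search over multi-bit quantized vectors (assumptions above).
import numpy as np

def quantize(x, bits=3):
    """Map values in [0, 1] to the integer levels storable in a multi-bit cell."""
    levels = (1 << bits) - 1
    return np.clip(np.round(x * levels), 0, levels).astype(int)

def mcam_search(stored, query, bits=3):
    """Single-step search: every stored row is compared with the query in
    parallel; the row with the smallest distance wins."""
    q = quantize(query, bits)
    dists = np.max(np.abs(stored - q), axis=1)   # L_inf distance per stored row
    return int(np.argmin(dists)), int(dists.min())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prototypes = quantize(rng.random((5, 16)))   # e.g. 5-way class prototypes
    query = rng.random(16)
    print(mcam_search(prototypes, query))
```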
Topologically distinct magnetic structures like skyrmions, domain walls, and the uniformly magnetized state have multiple applications in logic devices, sensors, and as bits of information. One of the most promising concepts for applying these bits is the racetrack architecture controlled by electric currents or magnetic driving fields. In state-of-the-art racetracks, these fields or currents are applied to the whole circuit. Here, we employ micromagnetic and atomistic simulations to establish a concept for racetrack memories free of global driving forces. Surprisingly, we find that mixed sequences of topologically distinct objects can be created and propagated over long distances exclusively by local rotation of magnetization at the sample boundaries. We reveal the dependence between the chirality of the rotation and the direction of propagation and define the phase space in which the proposed procedure can be realized. The advantages of this approach are the exclusion of high current and field densities as well as its compatibility with an energy-efficient three-dimensional design.
Massive multiple-input multiple-output (MIMO) systems are considered one of the leading technologies employed in the next generations of wireless communication networks (5G), promising higher spectral efficiency, lower latency, and greater reliability. Due to the massive number of devices served by base stations (BS) equipped with large antenna arrays, massive-MIMO systems need to perform high-dimensional signal processing in a considerably short amount of time. The computational complexity of such data processing, while satisfying the energy and latency requirements, is beyond the capabilities of conventional, widely used digital electronics-based computing, i.e., Field-Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs). In this paper, the speed and lossless propagation of light are exploited to introduce a photonic computing approach that addresses the high computational complexity required by massive-MIMO systems. The proposed computing approach is based on a photonic implementation of the multiply-and-accumulate (MAC) operation achieved with the broadcast-and-weight (B&W) architecture. The B&W protocol is limited to real and positive values when performing MAC operations. In this work, preprocessing steps are developed to enable the proposed photonic computing architecture to accept arbitrary values as input, a requirement for wireless communication systems that typically deal with complex values. Numerical analysis shows that the performance of the wireless communication system is not degraded by the proposed photonic computing architecture, while it provides significant improvements in time and energy efficiency for massive-MIMO systems compared to the most powerful Graphics Processing Units (GPUs).
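The decomposition below is one plausible reading of the preprocessing step (an assumption, not necessarily the paper's exact method): a complex-valued MAC, as needed for MIMO detection, is rewritten as a combination of MACs over non-negative real values, which is all the broadcast-and-weight MAC is assumed to accept. Function names are illustrative.

```python
# Minimal sketch: complex MAC from non-negative real MACs (assumptions above).
import numpy as np

def nonneg_mac(x, w):
    """Stand-in for the photonic MAC: only non-negative real inputs and weights."""
    assert np.all(x >= 0) and np.all(w >= 0)
    return float(np.dot(x, w))

def split(v):
    """Split a real vector into non-negative positive and negative parts."""
    return np.maximum(v, 0), np.maximum(-v, 0)

def signed_mac(x, w):
    """Real signed MAC from four non-negative MACs: x.w = (x+ - x-).(w+ - w-)."""
    xp, xn = split(x)
    wp, wn = split(w)
    return (nonneg_mac(xp, wp) + nonneg_mac(xn, wn)
            - nonneg_mac(xp, wn) - nonneg_mac(xn, wp))

def complex_mac(x, w):
    """Complex MAC from real signed MACs: Re = xr.wr - xi.wi, Im = xr.wi + xi.wr."""
    xr, xi, wr, wi = x.real, x.imag, w.real, w.imag
    return complex(signed_mac(xr, wr) - signed_mac(xi, wi),
                   signed_mac(xr, wi) + signed_mac(xi, wr))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    w = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    print(complex_mac(x, w), np.dot(x, w))   # should agree
```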
