
Frustrated Arrays of Nanomagnets for Efficient Reservoir Computing

Added by Alexander Edwards
Publication date: 2021
Language: English





We simulated our nanomagnet reservoir computer (NMRC) design on benchmark tasks, demonstrating the NMRC's high memory content and expressibility. In support of the method's feasibility, we fabricated a frustrated nanomagnet reservoir layer. Using this structure, we describe a low-power, low-area system with an area-energy-delay product $10^7$ times lower than that of conventional RC systems, making it promising for size, weight, and power (SWaP) constrained applications.
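
For readers unfamiliar with reservoir computing: only a linear readout over the reservoir's states is trained, which is what keeps the hardware cost low. The sketch below illustrates this with a random echo-state update standing in for the nanomagnet dynamics; all sizes and the delay-recall task are illustrative assumptions, not the authors' design.

    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 50, 500                            # reservoir size, sequence length
    u = rng.uniform(-1, 1, T)                 # scalar input stream
    W_in = rng.uniform(-0.5, 0.5, N)          # fixed input weights
    W = rng.normal(0, 1, (N, N))
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius < 1

    x = np.zeros(N)
    states = np.empty((T, N))
    for t in range(T):                        # drive the reservoir, log states
        x = np.tanh(W @ x + W_in * u[t])
        states[t] = x

    y = np.roll(u, 3)                         # target: input from 3 steps back
    ridge = 1e-6                              # only the readout is trained
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                            states.T @ y)     # ridge-regression readout
    print("train MSE:", np.mean((states @ W_out - y) ** 2))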



Related Research

Reservoir computing is an emerging methodology for neuromorphic computing that is especially well-suited for hardware implementations in size, weight, and power (SWaP) constrained environments. This work proposes a novel hardware implementation of a reservoir computer using a planar nanomagnet array. A small nanomagnet reservoir is demonstrated via micromagnetic simulations to be able to identify simple waveforms with 100% accuracy. Planar nanomagnet reservoirs are a promising new solution to the growing need for dedicated neuromorphic hardware.
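
A minimal sketch of the sine-versus-square waveform benchmark this abstract refers to, assuming a software echo-state reservoir as a stand-in for the simulated nanomagnet array; sizes, frequencies, and segment lengths are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(2000)
    segment = (t // 100) % 2                        # alternate every 100 steps
    u = np.where(segment == 0,
                 np.sin(2 * np.pi * t / 20),           # sine segments
                 np.sign(np.sin(2 * np.pi * t / 20)))  # square segments
    label = (segment == 0).astype(float)            # 1 = sine, 0 = square

    N = 30                                          # tiny reservoir
    W_in = rng.uniform(-0.5, 0.5, N)
    W = rng.normal(0, 1, (N, N))
    W *= 0.8 / np.abs(np.linalg.eigvals(W)).max()

    x, X = np.zeros(N), np.empty((t.size, N))
    for k in t:                                     # collect reservoir states
        x = np.tanh(W @ x + W_in * u[k])
        X[k] = x

    W_out = np.linalg.lstsq(X, label, rcond=None)[0]   # linear readout
    acc = np.mean((X @ W_out > 0.5) == (label > 0.5))
    print(f"waveform classification accuracy: {acc:.1%}")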
We propose an approximation of Echo State Networks (ESNs) that can be efficiently implemented on digital hardware, based on the mathematics of hyperdimensional computing. The reservoir of the proposed integer Echo State Network (intESN) is a vector containing only n-bit integers (where n < 8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The proposed intESN approach is verified on typical reservoir computing tasks: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. This architecture yields dramatic improvements in memory footprint and computational efficiency with minimal performance loss. Experiments on a field-programmable gate array confirm that the proposed intESN approach is far more energy efficient than the conventional ESN.
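
The core intESN update is simple enough to sketch directly: the recurrent matrix multiply becomes a cyclic shift and the state is clipped to the n-bit integer range, as the abstract describes. The bipolar input encoding and all sizes below are illustrative assumptions, not the paper's exact scheme.

    import numpy as np

    rng = np.random.default_rng(2)
    N, n_bits, levels = 256, 4, 21
    kappa = 2 ** (n_bits - 1) - 1                # clip to n-bit integer range
    codebook = rng.choice((-1, 1), size=(levels, N)).astype(np.int8)

    def encode(u):
        # quantize u in [-1, 1] to one of `levels` bipolar code vectors
        q = int(round((u + 1) / 2 * (levels - 1)))
        return codebook[q]

    x = np.zeros(N, dtype=np.int32)
    for u in np.sin(np.linspace(0, 6, 100)):     # toy input signal
        x = np.roll(x, 1) + encode(u)            # cyclic shift replaces W @ x
        x = np.clip(x, -kappa, kappa)            # state stays within n bits

    print(x[:10])                                # integer reservoir state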
The feasibility of reservoir computing based on dipole-coupled nanomagnets is demonstrated using micromagnetic simulations. The reservoir consists of a 2×10 array of nanomagnets, whose static magnetization directions serve as the reservoir states. To update these states, we change the magnetization of one nanomagnet according to a sequential single-bit signal. We also change the uniaxial anisotropy of the other nanomagnets, via a voltage-induced magnetic-anisotropy change, to enhance information flow, storage, and linear/nonlinear computation. Binary tasks with AND, OR, and XOR operations were performed to evaluate the performance of the magnetic-array reservoir. The reservoir-computing output matrix was found to be trainable to perform AND, OR, and XOR operations with an input delay of up to three bits.
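
A minimal sketch of the delayed-logic benchmark described here: the target at each step is op(u[t], u[t-d]) over a random bit stream. The hand-built feature set below (current bit, delayed bit, and their product) is only a stand-in for the nanomagnet reservoir states; it also shows why nonlinearity matters, since XOR is not linearly separable in the raw bits alone.

    import numpy as np

    rng = np.random.default_rng(3)
    T, d = 1000, 3                               # stream length, input delay
    u = rng.integers(0, 2, T)
    u_d = np.roll(u, d)                          # input delayed by d bits
    targets = {"AND": u & u_d, "OR": u | u_d, "XOR": u ^ u_d}

    # current bit, delayed bit, their (nonlinear) product, and a bias term
    X = np.stack([u, u_d, u * u_d, np.ones(T)], axis=1)
    for name, y in targets.items():
        w = np.linalg.lstsq(X, y, rcond=None)[0]  # trained linear readout
        acc = np.mean((X @ w > 0.5) == y)
        print(name, f"{acc:.0%}")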
The rapidity and low power consumption of superconducting electronics make them an ideal substrate for physical reservoir computing, which commandeers the computational power inherent in the evolution of a dynamical system for the purpose of performing machine learning tasks. We focus on a subset of superconducting circuits that exhibit soliton-like dynamics in simple transmission-line geometries. With numerical simulations we demonstrate the effectiveness of these circuits in performing higher-order parity calculations and channel equalization at rates approaching 100 Gb/s. The availability of a proven superconducting logic scheme considerably simplifies the path to a fully integrated reservoir computing platform and makes superconducting reservoirs an enticing substrate for high-rate signal processing applications.
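
For reference, the higher-order parity benchmark mentioned here asks the reservoir to output the XOR of the last m input bits; target generation is sketched below. The soliton circuit dynamics themselves require circuit-level simulation and are not modeled; m and the stream length are illustrative.

    import numpy as np

    rng = np.random.default_rng(4)
    u = rng.integers(0, 2, 10_000)               # random input bit stream
    m = 3                                        # parity order
    # target: XOR (parity) of the m most recent bits at each step
    parity = np.bitwise_xor.reduce(
        np.stack([np.roll(u, k) for k in range(m)]), axis=0)
    print(u[:8], parity[:8])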
To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Here, we introduce a general in-the-loop learning framework based on surrogate gradients that resolves these issues. Using the BrainScaleS-2 neuromorphic system, we show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, far less than one spike per hidden neuron and input, perform inference at rates of up to 85,000 frames per second, and consume less than 200 mW. In summary, our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
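
The key trick in surrogate-gradient learning is easy to state: the forward pass keeps the hard spiking threshold, while the backward pass substitutes a smooth pseudo-derivative. A minimal sketch, assuming a fast-sigmoid surrogate (a common choice; the exact surrogate and in-the-loop machinery used on BrainScaleS-2 are not reproduced here):

    import numpy as np

    def spike_forward(v, theta=1.0):
        # hard threshold: a neuron with membrane potential v emits a 0/1 spike
        return (v >= theta).astype(float)

    def spike_backward(v, theta=1.0, beta=10.0):
        # surrogate derivative used in place of the threshold's true gradient
        # (which is zero almost everywhere): 1 / (1 + beta * |v - theta|)^2
        return 1.0 / (1.0 + beta * np.abs(v - theta)) ** 2

    v = np.linspace(0.0, 2.0, 5)
    print(spike_forward(v))                      # what the hardware computes
    print(spike_backward(v))                     # what the optimizer uses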