Since the experimental discovery of magnetic skyrmions a decade ago, there have been significant efforts to bring these quasi-particles into fully functional, all-electrical devices, inspired by their fascinating physical and topological properties suited to future low-power electronics. Here, we experimentally demonstrate such a device: an electrically operated skyrmion-based artificial synaptic device designed for neuromorphic computing. We show that controlled current-induced creation, motion, detection and deletion of skyrmions in ferrimagnetic multilayers can be harnessed in a single device at room temperature to imitate the behaviors of biological synapses. Using simulations, we demonstrate that such skyrmion-based synapses could be used to perform neuromorphic pattern-recognition computing on a handwritten-character recognition data set, reaching an accuracy of ~89%, comparable to the software-based training accuracy of ~94%. Chip-level simulation then highlights the potential of the skyrmion synapse compared with existing technologies. Our findings experimentally illustrate the basic concepts of skyrmion-based fully functional electronic devices while providing a new building block for the emerging field of spintronics-based bio-inspired computing.
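The device behavior described above can be summarized in a toy model: the synaptic weight is the number of skyrmions held in the device, and each write pulse creates or deletes one skyrmion. The Python sketch below is not the authors' code; the level count, weight mapping, update rule, and synthetic task are all illustrative assumptions, meant only to show how such discrete, pulse-programmed weights can still support pattern learning.

```python
# Illustrative sketch, not the paper's simulation: a perceptron whose weights
# are discrete skyrmion counts updated by single create/delete pulses.
import numpy as np

class SkyrmionSynapse:
    """Weight encoded as a skyrmion count between 0 and n_levels (assumed)."""
    def __init__(self, n_levels=16, rng=None):
        self.n_levels = n_levels
        self.rng = rng or np.random.default_rng()
        self.count = int(self.rng.integers(0, n_levels + 1))

    @property
    def weight(self):
        # Map the skyrmion count to a conductance-like weight in [-1, 1].
        return 2.0 * self.count / self.n_levels - 1.0

    def potentiate(self):  # current pulse that creates one skyrmion
        self.count = min(self.count + 1, self.n_levels)

    def depress(self):     # current pulse that deletes one skyrmion
        self.count = max(self.count - 1, 0)

# Toy two-class task with synthetic data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 8))
y = (x @ rng.normal(size=8) > 0).astype(int)

synapses = [SkyrmionSynapse(rng=rng) for _ in range(8)]
for epoch in range(20):
    for xi, yi in zip(x, y):
        pred = int(sum(s.weight * v for s, v in zip(synapses, xi)) > 0)
        if pred != yi:  # sign-based update: one create/delete pulse per synapse
            for s, v in zip(synapses, xi):
                (s.potentiate if (yi - pred) * v > 0 else s.depress)()

acc = np.mean([int(sum(s.weight * v for s, v in zip(synapses, xi)) > 0) == yi
               for xi, yi in zip(x, y)])
print(f"toy training accuracy: {acc:.2f}")
```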
This work reports a compact behavioral model for gated-synaptic memory. The model is developed in Verilog-A for easy integration into computer-aided design of neuromorphic circuits using emerging memory. The model encompasses various forms of gated synapses within a single framework rather than being restricted to a single device type. The behavioral theory of the model is described in detail along with a full list of the default parameter settings. The model includes parameters such as a device's ideal set time, threshold voltage, general evolution of the conductance with respect to time, decay of the device's state, etc. Finally, the model's validity is shown via extensive simulation and fitting to experimentally reported data on published gated synapses.
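As a rough illustration of the kind of behavioral equations such a compact model captures, the sketch below steps a single synapse's conductance using a gate threshold voltage, a set time constant, and a decay time constant. It is written in Python rather than Verilog-A, and the equations, parameter names, and default values are assumptions for illustration, not the paper's model or its reported defaults.

```python
# Illustrative behavioral gated-synapse update (assumed form, not the paper's).
import math

def step_synapse(g, v_gate, dt,
                 v_th=0.5,        # gate threshold voltage [V] (assumed)
                 tau_set=1e-6,    # ideal set time constant [s] (assumed)
                 tau_decay=1e-3,  # retention/decay time constant [s] (assumed)
                 g_min=1e-6, g_max=1e-4):
    """Advance the conductance state g by one time step dt."""
    if abs(v_gate) > v_th:
        # Above threshold: conductance evolves toward g_max (potentiation)
        # or g_min (depression) with the set time constant.
        target = g_max if v_gate > 0 else g_min
        g += (target - g) * (1.0 - math.exp(-dt / tau_set))
    else:
        # Below threshold: the stored state slowly decays toward g_min.
        g += (g_min - g) * (1.0 - math.exp(-dt / tau_decay))
    return min(max(g, g_min), g_max)

# Example: ten positive gate pulses followed by an idle period.
g = 1e-5
for _ in range(10):
    g = step_synapse(g, v_gate=0.8, dt=1e-7)   # pulsed potentiation
print(f"after pulses: g = {g:.3e} S")
g = step_synapse(g, v_gate=0.0, dt=5e-4)        # idle decay
print(f"after decay:  g = {g:.3e} S")
```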
Neuromorphic computing uses brain-inspired principles to design circuits that can perform computational tasks with superior power efficiency to conventional computers. Approaches that use traditional electronic devices to create artificial neurons and synapses are, however, currently limited by the energy and area requirements of these components. Spintronic nanodevices, which exploit both the magnetic and electrical properties of electrons, can increase the energy efficiency and decrease the area of these circuits, and magnetic tunnel junctions are of particular interest as neuromorphic computing elements because they are compatible with standard integrated circuits and can support multiple functionalities. Here we review the development of spintronic devices for neuromorphic computing. We examine how magnetic tunnel junctions can serve as synapses and neurons, and how magnetic textures, such as domain walls and skyrmions, can function as neurons. We also explore spintronics-based implementations of neuromorphic computing tasks, such as pattern recognition in an associative memory, and discuss the challenges that exist in scaling up these systems.
Ferroelectric tunnel junctions (FTJs) based on hafnium zirconium oxide (Hf1-xZrxO2; HZO) are promising candidates for future applications such as low-power memories and neuromorphic computing. The tunneling electroresistance (TER) is tunable through the polarization state of the HZO film. To circumvent the challenge of fabricating thin ferroelectric HZO layers in the tunneling range of 1-3 nm, a ferroelectric/dielectric double layer sandwiched between two symmetric metal electrodes is used. Owing to the decoupling of the ferroelectric polarization storage layer from a dielectric tunneling layer with a higher bandgap, a significant TER ratio between the two polarization states is obtained. By exploiting previously reported switching behaviour and the gradual tunability of the resistance, FTJs are potential candidates for the emulation of synapses for neuromorphic computing in spiking neural networks. The implementation of two major components of a synapse is shown: long-term depression/potentiation, by varying the amplitude/width/number of voltage pulses applied to the artificial FTJ synapse, and spike-timing-dependent plasticity curves, by applying time-delayed voltages at each electrode. These experimental findings show the potential of hafnia-based FTJs for implementing spiking neural networks and neuromorphic computing.
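To make the second mechanism concrete, the sketch below estimates an STDP-style conductance change from the overlap of time-delayed pre- and post-synaptic pulses applied to the two electrodes; only where the combined waveform exceeds an assumed switching voltage does the polarization, and hence the weight, change. The pulse shape, switching voltage, sign convention, and scaling constant are illustrative assumptions, not the measured device parameters.

```python
# Illustrative STDP-from-pulse-overlap sketch (assumed waveforms and thresholds).
import numpy as np

def spike(t, amp=1.0, width=1e-6, tau=2e-6):
    """Assumed spike shape: rectangular head followed by a decaying negative tail."""
    head = amp * ((t >= 0) & (t < width))
    tail = -amp * np.exp(-(t - width) / tau) * (t >= width)
    return head + tail

def weight_change(delta_t, v_switch=1.2, k=1e6):
    """Relative conductance change for a pre/post delay delta_t (seconds)."""
    t = np.linspace(-10e-6, 20e-6, 60001)
    v = spike(t) - spike(t - delta_t)            # net voltage across the junction
    over = np.clip(np.abs(v) - v_switch, 0.0, None)
    # Assumed sign convention: a net negative voltage partially switches the
    # polarization toward the high-conductance state (potentiation).
    return k * float(np.sum(-np.sign(v) * over) * (t[1] - t[0]))

for dt in (-4e-6, -1e-6, 1e-6, 4e-6):
    print(f"delta_t = {dt:+.0e} s -> relative weight change {weight_change(dt):+.3f}")
```

Potentiation appears for positive delays (post after pre), depression for negative delays, and the magnitude decays as the pulses stop overlapping, which is the qualitative shape of an STDP curve.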
To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Here, we introduce a general in-the-loop learning framework based on surrogate gradients that resolves these issues. Using the BrainScaleS-2 neuromorphic system, we show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, far less than one spike per hidden neuron and input, perform inference at rates of up to 85k frames per second, and consume less than 200 mW. In summary, our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
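The core trick is the surrogate gradient: the forward pass uses a hard spiking threshold, while the backward pass substitutes a smooth derivative so gradients can propagate through the network (and, in the in-the-loop setting, through measurements taken from the analog substrate). The PyTorch sketch below is a minimal toy illustration of that idea, not the BrainScaleS-2 training code; the SuperSpike-style surrogate, its steepness, the leak, threshold, and toy task are all assumptions.

```python
# Illustrative surrogate-gradient training of a single LIF neuron (toy example).
import torch

class SurrogateSpike(torch.autograd.Function):
    beta = 10.0  # surrogate steepness (assumed)

    @staticmethod
    def forward(ctx, v):                 # v = membrane potential minus threshold
        ctx.save_for_backward(v)
        return (v > 0).float()           # hard binary spike in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Smooth SuperSpike-style derivative replaces the Heaviside gradient.
        return grad_out / (SurrogateSpike.beta * v.abs() + 1.0) ** 2

spike_fn = SurrogateSpike.apply

# Toy task: tune input weights so the neuron emits a target number of spikes.
torch.manual_seed(0)
w = torch.randn(20, requires_grad=True)
inputs = (torch.rand(50, 20) < 0.3).float()      # 50 time steps, 20 input lines
target_spikes = 5.0
opt = torch.optim.SGD([w], lr=0.1)

for step in range(100):
    v, n_spikes = torch.zeros(()), torch.zeros(())
    for t in range(inputs.shape[0]):
        v = 0.9 * v + inputs[t] @ w              # leaky integration (leak assumed)
        s = spike_fn(v - 1.0)                    # firing threshold of 1.0 (assumed)
        v = v * (1.0 - s)                        # reset on spike
        n_spikes = n_spikes + s
    loss = (n_spikes - target_spikes) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"spike count after training: {n_spikes.item():.0f} (target {target_spikes:.0f})")
```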
Research in photonic computing has flourished due to the proliferation of optoelectronic components on photonic integration platforms. Photonic integrated circuits have enabled ultrafast artificial neural networks, providing a framework for a new class of information processing machines. Algorithms running on such hardware have the potential to address the growing demand for machine learning and artificial intelligence, in areas such as medical diagnosis, telecommunications, and high-performance and scientific computing. In parallel, the development of neuromorphic electronics has highlighted challenges in that domain, in particular, related to processor latency. Neuromorphic photonics offers sub-nanosecond latencies, providing a complementary opportunity to extend the domain of artificial intelligence. Here, we review recent advances in integrated photonic neuromorphic systems, discuss current and future challenges, and outline the advances in science and technology needed to meet those challenges.