
A Compact Gated-Synapse Model for Neuromorphic Circuits

Added by Alexander Jones
Publication date: 2020
Language: English





This work reports a compact behavioral model for gated-synaptic memory. The model is developed in Verilog-A for easy integration into computer-aided design of neuromorphic circuits using emerging memory. It encompasses various forms of gated synapses within a single framework rather than being restricted to a single type. The behavioral theory of the model is described in detail along with a full list of the default parameter settings. The model includes parameters such as a device's ideal set time, threshold voltage, the general evolution of the conductance with respect to time, and the decay of the device's state. Finally, the model's validity is shown via extensive simulation and fitting to experimentally reported data on published gated synapses.
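To illustrate how such a behavioral model might be structured, the following Python sketch integrates a single internal state variable under assumed potentiation, threshold, and decay dynamics. The function name, parameter values, and state equation are hypothetical placeholders for illustration, not the paper's Verilog-A implementation or its default parameters.

```python
# Minimal behavioral sketch of a gated synapse (illustrative only).
# State w in [0, 1] is driven toward 1 while the gate voltage exceeds
# a threshold and decays toward 0 otherwise; the conductance read out
# between source and drain interpolates between g_min and g_max.
import numpy as np

def simulate_gated_synapse(v_gate, dt=1e-7, v_th=0.5,
                           t_set=1e-6, tau_decay=1e-3,
                           g_min=1e-9, g_max=1e-6):
    """Return the conductance trace for a gate-voltage waveform."""
    w = 0.0                       # internal synaptic state, 0..1
    g = np.empty_like(v_gate)
    for i, vg in enumerate(v_gate):
        if vg > v_th:
            w += dt * (1.0 - w) / t_set      # set toward w = 1
        else:
            w -= dt * w / tau_decay          # volatile decay toward w = 0
        g[i] = g_min + (g_max - g_min) * w   # conductance readout
    return g

# Example: a 2 us gate pulse followed by decay of the state.
t = np.arange(0.0, 5e-5, 1e-7)
g = simulate_gated_synapse(np.where(t < 2e-6, 1.0, 0.0))
```

A Verilog-A version of the same idea would express the state update as a differential equation solved by the circuit simulator, which is what allows the model to drop into standard computer-aided design flows.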



Related Research

Since the experimental discovery of magnetic skyrmions a decade ago, there have been significant efforts to bring these quasiparticles into fully functional, all-electrical devices, inspired by their fascinating physical and topological properties suitable for future low-power electronics. Here, we experimentally demonstrate such a device: an electrically operated skyrmion-based artificial synapse designed for neuromorphic computing. We show that controlled current-induced creation, motion, detection, and deletion of skyrmions in ferrimagnetic multilayers can be harnessed in a single device at room temperature to imitate the behavior of biological synapses. Using simulations, we demonstrate that such skyrmion-based synapses could be used to perform neuromorphic pattern-recognition computing on a handwriting recognition data set, reaching an accuracy of ~89%, comparable to the software-based training accuracy of ~94%. Chip-level simulation then highlights the potential of the skyrmion synapse compared to existing technologies. Our findings experimentally illustrate the basic concepts of skyrmion-based fully functional electronic devices while providing a new building block in the emerging field of spintronics-based bio-inspired computing.
To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy-efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Here, we introduce a general in-the-loop learning framework based on surrogate gradients that resolves these issues. Using the BrainScaleS-2 neuromorphic system, we show that learning self-corrects for device mismatch resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, far less than one spike per hidden neuron and input, perform inference at rates of up to 85 k frames/second, and consume less than 200 mW. In summary, our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
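For readers unfamiliar with the surrogate-gradient technique the abstract above builds on, the sketch below shows the generic idea in PyTorch: a hard threshold in the forward pass paired with a smooth stand-in derivative in the backward pass. The fast-sigmoid surrogate and its steepness value are common choices assumed here; they are not taken from the BrainScaleS-2 in-the-loop framework itself.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate derivative."""
    beta = 10.0  # assumed steepness; tuned per network in practice

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()          # forward: binary spike event

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Backward: replace Heaviside's zero/undefined derivative with
        # the derivative of a fast sigmoid, 1 / (beta * |v| + 1)^2.
        return grad_output / (SurrogateSpike.beta * v.abs() + 1.0) ** 2

spike = SurrogateSpike.apply  # usable like any differentiable op
```

In an in-the-loop setting, the forward pass runs on the analog hardware while gradients of this form are computed on a host computer, which is how learning can absorb device mismatch.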
Deep artificial neural networks (ANNs) can represent a wide range of complex functions. Implementing ANNs in Von Neumann computing systems, though, incurs a high energy cost due to the bottleneck created between CPU and memory. Implementation on neuromorphic systems may help to reduce energy demand. Conventional ANNs must be converted into equivalent Spiking Neural Networks (SNNs) in order to be deployed on neuromorphic chips. This paper presents a way to perform this translation. We map the ANN weights to SNN synapses layer-by-layer by forming a least-square-error approximation problem at each layer. An optimal set of synapse weights may then be found for a given choice of ANN activation function and SNN neuron. Using an appropriate constrained solver, we can generate SNNs compatible with digital, analog, or hybrid chip architectures. We present an optimal node pruning method to allow SNN layer sizes to be set by the designer. To illustrate this process, we convert three ANNs, including one convolutional network, to SNNs. In all three cases, a simple linear program solver was used. The experiments show that the resulting networks maintain agreement with the original ANN and excellent performance on the evaluation tasks. The networks were also reduced in size with little loss in task performance.
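A minimal sketch of the layer-by-layer least-squares idea described above, assuming ReLU activations and a plain unconstrained solver for illustration (the paper pairs the fit with an appropriate constrained solver matched to the chosen SNN neuron and chip architecture):

```python
# Fit SNN synapse weights so the layer reproduces the ANN activations
# in the least-square-error sense. X is a batch of layer inputs;
# W_ann, b_ann are the trained ANN weights for this layer.
import numpy as np

def map_layer(X, W_ann, b_ann):
    """Least-squares mapping of one ANN layer onto SNN synapses."""
    target = np.maximum(X @ W_ann + b_ann, 0.0)    # ANN ReLU activations
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    sol, *_ = np.linalg.lstsq(Xb, target, rcond=None)
    return sol[:-1], sol[-1]                       # W_snn, b_snn
```

Repeating this fit layer by layer, with the SNN neuron's rate response substituted for the ReLU target, yields a full network translation; swapping the solver for a linear program lets hardware constraints (e.g., weight ranges or signs) be enforced directly.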
Non-Volatile Memories (NVMs) such as Resistive RAM (RRAM) are used in neuromorphic systems to implement high-density and low-power analog synaptic weights. Unfortunately, an RRAM cell can switch its state after reading its content a certain number of times. Such behavior challenges the integrity and program-once-read-many-times philosophy of implementing machine learning inference on neuromorphic systems, impacting the Quality-of-Service (QoS). Elevated temperatures and frequent usage can significantly shorten the number of times an RRAM cell can be reliably read before it becomes absolutely necessary to reprogram. We propose an architectural solution to extend the read endurance of RRAM-based neuromorphic systems. We make two key contributions. First, we formulate the read endurance of an RRAM cell as a function of the programmed synaptic weight and its activation within a machine learning workload. Second, we propose an intelligent workload mapping strategy incorporating the endurance formulation to place the synapses of a machine learning model onto the RRAM cells of the hardware. The objective is to extend the inference lifetime, defined as the number of times the model can be used to generate output (inference) before the trained weights need to be reprogrammed on the RRAM cells of the system. We evaluate our architectural solution with machine learning workloads on a cycle-accurate simulator of an RRAM-based neuromorphic system. Our results demonstrate a significant increase in inference lifetime with only a minimal performance impact.
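The mapping strategy could take the following toy form, assuming per-cell read budgets and per-synapse read counts are known quantities; the greedy pairing below is an illustrative stand-in for the paper's endurance formulation and mapping algorithm, not a reproduction of it.

```python
# Endurance-aware placement sketch: put the most frequently read
# synapses on the cells with the largest remaining read budgets.
import numpy as np

def place_synapses(reads_per_inf, cell_budget):
    """Greedy placement: hottest synapse onto the most durable cell."""
    syn_order = np.argsort(-reads_per_inf)     # most-read synapses first
    cell_order = np.argsort(-cell_budget)      # largest budgets first
    assignment = np.empty(len(reads_per_inf), dtype=int)
    assignment[syn_order] = cell_order[:len(syn_order)]
    return assignment                          # synapse index -> cell index

def inference_lifetime(assignment, reads_per_inf, cell_budget):
    # The model can run until its most-stressed cell exhausts its budget.
    return int((cell_budget[assignment] / reads_per_inf).min())

reads = np.array([120.0, 40.0, 5.0])           # reads per inference
budget = np.array([1e8, 5e7, 2e8, 9e7])        # reliable reads per cell
a = place_synapses(reads, budget)
print(inference_lifetime(a, reads, budget))
```

Sorting both lists in descending order and pairing them maximizes the smallest budget-to-reads ratio, which is exactly the inference-lifetime objective defined in the abstract.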
A superconducting optoelectronic neuron will produce a small current pulse upon reaching threshold. We present an amplifier chain that converts this small current pulse to a voltage pulse sufficient to produce light from a semiconductor diode. This light is the signal used to communicate between neurons in the network. The amplifier chain comprises a thresholding Josephson junction, a relaxation oscillator Josephson junction, a superconducting thin-film current-gated current amplifier, and a superconducting thin-film current-gated voltage amplifier. We analyze the performance of the elements in the amplifier chain in the time domain to calculate the energy consumption per photon created for several values of light-emitting diode capacitance and efficiency. The speed of the amplification sequence allows neuronal firing up to at least 20 MHz with power density low enough to be cooled easily with standard ⁴He cryogenic systems operating at 4.2 K.