
Superconducting Optoelectronic Neurons III: Synaptic Plasticity

Posted by Jeffrey Shainline
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





As a means of dynamically reconfiguring the synaptic weight of a superconducting optoelectronic loop neuron, a superconducting flux storage loop is inductively coupled to the synaptic current bias of the neuron. A standard flux memory cell is used to achieve a binary synapse, and loops capable of storing many flux quanta are used to enact multi-stable synapses. Circuits are designed to implement supervised learning wherein current pulses add or remove flux from the loop to strengthen or weaken the synaptic weight. Designs are presented for circuits with hundreds of intermediate synaptic weights between minimum and maximum strengths. Circuits for implementing unsupervised learning are modeled using two photons to strengthen and two photons to weaken the synaptic weight via Hebbian and anti-Hebbian learning rules, and techniques are proposed to control the learning rate. Implementation of short-term plasticity, homeostatic plasticity, and metaplasticity in loop neurons is discussed.
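
To make the storage scheme concrete, the following is a minimal event-level sketch in Python of a multi-stable synapse whose weight is set by the number of flux quanta held in the storage loop, with current pulses adding or removing quanta as in the supervised-learning circuits described above. The class and parameter names (FluxStorageSynapse, n_levels) and the inductance values are illustrative assumptions, not values from the paper.

PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb

class FluxStorageSynapse:
    def __init__(self, n_levels=100, loop_inductance=1e-9, mutual=1e-12):
        self.n_levels = n_levels   # storable flux quanta (multi-stability)
        self.n_phi = 0             # flux quanta currently stored
        self.L = loop_inductance   # storage-loop inductance, H (assumed)
        self.M = mutual            # mutual inductance to synaptic bias, H (assumed)

    def potentiate(self, n=1):
        # "strengthen": a supervised current pulse adds flux to the loop
        self.n_phi = min(self.n_phi + n, self.n_levels)

    def depress(self, n=1):
        # "weaken": a supervised current pulse removes flux from the loop
        self.n_phi = max(self.n_phi - n, 0)

    @property
    def weight(self):
        # normalized synaptic weight between minimum and maximum strength
        return self.n_phi / self.n_levels

    @property
    def coupled_flux(self):
        # flux coupled into the synaptic bias loop: M times the
        # circulating current n_phi * PHI0 / L stored in the loop
        return self.M * self.n_phi * PHI0 / self.L

Setting n_levels = 1 recovers the binary synapse of the standard flux memory cell; larger values give the hundreds of intermediate weights mentioned above.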


Read also

The design of neural hardware is informed by the prominence of differentiated processing and information integration in cognitive systems. The central role of communication leads to the principal assumption of the hardware platform: signals between neurons should be optical to enable fanout and communication with minimal delay. The requirement of energy efficiency leads to the utilization of superconducting detectors to receive single-photon signals. We discuss the potential of superconducting optoelectronic hardware to achieve the spatial and temporal information integration advantageous for cognitive processing, and we consider physical scaling limits based on light-speed communication. We introduce the superconducting optoelectronic neurons and networks that are the subject of the subsequent papers in this series.
A superconducting optoelectronic neuron will produce a small current pulse upon reaching threshold. We present an amplifier chain that converts this small current pulse to a voltage pulse sufficient to produce light from a semiconductor diode. This light is the signal used to communicate between neurons in the network. The amplifier chain comprises a thresholding Josephson junction, a relaxation oscillator Josephson junction, a superconducting thin-film current-gated current amplifier, and a superconducting thin-film current-gated voltage amplifier. We analyze the performance of the elements in the amplifier chain in the time domain to calculate the energy consumption per photon created for several values of light-emitting diode capacitance and efficiency. The speed of the amplification sequence allows neuronal firing up to at least 20 MHz with power density low enough to be cooled easily with standard $^4$He cryogenic systems operating at 4.2 K.
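
As a rough companion to that time-domain analysis, the scaling of energy per photon with diode capacitance and efficiency can be estimated in a few lines: charging the LED capacitance C to voltage V costs C*V^2/2, and the resulting pulse yields roughly efficiency * C*V/e photons. This back-of-envelope sketch is an illustrative assumption, not the paper's calculation; the example values are arbitrary.

e = 1.602176634e-19  # elementary charge, C

def energy_per_photon(C, V, efficiency):
    E_pulse = 0.5 * C * V**2             # energy to charge the LED capacitance
    n_photons = efficiency * C * V / e   # photons per pulse at the given efficiency
    return E_pulse / n_photons           # simplifies to e*V / (2*efficiency)

# Example: a 100 fF diode driven to 1 V at 1e-3 efficiency
print(energy_per_photon(100e-15, 1.0, 1e-3))  # ~8e-17 J per photon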
Optical communication achieves high fanout and short delay advantageous for information integration in neural systems. Superconducting detectors enable signaling with single photons for maximal energy efficiency. We present designs of superconducting optoelectronic neurons based on superconducting single-photon detectors, Josephson junctions, semiconductor light sources, and multi-planar dielectric waveguides. These circuits achieve complex synaptic and neuronal functions with high energy efficiency, leveraging the strengths of light for communication and superconducting electronics for computation. The neurons send few-photon signals to synaptic connections. These signals communicate neuronal firing events as well as update synaptic weights. Spike-timing-dependent plasticity is implemented with a single photon triggering each step of the process. Microscale light-emitting diodes and waveguide networks enable connectivity from a neuron to thousands of synaptic connections, and the use of light for communication enables synchronization of neurons across an area limited only by the distance light can travel within the period of a network oscillation. Experimentally, each of the requisite circuit elements has been demonstrated, yet a hardware platform combining them all has not been attempted. Compared to digital logic or quantum computing, device tolerances are relaxed. For this neural application, optical sources providing incoherent pulses with 10,000 photons produced with efficiency of 10$^{-3}$ operating at 20 MHz at 4.2 K are sufficient to enable a massively scalable neural computing platform with connectivity comparable to the brain and thirty thousand times higher speed.
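
The figures quoted at the end of this abstract permit a quick per-source power estimate. The 1 eV of electrical energy per injected carrier assumed below is an illustrative order of magnitude, not a value from the paper.

e = 1.602176634e-19  # elementary charge, C (1 eV = e joules)

photons_per_pulse = 1e4   # 10,000 photons per pulse
efficiency = 1e-3         # source efficiency
firing_rate = 20e6        # 20 MHz maximum firing rate

carriers_per_pulse = photons_per_pulse / efficiency  # 1e7 injected carriers
energy_per_pulse = carriers_per_pulse * 1.0 * e      # ~1.6 pJ at 1 eV/carrier
power = energy_per_pulse * firing_rate               # ~32 uW per source
print(f"{power * 1e6:.0f} uW per source at maximum firing rate")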
Circuits using superconducting single-photon detectors and Josephson junctions to perform signal reception, synaptic weighting, and integration are investigated. The circuits convert photon-detection events into flux quanta, the number of which is determined by the synaptic weight. The current from many synaptic connections is inductively coupled to a superconducting loop that implements the neuronal threshold operation. Designs are presented for synapses and neurons that perform integration as well as detect coincidence events for temporal coding. Both excitatory and inhibitory connections are demonstrated. It is shown that a neuron with a single integration loop can receive input from 1000 such synaptic connections, and neurons of similar design could employ many loops for dendritic processing.
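
The signal path described above reduces to an event-level model: each photon detection deposits a weight-dependent number of flux quanta in the integration loop, and the neuron fires when the accumulated signal crosses threshold. The sketch below is an abstraction for illustration, not a circuit simulation; the leak term, reset behavior, and function names are assumptions.

def run_neuron(events, weights, threshold, leak=0.0, dt=1.0):
    # events: list of (time_step, synapse_index) photon detections
    # weights: flux quanta added per detection for each synapse;
    # negative values model inhibitory connections
    by_step = {}
    for t, s in events:
        by_step.setdefault(t, []).append(s)
    stored, spikes = 0.0, []
    for t in range(max(by_step) + 1):
        stored *= (1.0 - leak * dt)          # optional loop decay
        for s in by_step.get(t, []):
            stored += weights[s]             # flux quanta from this synapse
        if stored >= threshold:              # neuronal threshold operation
            spikes.append(t)
            stored = 0.0                     # reset after firing
    return spikes

# Coincidence detection: only near-simultaneous inputs reach threshold
print(run_neuron([(0, 0), (0, 1), (5, 0)], weights=[3, 4], threshold=6))  # [0]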
The recently proposed network model, Operational Neural Networks (ONNs), can generalize the conventional Convolutional Neural Networks (CNNs), which are homogeneous with only a linear neuron model. As a heterogeneous network model, ONNs are based on a generalized neuron model that can encapsulate any set of non-linear operators to boost diversity and to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data. However, the default search method for finding optimal operators in ONNs, the so-called Greedy Iterative Search (GIS) method, usually takes several training sessions to find a single operator set per layer. This is not only computationally demanding but also limits network heterogeneity, since the same set of operators is then used for all neurons in each layer. To address this deficiency and exploit a superior level of heterogeneity, this study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network based on the synaptic plasticity paradigm, which constitutes the essential learning theory in biological neurons. During training, each operator set in the library can be evaluated by its synaptic plasticity level, ranked from worst to best, and an elite ONN can then be configured using the top-ranked operator sets found at each hidden layer. Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve superior learning performance compared to GIS-based ONNs, and as a result the performance gap over CNNs widens further.
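
The selection step can be paraphrased in a few lines: score every candidate operator set at each hidden layer by its measured synaptic plasticity level during training, then assemble the elite network from the top-ranked set per layer. The operator names and scores below are placeholder data, and the paper defines its own plasticity measure; this only illustrates the ranking-and-selection logic.

def configure_elite(operator_library, plasticity_scores):
    # plasticity_scores[layer] maps operator-set name -> measured plasticity
    elite = []
    for layer_scores in plasticity_scores:
        best = max(layer_scores, key=layer_scores.get)  # top-ranked set
        elite.append(operator_library[best])
    return elite

library = {
    "mul+tanh": "multiplicative nodal operator, tanh activation",
    "sin+sum": "sinusoidal nodal operator, summation pool",
    "chirp+median": "chirp nodal operator, median pool",
}
scores = [
    {"mul+tanh": 0.42, "sin+sum": 0.71, "chirp+median": 0.33},  # hidden layer 1
    {"mul+tanh": 0.58, "sin+sum": 0.49, "chirp+median": 0.61},  # hidden layer 2
]
print(configure_elite(library, scores))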