
Superconducting Optoelectronic Neurons I: General Principles

Published by: Jeffrey Shainline
Publication date: 2018
Research field: Informatics engineering
Language: English





The design of neural hardware is informed by the prominence of differentiated processing and information integration in cognitive systems. The central role of communication leads to the principal assumption of the hardware platform: signals between neurons should be optical to enable fanout and communication with minimal delay. The requirement of energy efficiency leads to the utilization of superconducting detectors to receive single-photon signals. We discuss the potential of superconducting optoelectronic hardware to achieve the spatial and temporal information integration advantageous for cognitive processing, and we consider physical scaling limits based on light-speed communication. We introduce the superconducting optoelectronic neurons and networks that are the subject of the subsequent papers in this series.
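As a rough illustration of the light-speed scaling limit mentioned in the abstract, the short sketch below estimates the radius over which neurons could synchronize within one network oscillation period. The 20 MHz rate is taken from the companion papers in this series; the effective waveguide index of 1.5 is an assumed value for the example.

```python
# Back-of-envelope estimate of the light-cone radius for network
# synchronization: neurons can stay synchronized only if light can
# cross the network within one oscillation period.

C_VACUUM = 3.0e8    # speed of light in vacuum, m/s
N_EFF = 1.5         # assumed effective index of a dielectric waveguide
F_NETWORK = 20e6    # network oscillation frequency, Hz (20 MHz, from the text)

period = 1.0 / F_NETWORK                 # one oscillation period, s
radius = (C_VACUUM / N_EFF) * period     # distance light travels in one period

print(f"Oscillation period: {period * 1e9:.0f} ns")   # 50 ns
print(f"Synchronization radius: {radius:.1f} m")       # ~10 m for these values
```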




Read also

As a means of dynamically reconfiguring the synaptic weight of a superconducting optoelectronic loop neuron, a superconducting flux storage loop is inductively coupled to the synaptic current bias of the neuron. A standard flux memory cell is used to achieve a binary synapse, and loops capable of storing many flux quanta are used to enact multi-stable synapses. Circuits are designed to implement supervised learning wherein current pulses add or remove flux from the loop to strengthen or weaken the synaptic weight. Designs are presented for circuits with hundreds of intermediate synaptic weights between minimum and maximum strengths. Circuits for implementing unsupervised learning are modeled using two photons to strengthen and two photons to weaken the synaptic weight via Hebbian and anti-Hebbian learning rules, and techniques are proposed to control the learning rate. Implementation of short-term plasticity, homeostatic plasticity, and metaplasticity in loop neurons is discussed.
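As a behavioral illustration of the multi-stable synapse described above, the sketch below models the weight as a count of stored flux quanta stepped up or down by learning pulses. The class name `FluxStorageSynapse` and the 200-state range are hypothetical choices for the example (the abstract mentions hundreds of intermediate weights); the real circuit dynamics are not modeled.

```python
# Minimal behavioral model of a multi-stable flux-storage synapse:
# the synaptic weight is the number of flux quanta stored in a loop,
# and supervised-learning current pulses add or remove single quanta.

class FluxStorageSynapse:
    def __init__(self, n_states: int = 200):
        self.n_states = n_states   # number of intermediate weights
        self.flux_quanta = 0       # stored flux quanta (0 = weakest)

    def strengthen(self):
        """Current pulse adds one flux quantum (up to saturation)."""
        self.flux_quanta = min(self.flux_quanta + 1, self.n_states)

    def weaken(self):
        """Current pulse removes one flux quantum (down to zero)."""
        self.flux_quanta = max(self.flux_quanta - 1, 0)

    @property
    def weight(self) -> float:
        """Normalized synaptic weight between 0 and 1."""
        return self.flux_quanta / self.n_states

syn = FluxStorageSynapse()
for _ in range(50):
    syn.strengthen()
print(f"weight after 50 pulses: {syn.weight:.2f}")   # 0.25
```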
A superconducting optoelectronic neuron will produce a small current pulse upon reaching threshold. We present an amplifier chain that converts this small current pulse to a voltage pulse sufficient to produce light from a semiconductor diode. This light is the signal used to communicate between neurons in the network. The amplifier chain comprises a thresholding Josephson junction, a relaxation oscillator Josephson junction, a superconducting thin-film current-gated current amplifier, and a superconducting thin-film current-gated voltage amplifier. We analyze the performance of the elements in the amplifier chain in the time domain to calculate the energy consumption per photon created for several values of light-emitting diode capacitance and efficiency. The speed of the amplification sequence allows neuronal firing up to at least 20 MHz with power density low enough to be cooled easily with standard $^4$He cryogenic systems operating at 4.2 K.
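To make the energy accounting concrete, the sketch below applies a common simplification: each firing event dissipates the energy stored on the LED capacitance, $CV^2/2$, and emits $\eta CV/e$ photons, so the energy per photon reduces to $eV/(2\eta)$, independent of capacitance. This is a back-of-envelope model under assumed voltage and efficiency values, not the paper's time-domain analysis.

```python
# Rough energy accounting for the LED drive stage, assuming each firing
# event dissipates the capacitor energy C*V^2/2 and emits eta*C*V/e
# photons. In this model the energy per photon is e*V/(2*eta): it depends
# on drive voltage and efficiency, but the capacitance cancels out.

E_CHARGE = 1.602e-19   # elementary charge, C

def energy_per_photon(voltage, efficiency):
    """Energy dissipated per photon created: e*V / (2*eta)."""
    return E_CHARGE * voltage / (2.0 * efficiency)

# Sweep assumed source efficiencies at an assumed 1 V drive.
for eta in (1e-4, 1e-3, 1e-2):
    print(f"eta = {eta:.0e} -> {energy_per_photon(1.0, eta):.2e} J/photon")
```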
Optical communication achieves high fanout and short delay advantageous for information integration in neural systems. Superconducting detectors enable signaling with single photons for maximal energy efficiency. We present designs of superconducting optoelectronic neurons based on superconducting single-photon detectors, Josephson junctions, semiconductor light sources, and multi-planar dielectric waveguides. These circuits achieve complex synaptic and neuronal functions with high energy efficiency, leveraging the strengths of light for communication and superconducting electronics for computation. The neurons send few-photon signals to synaptic connections. These signals communicate neuronal firing events as well as update synaptic weights. Spike-timing-dependent plasticity is implemented with a single photon triggering each step of the process. Microscale light-emitting diodes and waveguide networks enable connectivity from a neuron to thousands of synaptic connections, and the use of light for communication enables synchronization of neurons across an area limited only by the distance light can travel within the period of a network oscillation. Experimentally, each of the requisite circuit elements has been demonstrated, yet a hardware platform combining them all has not been attempted. Compared to digital logic or quantum computing, device tolerances are relaxed. For this neural application, optical sources providing incoherent pulses with 10,000 photons produced with efficiency of 10$^{-3}$ operating at 20 MHz at 4.2 K are sufficient to enable a massively scalable neural computing platform with connectivity comparable to the brain and thirty thousand times higher speed.
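The figures quoted above (pulses of 10,000 photons, source efficiency 10$^{-3}$, firing at up to 20 MHz) permit a simple order-of-magnitude power estimate per neuron, sketched below. The ~1 eV energy per injected carrier is an assumed round number for a near-infrared source, not a value from the paper.

```python
# Order-of-magnitude power estimate per neuron from the quoted figures:
# pulses of 10,000 photons, source efficiency 1e-3, firing at 20 MHz.

E_CHARGE = 1.602e-19              # elementary charge, C

photons_per_pulse = 1e4
efficiency = 1e-3
firing_rate = 20e6                # Hz, worst-case sustained firing
energy_per_carrier = 1.0 * E_CHARGE   # J, assuming ~1 eV per injected carrier

carriers_per_pulse = photons_per_pulse / efficiency      # 1e7 carriers
pulse_energy = carriers_per_pulse * energy_per_carrier   # J per firing event
power = pulse_energy * firing_rate                       # W per neuron

print(f"Pulse energy: {pulse_energy:.1e} J")             # ~1.6e-12 J
print(f"Peak power per neuron: {power * 1e6:.0f} uW")    # ~32 uW at 20 MHz
```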
Circuits using superconducting single-photon detectors and Josephson junctions to perform signal reception, synaptic weighting, and integration are investigated. The circuits convert photon-detection events into flux quanta, the number of which is determined by the synaptic weight. The current from many synaptic connections is inductively coupled to a superconducting loop that implements the neuronal threshold operation. Designs are presented for synapses and neurons that perform integration as well as detect coincidence events for temporal coding. Both excitatory and inhibitory connections are demonstrated. It is shown that a neuron with a single integration loop can receive input from 1000 such synaptic connections, and neurons of similar design could employ many loops for dendritic processing.
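A minimal behavioral sketch of this integration scheme follows: each photon detection adds a number of flux quanta set by the synaptic weight, a single loop accumulates them, and the neuron fires on crossing threshold. The threshold, weights, and detection probability are illustrative values, not circuit-derived parameters.

```python
# Behavioral sketch of single-loop synaptic integration: photon
# detections deposit weight-many flux quanta into the loop, and the
# neuron fires when the accumulated flux crosses threshold. The reset
# to zero after firing is a simplification of the loop dynamics.

import random

def integrate_and_fire(weights, detection_prob, threshold, steps):
    """Count firing events of a single-loop neuron over `steps` time steps."""
    flux = 0      # flux quanta stored in the integration loop
    fires = 0
    for _ in range(steps):
        for w in weights:                        # w > 0 excitatory, w < 0 inhibitory
            if random.random() < detection_prob: # photon arrives at this synapse
                flux += w                        # weight = quanta per detection
        if flux >= threshold:
            fires += 1
            flux = 0                             # loop resets after firing
    return fires

random.seed(0)
weights = [random.choice([-1, 1, 2, 3]) for _ in range(1000)]  # 1000 synapses
print(integrate_and_fire(weights, detection_prob=0.01, threshold=100, steps=100))
```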
We introduce a new supervised learning algorithm to train spiking neural networks for classification. The algorithm overcomes a limitation of existing multi-spike learning methods: it solves the problem of interference between interacting output spikes during a learning trial. This problem of learning interference causes learning performance in existing approaches to decrease as the number of output spikes increases, and represents an important limitation of existing multi-spike learning approaches. We address learning interference by introducing a novel mechanism to balance the magnitudes of weight adjustments during learning, which in theory allows every spike to converge simultaneously to its desired timing. Our results indicate that our method achieves significantly higher memory capacity and faster convergence than existing approaches for multi-spike classification. On the widely used Iris and MNIST datasets, our algorithm achieves predictive performance competitive with state-of-the-art approaches.
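One plausible reading of the balancing mechanism is sketched below: each output spike's raw weight-gradient contribution is rescaled to a common magnitude before the updates are summed, so no single spike dominates a learning trial. This is a generic normalization meant only to convey the idea of balanced adjustments; it is not the paper's specific rule.

```python
# Illustrative sketch of balancing weight adjustments across multiple
# output spikes: each spike's gradient is normalized to unit magnitude
# before summation, so spikes with large raw gradients cannot swamp
# the updates owed to the other spikes in the same trial.

import numpy as np

def balanced_update(weights, per_spike_grads, lr=0.01):
    """Sum per-spike weight gradients after equalizing their norms."""
    balanced = []
    for g in per_spike_grads:          # one gradient vector per output spike
        norm = np.linalg.norm(g)
        if norm > 0:
            balanced.append(g / norm)  # equal magnitude for every spike
    if balanced:
        weights = weights + lr * np.sum(balanced, axis=0)
    return weights

rng = np.random.default_rng(0)
w = rng.normal(size=50)
grads = [rng.normal(size=50) * s for s in (0.1, 1.0, 10.0)]  # unequal spikes
w = balanced_update(w, grads)
```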