
All-optical spiking neurosynaptic networks with self-learning capabilities

Posted by Wolfram Pernice
Publication date: 2021
Paper language: English


Software implementations, via neural networks, of brain-inspired computing approaches underlie many important modern-day computational tasks, from image processing to speech recognition, artificial intelligence and deep learning applications. Yet, unlike real neural tissue, traditional computing architectures physically separate the core computing functions of memory and processing, making fast, efficient and low-energy brain-like computing difficult to achieve. To overcome such limitations, an attractive alternative goal is to design direct hardware mimics of brain neurons and synapses which, when connected in appropriate networks (or neuromorphic systems), process information in a way more fundamentally analogous to that of real brains. Here we present an all-optical approach to achieving such a goal. Specifically, we demonstrate an all-optical spiking neuron device and connect it, via an integrated photonics network, to photonic synapses to deliver a small-scale all-optical neurosynaptic system capable of supervised and unsupervised learning. Moreover, we exploit wavelength-division multiplexing techniques to implement a scalable circuit architecture for photonic neural networks, successfully demonstrating pattern recognition directly in the optical domain using a photonic system comprising 140 elements. Such optical implementations of neurosynaptic networks promise access to the high speed and bandwidth inherent to optical systems, which would be very attractive for the direct processing of telecommunication and visual data in the optical domain.
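To make the neuron-plus-synapse picture concrete, the following is a minimal numerical sketch, not the paper's device physics: photonic synapses are modeled as transmission weights in [0, 1] summed across wavelength channels, the neuron as a hard power threshold, and unsupervised learning as a Hebbian-style reinforcement rule. All parameter names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper); the real device
# stores weights in phase-change-material cells and updates them optically.
N_SYNAPSES = 4          # inputs multiplexed on separate wavelengths
THRESHOLD = 1.0         # neuron fires if summed weighted power exceeds this
ETA = 0.1               # learning rate for the unsupervised update

weights = rng.uniform(0.2, 0.8, N_SYNAPSES)  # synaptic transmission in [0, 1]

def step(inputs, weights):
    """One forward pass: WDM summation of weighted input powers,
    followed by a hard spiking threshold."""
    activation = np.dot(weights, inputs)
    return activation > THRESHOLD

def unsupervised_update(inputs, weights, fired):
    """Toy Hebbian-style rule: if the neuron fired, strengthen the
    synapses whose inputs were active (clipped to valid transmission)."""
    if fired:
        weights = np.clip(weights + ETA * inputs, 0.0, 1.0)
    return weights

# Present a repeated input pattern; the contributing synapses potentiate.
pattern = np.array([1.0, 1.0, 0.0, 0.0])
for _ in range(10):
    fired = step(pattern, weights)
    weights = unsupervised_update(pattern, weights, fired)

print("learned weights:", np.round(weights, 2))
```

In the reported system the same feedback idea is realized physically rather than in floating-point arithmetic; this sketch only emulates the loop structure.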




Read also

Deep learning algorithms are revolutionising many aspects of modern life. Typically, they are implemented in CMOS-based hardware with severely limited memory access times and inefficient data routing. All-optical neural networks without any electro-optic …
Optical implementation of artificial neural networks has been attracting great attention due to its potential for parallel computation at the speed of light. Although all-optical deep neural networks (AODNNs) with a few neurons have recently been demonstrated experimentally with acceptable errors, the feasibility of large-scale AODNNs remains unknown, because errors may accumulate as the number of neurons and connections grows. Here, we demonstrate a scalable AODNN with programmable linear operations and tunable nonlinear activation functions. We verify its scalability by measuring and analyzing errors propagating from a single neuron to the entire network. The feasibility of AODNNs is further confirmed by recognizing handwritten digits and fashion items, respectively.
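The error-accumulation question raised in this abstract can be illustrated with a toy simulation. The layer count, the tanh activation, and the 2% multiplicative per-neuron noise below are assumed for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model: each optical layer applies a programmable linear
# operation followed by a tunable nonlinearity; hardware error is
# modeled as small multiplicative noise on every neuron output.
DIM, DEPTH, NOISE = 16, 12, 0.02

layers = [rng.normal(0, 1 / np.sqrt(DIM), (DIM, DIM)) for _ in range(DEPTH)]

def forward(x, noisy):
    for W in layers:
        x = np.tanh(W @ x)                       # tunable activation (assumed tanh)
        if noisy:
            x = x * (1 + NOISE * rng.normal(size=DIM))
    return x

x0 = rng.normal(size=DIM)
ideal = forward(x0, noisy=False)
measured = forward(x0, noisy=True)
err = np.linalg.norm(ideal - measured) / np.linalg.norm(ideal)
print(f"relative output error after {DEPTH} layers: {err:.3f}")
```

Sweeping DEPTH in this sketch shows how single-neuron errors compound through the network, which is the quantity the authors measure to argue scalability.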
All-optical binary convolution with a photonic spiking vertical-cavity surface-emitting laser (VCSEL) neuron is proposed and demonstrated experimentally for the first time. Optical inputs, extracted from digital images and temporally encoded using rectangular pulses, are injected into the VCSEL neuron, which delivers the convolution result in the number of fast (<100 ps long) spikes fired. Experimental and numerical results show that binary convolution is achieved successfully with a single spiking VCSEL neuron and that all-optical binary convolution can be used to calculate image gradient magnitudes to detect edge features and separate vertical and horizontal components in source images. We also show that this all-optical spiking binary convolution system is robust to noise and can operate with high-resolution images. Additionally, the proposed system offers important advantages such as ultrafast speed, high energy efficiency and simple hardware implementation, highlighting the potential of spiking photonic VCSEL neurons for high-speed neuromorphic image-processing systems and future photonic spiking convolutional neural networks.
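A purely digital stand-in can show how a spike count implements binary convolution and yields gradient magnitudes for edge detection. The kernels and test image below are illustrative assumptions, not the experimental setup:

```python
import numpy as np

# Toy stand-in for the photonic scheme: inputs and kernel are binary,
# and the "convolution result" is the number of spikes fired, taken
# here as the count of coincident 1s in each image patch.
def binary_convolve(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=int)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = int(np.sum(patch & kernel))  # spike count
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=int)

# Assumed horizontal/vertical difference kernels for edge detection.
k_h = np.array([[1, 0], [1, 0]])
k_v = np.array([[1, 1], [0, 0]])

gx = binary_convolve(image, k_h) - binary_convolve(image, 1 - k_h)
gy = binary_convolve(image, k_v) - binary_convolve(image, 1 - k_v)
grad = np.abs(gx) + np.abs(gy)   # crude gradient magnitude
print(grad)                       # highlights the vertical edge
```

In the experiment each patch-kernel coincidence is delivered optically and the VCSEL's spike count plays the role of the accumulated sum computed in software here.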
Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, thanks to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge, due mainly to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to memristive devices for in-memory computing to resolve the von Neumann bottleneck, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. To address these challenges and enable online learning in memristive neuromorphic RNNs, we present a simulation framework of differential-architecture crossbar arrays based on an accurate and comprehensive Phase-Change Memory (PCM) device model. We train a spiking RNN whose weights are emulated in the presented simulation framework, using the recently proposed e-prop learning rule. Although e-prop locally approximates the ideal synaptic updates, the updates are difficult to implement on the memristive substrate due to substantial PCM non-idealities. We compare several widely adopted weight-update schemes that primarily aim to cope with these device non-idealities and demonstrate that accumulating gradients enables online and efficient training of spiking RNNs on memristive substrates.
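The gradient-accumulation scheme highlighted at the end can be sketched as follows, with assumed values for the PCM update granularity and write noise: a high-precision accumulator keeps credit per synapse until it is worth spending a coarse, noisy device write.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal sketch of an "accumulate gradients" update scheme; the
# granularity and noise level are assumptions, not measured PCM values.
GRANULARITY = 0.05      # smallest reliable conductance step (assumed)
WRITE_NOISE = 0.3       # relative programming noise (assumed)

w_device = rng.uniform(-0.5, 0.5, 8)   # weights stored on PCM devices
acc = np.zeros_like(w_device)          # high-precision gradient accumulator

def apply_gradient(grad):
    """Accumulate, then transfer whole granularity steps to the device."""
    acc[:] += grad
    steps = np.trunc(acc / GRANULARITY)           # whole steps to program
    noise = 1 + WRITE_NOISE * rng.normal(size=w_device.shape)
    w_device[:] += steps * GRANULARITY * noise    # noisy, coarse PCM write
    acc[:] -= steps * GRANULARITY                 # keep the remainder

for _ in range(100):
    apply_gradient(rng.normal(0, 0.01, 8))        # e.g. per-step e-prop updates
print("device weights:", np.round(w_device, 2))
```

Because sub-granularity updates never reach the device, small noisy writes are avoided and only averaged, better-conditioned updates are programmed, which is the intuition behind the scheme's robustness.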
As neural networks gain widespread adoption in resource-constrained embedded devices, there is a growing need for low-power neural systems. Spiking Neural Networks (SNNs) are emerging as an energy-efficient alternative to the traditional Artificial Neural Networks (ANNs), which are known to be computationally intensive. From an application perspective, as federated learning involves multiple energy-constrained devices, there is significant scope to leverage the energy efficiency of SNNs. Despite its importance, little attention has been paid to training SNNs on a large-scale distributed system like federated learning. In this paper, we bring SNNs to a more realistic federated learning scenario. Specifically, we propose a federated learning framework for decentralized and privacy-preserving training of SNNs. To validate the proposed framework, we experimentally evaluate the advantages of SNNs on various aspects of federated learning with the CIFAR10 and CIFAR100 benchmarks. We observe that SNNs outperform ANNs in overall accuracy by over 15% when the data is distributed across a large number of clients in the federation, while providing up to 5.3x energy efficiency. In addition to efficiency, we also analyze the sensitivity of the proposed federated SNN framework to data distribution among the clients, stragglers, and gradient noise, and perform a comprehensive comparison with ANNs.
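A minimal sketch of the federated setup follows, assuming standard federated averaging (FedAvg) as the aggregation rule; the client-side SNN training (e.g. with surrogate gradients) is abstracted into a placeholder local update.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed setup: each client trains a local copy of the (spiking)
# network on private data, and the server averages the resulting
# weights. Only the aggregation logic is shown faithfully here.
N_CLIENTS, DIM, ROUNDS = 5, 10, 3

global_w = rng.normal(size=DIM)

def local_train(w):
    """Placeholder for local SNN training on a client's private data."""
    return w - 0.1 * rng.normal(size=w.shape)    # stand-in gradient step

for _ in range(ROUNDS):
    client_ws = [local_train(global_w.copy()) for _ in range(N_CLIENTS)]
    global_w = np.mean(client_ws, axis=0)        # FedAvg aggregation

print("global weights after FedAvg:", np.round(global_w, 2))
```

Raw data never leaves a client; only weight vectors are exchanged, which is what makes the scheme privacy-preserving in the sense used by the abstract.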