
Backpropagation through nonlinear units for all-optical training of neural networks

Published by Xianxin Guo
Publication date: 2019
Paper language: English





Backpropagation through nonlinear neurons is an outstanding challenge to the field of optical neural networks and the major conceptual barrier to all-optical training schemes. Each neuron is required to exhibit a directionally dependent response to propagating optical signals, with the backwards response conditioned on the forward signal, which is highly non-trivial to implement optically. We propose a practical and surprisingly simple solution that uses saturable absorption to provide the network nonlinearity. We find that the backward propagating gradients required to train the network can be approximated in a pump-probe scheme that requires only passive optical elements. Simulations show that, with readily obtainable optical depths, our approach can achieve equivalent performance to state-of-the-art computational networks on image classification benchmarks, even in deep networks with multiple sequential gradient approximations. This scheme is compatible with leading optical neural network proposals and therefore provides a feasible path towards end-to-end optical training.
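The pump-probe approximation described above is easy to examine numerically. The sketch below is a minimal illustration, not the authors' code: it assumes a simple two-level saturable-absorber intensity model, dI/dz = -a0*I/(1 + I/I_sat), solves the implicit transmission relation ln(I_out/I_in) + (I_out - I_in)/I_sat = -OD for the forward "activation" (OD = a0*L is the optical depth), and compares the exact derivative dI_out/dI_in with the weak-probe transmission T = I_out/I_in that a backward probe would experience. All function names and parameter values are illustrative assumptions.

```python
# Minimal sketch of a saturable-absorber neuron and the pump-probe gradient
# approximation, under a simple two-level intensity model (not the authors' code).
import numpy as np
from scipy.optimize import brentq

def sat_abs_forward(I_in, OD=2.0, I_sat=1.0):
    """Transmitted intensity of the saturable absorber (the optical 'activation')."""
    def residual(I_out):
        # Implicit transmission relation: ln(I_out/I_in) + (I_out - I_in)/I_sat = -OD
        return np.log(I_out / I_in) + (I_out - I_in) / I_sat + OD
    # The root lies between (almost) full absorption and full transmission.
    return brentq(residual, 1e-12 * I_in, I_in)

def pump_probe_gradient(I_in, OD=2.0, I_sat=1.0):
    """Backward gradient approximated by the weak-probe transmission T = I_out / I_in."""
    return sat_abs_forward(I_in, OD, I_sat) / I_in

def exact_gradient(I_in, OD=2.0, I_sat=1.0):
    """Exact dI_out/dI_in obtained by differentiating the implicit transmission relation."""
    I_out = sat_abs_forward(I_in, OD, I_sat)
    return (I_out / I_in) * (1 + I_in / I_sat) / (1 + I_out / I_sat)

if __name__ == "__main__":
    for I in (0.1, 0.5, 1.0, 2.0, 5.0):
        print(f"I_in={I:4.1f}   probe approx={pump_probe_gradient(I):.3f}   "
              f"exact={exact_gradient(I):.3f}")
```

In the unsaturated limit both quantities approach exp(-OD), so the probe transmission tracks the exact gradient closely at low intensities and drifts from it only as the absorber saturates.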




Read also

Recently, integrated optics has gained interest as a hardware platform for implementing machine learning algorithms. Of particular interest are artificial neural networks, since matrix-vector multiplications, which are used heavily in artificial neural networks, can be done efficiently in photonic circuits. The training of an artificial neural network is a crucial step in its application. However, currently on the integrated photonics platform there is no efficient protocol for the training of these networks. In this work, we introduce a method that enables highly efficient, in situ training of a photonic neural network. We use adjoint variable methods to derive the photonic analogue of the backpropagation algorithm, which is the standard method for computing gradients of conventional neural networks. We further show how these gradients may be obtained exactly by performing intensity measurements within the device. As an application, we demonstrate the training of a numerically simulated photonic artificial neural network. Beyond the training of photonic machine learning implementations, our method may also be of broad interest to experimental sensitivity analysis of photonic systems and the optimization of reconfigurable optics platforms.
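A compact way to see the adjoint-variable structure referred to above: for a linear frequency-domain model A(p)·x = b, a single extra solve with the transposed operator gives dL/dp for every tunable parameter at once. The sketch below is a toy numerical illustration of that structure only, with dense random matrices standing in for a photonic circuit; it is not the paper's in situ measurement protocol, and all names, shapes, and the quadratic loss are placeholder assumptions.

```python
# Toy adjoint-method gradient for a linear system A(p) x = b (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 6
A0 = rng.standard_normal((n, n)) + n * np.eye(n)      # well-conditioned base operator
B = [rng.standard_normal((n, n)) for _ in range(3)]   # dA/dp_i for three tunable parameters
b = rng.standard_normal(n)
target = rng.standard_normal(n)

def assemble(p):
    """System matrix as a function of the tunable parameters p."""
    return A0 + sum(pi * Bi for pi, Bi in zip(p, B))

def loss_and_grad(p):
    A = assemble(p)
    x = np.linalg.solve(A, b)                  # "forward" field solve
    loss = 0.5 * np.sum((x - target) ** 2)
    lam = np.linalg.solve(A.T, x - target)     # one adjoint solve, sourced by dL/dx
    grad = np.array([-lam @ (Bi @ x) for Bi in B])  # dL/dp_i = -lam^T (dA/dp_i) x
    return loss, grad

p = np.zeros(3)
loss, grad = loss_and_grad(p)

# Sanity check of one component against a finite difference.
eps = 1e-6
p_eps = p.copy()
p_eps[0] += eps
fd = (loss_and_grad(p_eps)[0] - loss) / eps
print(f"adjoint dL/dp0 = {grad[0]:+.6f}   finite diff = {fd:+.6f}")
```

The adjoint gradient should match the finite-difference check to several significant digits.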
Deep spiking neural networks (SNNs) hold great potential for improving the latency and energy efficiency of deep neural networks through event-based computation. However, training such networks is difficult due to the non-differentiable nature of asynchronous spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are only considered as noise. This enables an error backpropagation mechanism for deep SNNs, which works directly on spike signals and membrane potentials. Thus, compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. Our novel framework outperforms all previously reported results for SNNs on the permutation-invariant MNIST benchmark, as well as the N-MNIST benchmark recorded with event-based vision sensors.
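The core trick described above, differentiating through the membrane potential while treating the spike-time discontinuities as noise, can be sketched in a few lines. The example below is a generic straight-through illustration of that idea, with made-up layer sizes, threshold, and loss; it is not the authors' exact derivation or network.

```python
# Straight-through gradient through a discontinuous spike threshold (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8)) * 0.5
x = rng.random(8)
theta = 0.5                       # firing threshold

# Forward pass: membrane potential, then a non-differentiable spike.
u = W @ x                         # membrane potential (differentiable in W)
s = (u > theta).astype(float)     # spike output (0/1), discontinuous in u

# Toy loss on the spike output and its gradient.
target = np.array([1.0, 0.0, 1.0, 0.0])
dL_ds = s - target

# Backward pass: treat the discontinuity as noise, i.e. take ds/du ~ 1,
# so the error flows through the membrane potential as if it were the output.
dL_du = dL_ds                     # straight-through surrogate
dL_dW = np.outer(dL_du, x)        # chain rule through u = W x

print("spikes:", s)
print("||dL/dW|| =", np.linalg.norm(dL_dW))
```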
Probabilistic machine learning enabled by the Bayesian formulation has recently gained significant attention in the domain of automated reasoning and decision-making. While impressive strides have been made recently to scale up the performance of deep Bayesian neural networks, they have been primarily standalone software efforts without any regard to the underlying hardware implementation. In this paper, we propose an All-Spin Bayesian Neural Network where the underlying spintronic hardware provides a better match to the Bayesian computing models. To the best of our knowledge, this is the first exploration of a Bayesian neural hardware accelerator enabled by emerging post-CMOS technologies. We develop an experimentally calibrated device-circuit-algorithm co-simulation framework and demonstrate $24\times$ reduction in energy consumption against an iso-network CMOS baseline implementation.
Wenrui Zhang, Peng Li (2019)
Spiking neural networks (SNNs) well support spatiotemporal learning and energy-efficient event-driven hardware neuromorphic processors. As an important class of SNNs, recurrent spiking neural networks (RSNNs) possess great computational power. However, the practical application of RSNNs is severely limited by challenges in training. Biologically-inspired unsupervised learning has limited capability in boosting the performance of RSNNs. On the other hand, existing backpropagation (BP) methods suffer from high complexity of unrolling in time, vanishing and exploding gradients, and approximate differentiation of discontinuous spiking activities when applied to RSNNs. To enable supervised training of RSNNs under a well-defined loss function, we present a novel Spike-Train level RSNNs Backpropagation (ST-RSBP) algorithm for training deep RSNNs. The proposed ST-RSBP directly computes the gradient of a rate-coded loss function defined at the output layer of the network w.r.t. tunable parameters. The scalability of ST-RSBP is achieved by the proposed spike-train level computation, during which temporal effects of the SNN are captured in both the forward and backward pass of BP. Our ST-RSBP algorithm can be broadly applied to RSNNs with a single recurrent layer or deep RSNNs with multiple feed-forward and recurrent layers. Based upon challenging speech and image datasets including TI46, N-TIDIGITS, Fashion-MNIST and MNIST, ST-RSBP is able to train RSNNs with an accuracy surpassing that of the current state-of-the-art SNN BP algorithms and conventional non-spiking deep learning models.
Compared to Multilayer Neural Networks with real weights, Binary Multilayer Neural Networks (BMNNs) can be implemented more efficiently on dedicated hardware. BMNNs have been demonstrated to be effective on binary classification tasks with the Expectation BackPropagation (EBP) algorithm on high dimensional text datasets. In this paper, we investigate the capability of BMNNs using the EBP algorithm on multiclass image classification tasks. The performances of binary neural networks with multiple hidden layers and different numbers of hidden units are examined on MNIST. We also explore the effectiveness of image spatial filters and the dropout technique in BMNNs. Experimental results on the MNIST dataset show that EBP can obtain 2.12% test error with binary weights and 1.66% test error with real weights, which is comparable to the results of the standard BackPropagation algorithm on fully connected MNNs.
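The binary-versus-real-weight comparison above ultimately rests on using a sign-binarized copy of real-valued parameters at inference time. The sketch below illustrates only that binarized forward pass, with placeholder weights and a random input; it does not implement the Expectation BackPropagation training procedure itself.

```python
# Binarized inference with sign(W) weights versus the underlying real-valued weights
# (placeholder weights and input; illustrative only).
import numpy as np

rng = np.random.default_rng(2)
W_real = rng.standard_normal((10, 784)) * 0.05   # real-valued weights (trained elsewhere)
x = rng.random(784)                              # one flattened 28x28 input

W_bin = np.sign(W_real)                          # {-1, +1} copy used for deployment
logits_real = W_real @ x
logits_bin = W_bin @ x

print("argmax with real weights  :", int(np.argmax(logits_real)))
print("argmax with binary weights:", int(np.argmax(logits_bin)))
```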