
Theory and learning protocols for the material tempotron model

Posted by Alfredo Braunstein
Publication date: 2013
Research field: Physics
Paper language: English

Neural networks are able to extract information from the timing of spikes. Here we provide new results on the behavior of the simplest neuronal model which is able to decode information embedded in temporal spike patterns, the so-called tempotron. Using statistical physics techniques we compute the capacity for the case of sparse, time-discretized input and material discrete synapses, showing that the device saturates the information-theoretic bounds with a statistics of output spikes that is consistent with the statistics of the inputs. We also derive two simple and highly efficient learning algorithms which are able to learn a number of associations close to the theoretical limit. The simple …
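For orientation, the sketch below implements a standard continuous-weight tempotron decoder with its error-driven learning rule, in the spirit of Gütig and Sompolinsky (2006). It is not the paper's discrete-synapse ("material") variant or its learning protocols; all constants and function names are illustrative assumptions.

```python
import numpy as np

# Minimal continuous-weight tempotron sketch. Constants are arbitrary choices;
# the paper's discrete-synapse ("material") model and protocols are not shown.

TAU_M, TAU_S = 15.0, 3.75                        # kernel time constants (ms)
_T_PEAK = TAU_M * TAU_S / (TAU_M - TAU_S) * np.log(TAU_M / TAU_S)
V0 = 1.0 / (np.exp(-_T_PEAK / TAU_M) - np.exp(-_T_PEAK / TAU_S))

def kernel(t):
    """Causal postsynaptic-potential kernel, normalized to unit peak."""
    t = np.maximum(np.asarray(t, dtype=float), 0.0)
    return V0 * (np.exp(-t / TAU_M) - np.exp(-t / TAU_S))

def potential(w, spikes, t_grid):
    """Membrane potential on a time grid; spikes[i] lists afferent i's spike times."""
    v = np.zeros_like(t_grid)
    for w_i, times in zip(w, spikes):
        for t_s in times:
            v += w_i * kernel(t_grid - t_s)
    return v

def train_step(w, spikes, label, t_grid, theta=1.0, lr=1e-3):
    """Error-driven update: if the output (did V cross theta?) mismatches the
    binary label, move each weight by the kernel summed at the voltage peak."""
    v = potential(w, spikes, t_grid)
    if (v.max() > theta) == bool(label):
        return w                                  # already correct, no change
    t_max = t_grid[np.argmax(v)]
    grad = np.array([kernel(t_max - np.asarray(ts)).sum() for ts in spikes])
    return w + (lr if label else -lr) * grad
```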


Read also

Recent advances in deep learning and neural networks have led to an increased interest in the application of generative models in statistical and condensed matter physics. In particular, restricted Boltzmann machines (RBMs) and variational autoencoders (VAEs), as specific classes of neural networks, have been successfully applied in the context of physical feature extraction and representation learning. Despite these successes, however, there is only limited understanding of their representational properties and limitations. To better understand the representational characteristics of RBMs and VAEs, we study their ability to capture physical features of the Ising model at different temperatures. This approach allows us to quantitatively assess learned representations by comparing sample features with corresponding theoretical predictions. Our results suggest that the considered RBMs and convolutional VAEs are able to capture the temperature dependence of magnetization, energy, and spin-spin correlations. The samples generated by RBMs are more evenly distributed across temperature than those generated by VAEs. We also find that convolutional layers in VAEs are important to model spin correlations, whereas RBMs achieve similar or even better performance without convolutional filters.
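As a concrete illustration of the evaluation step described above, here is a minimal sketch of how magnetization, energy, and nearest-neighbor spin-spin correlations can be measured on generated samples and compared against theoretical values. The RBM/VAE sampling itself is omitted; the function name and lattice conventions are assumptions.

```python
import numpy as np

def ising_observables(samples, L):
    """Mean |magnetization|, energy per spin (J = 1), and nearest-neighbor
    correlation for a batch of +/-1 configurations on an L x L periodic lattice."""
    s = samples.reshape(-1, L, L).astype(float)
    # pairing each site with its right and down neighbors covers every bond once
    nn = s * np.roll(s, 1, axis=1) + s * np.roll(s, 1, axis=2)
    m = np.abs(s.mean(axis=(1, 2))).mean()
    e = (-nn.sum(axis=(1, 2)) / L**2).mean()
    c = (nn.mean(axis=(1, 2)) / 2.0).mean()
    return m, e, c

# Example: independent +/-1 spins (infinite temperature) give m ~ 0, e ~ 0, c ~ 0.
rng = np.random.default_rng(0)
print(ising_observables(rng.choice([-1, 1], size=(100, 16 * 16)), L=16))
```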
Anticipation is a strategy used by neural fields to compensate for transmission and processing delays during the tracking of dynamical information, and can be achieved by slow, localized, inhibitory feedback mechanisms such as short-term synaptic depression, spike-frequency adaptation, or inhibitory feedback from other layers. Based on the translational symmetry of the mobile network states, we derive generic fluctuation-response relations, providing unified predictions that link their tracking behaviors in the presence of external stimuli to the intrinsic dynamics of the neural fields in their absence.
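A small simulation can illustrate the mechanism being summarized: a ring neural field with spike-frequency adaptation tracking a moving stimulus, where the bump's tracking offset shifts from lag toward lead as adaptation strengthens. The model form, parameters, and discretization below are illustrative assumptions, not the paper's derivation of the fluctuation-response relations.

```python
import numpy as np

N = 256
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = x[1] - x[0]

# Local Gaussian excitation on top of uniform inhibition (a common ring choice)
w = 2.0 * np.exp(-0.5 * (x / 0.3) ** 2) - 0.5
w_hat = np.fft.fft(np.fft.ifftshift(w))            # kernel centered at index 0
f = lambda u: np.tanh(np.clip(u, 0.0, None))       # saturating rate function

def tracking_offset(kappa, v=0.5, T=40.0, dt=0.01, tau=1.0, tau_a=10.0):
    """Mean angular offset (bump peak minus stimulus center) while tracking a
    stimulus moving at speed v; positive values mean the bump leads."""
    u = np.zeros(N); a = np.zeros(N); offsets = []
    for step in range(int(T / dt)):
        t = step * dt
        c = (v * t + np.pi) % (2 * np.pi) - np.pi          # stimulus center
        d = (x - c + np.pi) % (2 * np.pi) - np.pi          # circular distance
        stim = np.exp(-0.5 * (d / 0.3) ** 2)
        conv = dx * np.fft.ifft(w_hat * np.fft.fft(f(u))).real
        u += dt / tau * (-u - a + conv + stim)
        a += dt / tau_a * (-a + kappa * f(u))              # slow adaptation
        if t > T / 2:                                      # discard transient
            off = (x[np.argmax(u)] - c + np.pi) % (2 * np.pi) - np.pi
            offsets.append(off)
    return np.mean(offsets)

# The offset typically moves from a lag (negative) toward a lead (positive)
# as the adaptation strength grows:
for kappa in (0.0, 0.2, 0.4):
    print(f"kappa={kappa:.1f}  offset={tracking_offset(kappa):+.3f} rad")
```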
A universal supervised neural network (NN) for computing the critical points relevant to real experiments on phase transitions is constructed. The validity of the built NN is examined by applying it to calculate the criticalities of several three-dimensional (3D) models on the cubic lattice, including the classical $O(3)$ model, the 5-state ferromagnetic Potts model, and a dimerized quantum antiferromagnetic Heisenberg model. Notably, although the considered NN is trained only once, on a one-dimensional (1D) lattice with 120 sites, it successfully determines the critical points of the studied 3D systems. Moreover, real configurations of states are not used in the testing stage. Instead, the configurations employed for prediction are constructed on a 1D lattice of 120 sites from the bulk quantities or the microscopic states of the considered models. As a result, our calculations are highly efficient, and the applicability of the built NN is extremely broad. Considering the fact that the investigated systems differ dramatically from each other, it is remarkable that the combination of these two strategies in the training and testing stages leads to a highly universal supervised neural network for learning the phases and criticalities of 3D models. Based on the outcomes presented in this study, it is plausible that similarly simple yet elegant machine learning techniques can be constructed for fields of many-body systems other than critical phenomena.
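One plausible reading of the testing construction is sketched below: a normalized bulk observable is encoded as the density of occupied sites on a 120-site 1D configuration, and a classifier trained once on synthetic 1D encodings is then queried on encodings built from any model's observable. The encode function, training labels, and classifier are hypothetical stand-ins for the paper's actual scheme.

```python
import numpy as np

RNG = np.random.default_rng(0)
N_SITES = 120                     # matches the 1D lattice size used in training

def encode(obs):
    """Binary 1D 'configuration' whose density of occupied sites equals the
    normalized bulk observable obs in [0, 1] (hypothetical construction)."""
    c = np.zeros(N_SITES)
    c[RNG.choice(N_SITES, int(round(obs * N_SITES)), replace=False)] = 1.0
    return c

# Train a tiny logistic classifier once, on synthetic 1D encodings:
# label 1 for "ordered" (high observable), 0 for "disordered" (low observable).
obs = np.concatenate([RNG.uniform(0.6, 1.0, 500), RNG.uniform(0.0, 0.4, 500)])
X = np.array([encode(o) for o in obs])
y = np.array([1] * 500 + [0] * 500)
w, b = np.zeros(N_SITES), 0.0
for _ in range(200):                              # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

# At test time, an encoding built from any model's bulk observable (e.g. the
# magnetization of a 3D system) is fed to the same classifier; the critical
# point is estimated where the predicted probability crosses 1/2.
print(1.0 / (1.0 + np.exp(-(encode(0.5) @ w + b))))
```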
Ulisse Ferrari, 2015
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest-descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a rectified data-driven algorithm that is fast and, by sampling from the posterior of the parameters, avoids both under- and over-fitting along all directions of parameter space. Through the learning of pairwise Ising models from the recordings of a large population of retina neurons, we show how our algorithm outperforms the steepest-descent method.
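A hedged sketch of the core idea: for an exponential-family model, the log-likelihood gradient is the mismatch between data and model moments, and steepest ascent can be "rectified" by a metric computed from the data alone. Below, each gradient component is rescaled by the empirical variance of its sufficient statistic as a diagonal, data-only proxy for the curvature; the paper derives its own reparametrization, so this rescaling, and the function names, are assumptions.

```python
import numpy as np

def fit_maxent(stats_data, sample_model, n_steps=200, lr=0.1, eps=1e-6):
    """Moment matching for an exponential-family (e.g. pairwise Ising) model.

    stats_data   -- (n_samples, n_params) sufficient statistics of the dataset
    sample_model -- callable: theta -> (n_samples, n_params) statistics of
                    Gibbs samples drawn from the model at parameters theta
    """
    mu_data = stats_data.mean(axis=0)
    # data-only diagonal metric: empirical variance of each sufficient statistic
    scale = stats_data.var(axis=0) + eps
    theta = np.zeros(stats_data.shape[1])
    for _ in range(n_steps):
        mu_model = sample_model(theta).mean(axis=0)
        theta += lr * (mu_data - mu_model) / scale   # rescaled ascent step
    return theta
```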
Neuronal ensemble inference is a significant problem in the study of biological neural networks. Various methods have been proposed for ensemble inference from experimental recordings of neuronal activity. Among them, a Bayesian inference approach with a generative model was recently proposed. However, this method requires a large computational cost for appropriate inference. In this work, we give an improved Bayesian inference algorithm by modifying the update rule of the Markov chain Monte Carlo method and introducing the idea of simulated annealing for hyperparameter control. We compare the performance of ensemble inference between our algorithm and the original one, and discuss the advantages of our method.
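The algorithmic ingredients named above, a modified MCMC update rule plus simulated annealing on a temperature-like control parameter, can be sketched generically. The code below anneals a single-site Metropolis sampler over binary membership variables and assumes a user-supplied log posterior; the paper's generative model and its specific modified update are not reproduced here.

```python
import numpy as np

def annealed_mcmc(log_post, z0, n_sweeps=100, beta0=0.2, beta1=1.0, seed=None):
    """Single-site Metropolis over a binary assignment vector z with a linear
    inverse-temperature schedule; log_post(z) returns the log posterior up to
    an additive constant. Recomputing it per flip is wasteful but keeps the
    sketch short."""
    rng = np.random.default_rng(seed)
    z = z0.copy()
    lp = log_post(z)
    for sweep in range(n_sweeps):
        beta = beta0 + (beta1 - beta0) * sweep / max(n_sweeps - 1, 1)
        for i in rng.permutation(len(z)):
            z_prop = z.copy()
            z_prop[i] = 1 - z_prop[i]              # flip membership of unit i
            lp_prop = log_post(z_prop)
            if np.log(rng.random()) < beta * (lp_prop - lp):
                z, lp = z_prop, lp_prop
    return z
```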