
Learning may need only a few bits of synaptic precision

Published by Carlo Baldassi
Publication date: 2016
Research field: Biological physics
Paper language: English





Learning in neural networks poses peculiar challenges when using discretized rather than continuous synaptic states. The choice of discrete synapses is motivated by biological reasoning and experiments, and possibly by hardware implementation considerations as well. In this paper we extend a previous large deviations analysis which unveiled the existence of peculiar dense regions in the space of synaptic states which account for the possibility of learning efficiently in networks with binary synapses. We extend the analysis to synapses with multiple states and generally more plausible biological features. The results clearly indicate that the overall qualitative picture is unchanged with respect to the binary case, and very robust to variations of the details of the model. We also provide quantitative results which suggest that the advantages of increasing the synaptic precision (i.e. the number of internal synaptic states) rapidly vanish after the first few bits, and therefore that, for practical applications, only a few bits may be needed for near-optimal performance, consistent with recent biological findings. Finally, we demonstrate how the theoretical analysis can be exploited to design efficient algorithmic search strategies.
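As a rough numerical illustration of the precision claim (a toy experiment, not the paper's large deviations analysis; the network size, pattern load, and quantization scheme below are arbitrary choices), one can train a simple perceptron on random patterns with continuous weights and then round the weights to a given number of levels:

```python
import numpy as np

# Toy illustration, not the paper's analysis: train a perceptron on random
# +/-1 patterns with continuous weights, then quantize the weights to
# 2**bits uniform levels and check how the training accuracy degrades.
rng = np.random.default_rng(0)
N, P = 501, 300                              # synapses and patterns (load P/N ~ 0.6)
X = rng.choice([-1.0, 1.0], size=(P, N))
y = rng.choice([-1.0, 1.0], size=P)

w = np.zeros(N)
for _ in range(200):                         # standard perceptron sweeps
    for xi, yi in zip(X, y):
        if yi * (xi @ w) <= 0:
            w += yi * xi

def quantize(w, bits):
    """Round w onto 2**bits uniformly spaced levels in [-max|w|, +max|w|]."""
    levels = 2 ** bits
    s = np.max(np.abs(w)) + 1e-12
    q = np.round((w + s) / (2 * s) * (levels - 1))
    return q / (levels - 1) * 2 * s - s

for bits in (1, 2, 3, 4, 8):
    acc = np.mean(np.sign(X @ quantize(w, bits)) == y)
    print(f"{bits} bit(s): training accuracy = {acc:.3f}")
```

In a toy run of this kind the gap between a handful of levels and full precision is typically small, which is the flavor of the claim above; the paper establishes it through the large deviations analysis rather than post-hoc rounding.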




Read also

While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, possibly by leveraging complex molecular machinery to solve many tasks simultaneously. In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task-relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency.
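A minimal sketch of the mechanism described above (per-synapse importance accumulated along the learning trajectory, then used as a quadratic anchor when the task changes); the two linear-regression tasks, variable names, and hyperparameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Sketch of "intelligent synapses": each weight accumulates a running
# estimate of how much it contributed to reducing the loss on the current
# task; after the task, that estimate becomes a per-synapse importance
# that penalizes later changes via a quadratic term.
rng = np.random.default_rng(1)
D, lam, lr, eps = 20, 1.0, 0.05, 1e-3
w = np.zeros(D)
omega = np.zeros(D)                      # consolidated per-synapse importance
w_star = np.zeros(D)                     # reference weights from previous tasks

def make_task(seed):
    r = np.random.default_rng(seed)
    X = r.normal(size=(200, D))
    return X, X @ r.normal(size=D)       # a random linear-regression task

for task_id, seed in enumerate([10, 11]):
    X, y = make_task(seed)
    w_start, path_integral = w.copy(), np.zeros(D)
    for _ in range(300):
        task_grad = X.T @ (X @ w - y) / len(y)
        grad = task_grad + lam * omega * (w - w_star)    # task loss + anchor
        dw = -lr * grad
        path_integral += -task_grad * dw  # per-weight contribution to loss drop
        w += dw
    omega += path_integral / ((w - w_start) ** 2 + eps)  # consolidate importance
    w_star = w.copy()
    print(f"task {task_id}: mean importance = {omega.mean():.3f}")
```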
We investigated the influence of the efficacy of synaptic interactions on firing synchronization in excitatory neuronal networks. We found a spike death phenomenon, in which the state of a neuron transits from a limit cycle to a fixed point or a transient state. The phenomenon occurs under the perturbation of excitatory synaptic interactions with high efficacy. We showed that the decrease of synaptic current results in spike death by depressing the feedback of the sodium ionic current. In networks with the spike death property, the degree of synchronization is lower and insensitive to the heterogeneity of the neurons. The mechanism of this influence is that the transition of the neuron state disrupts the adjustment of the rhythm of neuronal oscillation and prevents a further increase of firing synchronization.
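The limit-cycle-to-fixed-point transition can be reproduced in a single standard Hodgkin-Huxley neuron driven by a constant excitatory current (a deliberately simplified stand-in for the synaptic drive in the networks studied above; the parameters below are the textbook HH values, not the authors'):

```python
import numpy as np

# Minimal sketch: a strong constant "synaptic" current keeps the neuron on a
# firing limit cycle; lowering it leads to spike death, i.e. the trajectory
# settles onto a fixed point as the regenerative sodium current is suppressed.
def rates(V):
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    return am, bm, ah, bh, an, bn

def simulate(I_syn, T=300.0, dt=0.01):
    gNa, gK, gL = 120.0, 36.0, 0.3            # mS/cm^2
    ENa, EK, EL, C = 50.0, -77.0, -54.4, 1.0
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        am, bm, ah, bh, an, bn = rates(V)
        INa = gNa * m**3 * h * (V - ENa)      # sodium current feedback
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I_syn - INa - IK - IL) / C
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        if V > 0 and not above:
            spikes += 1
        above = V > 0
    return spikes

for I in (10.0, 3.0):                         # high vs. low drive, uA/cm^2
    print(f"I_syn = {I:4.1f}: {simulate(I)} spikes in 300 ms")
```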
This report is concerned with the relevance of the microscopic rules, which implement individual neuronal activation, in determining the collective dynamics under variations of the network topology. To fix ideas we study the dynamics of two cellular automaton models, commonly used, rather indistinctly, as the building blocks of large scale neuronal networks. One model, due to Greenberg & Hastings (GH), can be described by evolution equations mimicking an integrate-and-fire process, while the other model, due to Kinouchi & Copelli (KC), represents an abstract branching process, where a single active neuron activates a given number of postsynaptic neurons according to a prescribed activity branching ratio. Despite the apparent similarity between the local neuronal dynamics of the two models, it is shown that they exhibit very different collective dynamics as a function of the network topology. The GH model shows qualitatively different dynamical regimes as the network topology is varied, including transients to a ground (inactive) state, and continuous and discontinuous dynamical phase transitions. In contrast, the KC model only exhibits a continuous phase transition, independently of the network topology. These results highlight the importance of paying attention to the microscopic rules chosen to model the inter-neuronal interactions in large scale numerical simulations, in particular when the network topology is far from a mean field description. One such case is the extensive work being done in the context of the Human Connectome, where a wide variety of types of models are being used to understand the brain collective dynamics.
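The two local rules can be written in a few lines; the sketch below (an Erdős–Rényi random graph and arbitrary parameter choices, not the topologies studied in the paper) is only meant to make the difference between the GH and KC updates concrete:

```python
import numpy as np

# Illustrative implementations of the two update rules on a random graph.
rng = np.random.default_rng(2)
N, k_mean = 2000, 10
A = (rng.random((N, N)) < k_mean / N).astype(np.int8)   # random directed graph
np.fill_diagonal(A, 0)

def cycle_states(state):
    """active (1) -> refractory (2) -> quiescent (0); quiescent unchanged."""
    return np.where(state == 1, 2, np.where(state == 2, 0, state))

def step_GH(state, threshold=1):
    """Greenberg-Hastings rule: a quiescent node fires if the summed input
    from active neighbours reaches a threshold (integrate-and-fire-like)."""
    inputs = A @ (state == 1).astype(np.int32)
    new = cycle_states(state)
    new[(state == 0) & (inputs >= threshold)] = 1
    return new

def step_KC(state, p=0.1):
    """Kinouchi-Copelli rule: each active neuron activates each quiescent
    neighbour independently with probability p (branching ratio ~ p * <k>)."""
    n_active_in = A @ (state == 1).astype(np.int32)
    fire = (state == 0) & (rng.random(N) > (1 - p) ** n_active_in)
    new = cycle_states(state)
    new[fire] = 1
    return new

state = np.zeros(N, dtype=np.int8)
state[rng.choice(N, 20, replace=False)] = 1      # seed some activity
for _ in range(100):
    state = step_GH(state)                       # swap for step_KC to compare
print("GH active fraction after 100 steps:", np.mean(state == 1))
```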
We investigate the dynamical role of inhibitory and highly connected nodes (hubs) in the synchronization and input processing of leaky integrate-and-fire neural networks with short-term synaptic plasticity. We take advantage of a heterogeneous mean-field approximation to encode the role of network structure, and we tune the fraction of inhibitory neurons $f_I$ and their connectivity level to investigate the cooperation between hub features and inhibition. We show that, depending on $f_I$, highly connected inhibitory nodes strongly drive the synchronization properties of the overall network through dynamical transitions from synchronous to asynchronous regimes. Furthermore, a metastable regime with long memory of external inputs emerges for a specific fraction of hub inhibitory neurons, underlining the role of inhibition and connectivity for input processing in neural networks as well.
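As a building-block sketch of the single-cell and single-synapse dynamics named above (one leaky integrate-and-fire neuron driven through a synapse with Tsodyks-Markram-style short-term plasticity; the parameters are illustrative and this is not the heterogeneous mean-field model of the paper):

```python
import numpy as np

# One LIF neuron driven by a 40 Hz presynaptic train through a depressing/
# facilitating synapse: x tracks available resources, u the release fraction.
dt, T = 0.1, 500.0                       # ms
tau_m, v_th, v_reset = 20.0, 1.0, 0.0    # LIF membrane
tau_d, tau_f, U = 200.0, 600.0, 0.2      # short-term plasticity time constants
x, u = 1.0, U
v, w_syn = 0.0, 8.0
pre_spikes = np.arange(10.0, T, 25.0)    # presynaptic spike times
spike_count, i_spk = 0, 0

for step in range(int(T / dt)):
    t = step * dt
    I = 0.0
    if i_spk < len(pre_spikes) and abs(t - pre_spikes[i_spk]) < dt / 2:
        u += U * (1 - u)                 # facilitation jump at a presynaptic spike
        I = w_syn * u * x                # transmitted efficacy
        x -= u * x                       # resource depletion
        i_spk += 1
    x += dt * (1 - x) / tau_d            # recovery of resources
    u += dt * (U - u) / tau_f            # decay of release fraction
    v += dt * (-v / tau_m) + I           # leaky integration plus synaptic kick
    if v >= v_th:
        v = v_reset
        spike_count += 1

print("postsynaptic spikes:", spike_count)
```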
We show that discrete synaptic weights can be efficiently used for learning in large scale neural systems, and lead to unanticipated computational performance. We focus on the representative case of learning random patterns with binary synapses in single layer networks. The standard statistical analysis shows that this problem is exponentially dominated by isolated solutions that are extremely hard to find algorithmically. Here, we introduce a novel method that allows us to find analytical evidence for the existence of subdominant and extremely dense regions of solutions. Numerical experiments confirm these findings. We also show that the dense regions are surprisingly accessible by simple learning protocols, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions. These outcomes extend to synapses with multiple states and to deeper neural architectures. The large deviation measure also suggests how to design novel algorithmic schemes for optimization based on local entropy maximization.
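A brute-force, small-N illustration of what "dense regions of solutions" means (this is exhaustive enumeration at N=16 with an arbitrary pattern load, not the large deviation computation of the paper): find every binary weight vector that classifies a set of random patterns, then check how many single-flip neighbours of each solution are themselves solutions.

```python
import numpy as np
from itertools import product

# Exhaustive small-scale check: enumerate all binary weight vectors of a tiny
# perceptron, keep those classifying all random patterns correctly, and count
# solution neighbours at Hamming distance 1 as a crude proxy for local density.
rng = np.random.default_rng(4)
N, P = 16, 6
X = rng.choice([-1, 1], size=(P, N))
y = rng.choice([-1, 1], size=P)

W = np.array(list(product([-1, 1], repeat=N)))       # all 2**16 configurations
is_sol = np.all(y * (W @ X.T) > 0, axis=1)
solutions = W[is_sol]
print(f"{len(solutions)} solutions out of {len(W)} configurations")

sol_set = {tuple(s) for s in solutions}
neighbour_counts = []
for s in solutions:
    flips = 0
    for i in range(N):
        t = s.copy()
        t[i] = -t[i]
        flips += tuple(t) in sol_set
    neighbour_counts.append(flips)
if neighbour_counts:
    print("mean solution neighbours per solution:", np.mean(neighbour_counts))
```

Real instances are of course far too large to enumerate; the point of the paper is that the subdominant dense clusters this toy check hints at can be characterized analytically and targeted algorithmically, for example through local entropy maximization.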