
Generalization learning in a perceptron with binary synapses

Posted by: Carlo Baldassi
Publication date: 2012
Research field: Physics
Paper language: English
Author: Carlo Baldassi

We consider the generalization problem for a perceptron with binary synapses, implementing the Stochastic Belief-Propagation-Inspired (SBPI) learning algorithm which we proposed earlier, and perform a mean-field calculation to obtain a differential equation which describes the behaviour of the device in the limit of a large number of synapses N. We show that the solving time of SBPI is of order N*sqrt(log(N)), while the similar, well-known clipped perceptron (CP) algorithm does not converge to a solution at all within the time frame we considered. The analysis gives insight into the dynamics of the learning process and shows that, in this context, the SBPI algorithm is equivalent to a new, simpler algorithm, which differs from the CP algorithm only by the addition of a stochastic, unsupervised meta-plastic reinforcement process, whose rate of application must be less than sqrt(2/(pi*N)) for learning to be achieved effectively. The analytical results are confirmed by simulations.
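As a concrete illustration of the rule just described, here is a minimal Python sketch of the simplified algorithm: the clipped perceptron (CP) update plus a stochastic, unsupervised meta-plastic reinforcement step applied at a rate below sqrt(2/(pi*N)). The hidden-state bound K, the initialization, the exact placement of the reinforcement step, and all numerical values are illustrative assumptions, not specifications taken from the paper.

```python
import numpy as np

# Hedged sketch: CP learning plus stochastic unsupervised reinforcement,
# in a teacher-student setup for the generalization problem.
rng = np.random.default_rng(0)
N = 1001                                  # number of synapses (odd avoids ties)
K = 10                                    # bound on hidden synaptic states (assumed)
p_s = 0.5 * np.sqrt(2.0 / (np.pi * N))    # rate below the sqrt(2/(pi*N)) threshold

w_teacher = rng.choice((-1, 1), size=N)   # teacher defining the rule to generalize
h = rng.choice((-1, 1), size=N)           # hidden states; effective weight is sign(h)

for t in range(50 * N):
    xi = rng.choice((-1, 1), size=N)                # random input pattern
    sigma = 1 if xi @ w_teacher > 0 else -1         # teacher label
    sigma_s = 1 if xi @ np.sign(h) > 0 else -1      # student output
    if sigma_s != sigma:
        # CP step: shift hidden states toward the correct classification
        h = np.clip(h + sigma * xi, -K, K)
    elif rng.random() < p_s:
        # unsupervised meta-plastic reinforcement: push each hidden state
        # one step further in the direction of its current sign
        h = np.clip(h + np.sign(h), -K, K)

overlap = np.mean(np.sign(h) == w_teacher)
print(f"fraction of synapses aligned with the teacher: {overlap:.3f}")
```

Setting p_s = 0 recovers the plain CP rule, which, according to the analysis above, fails to converge on this time scale.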




Read also

We show that discrete synaptic weights can be efficiently used for learning in large scale neural systems, and lead to unanticipated computational performance. We focus on the representative case of learning random patterns with binary synapses in single-layer networks. The standard statistical analysis shows that this problem is exponentially dominated by isolated solutions that are extremely hard to find algorithmically. Here, we introduce a novel method that allows us to find analytical evidence for the existence of subdominant and extremely dense regions of solutions. Numerical experiments confirm these findings. We also show that the dense regions are surprisingly accessible by simple learning protocols, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions. These outcomes extend to synapses with multiple states and to deeper neural architectures. The large deviation measure also suggests how to design novel algorithmic schemes for optimization based on local entropy maximization.
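To make the setting above concrete, here is a small brute-force sketch (not the paper's method): it enumerates all binary weight vectors of a tiny single-layer network, finds the exact solutions classifying a set of random patterns, and counts how many other solutions sit within a small Hamming radius of each one, a crude stand-in for the local-entropy / solution-density measure discussed in the abstract. All sizes are toy values chosen so that full enumeration is feasible.

```python
import numpy as np
from itertools import product

# Toy storage problem: a binary perceptron must classify P random +-1
# patterns; we enumerate all 2^N weight vectors (N kept tiny on purpose)
# and measure how densely the solutions cluster. Illustrative only.
rng = np.random.default_rng(3)
N, P, radius = 15, 8, 2

xi = rng.choice((-1, 1), size=(P, N))        # random input patterns
sigma = rng.choice((-1, 1), size=P)          # random target labels

W = np.array(list(product((-1, 1), repeat=N)))         # all 2^N candidates
is_sol = ((xi @ W.T) * sigma[:, None] > 0).all(axis=0)
sols = W[is_sol]
print(f"{len(sols)} solutions out of {len(W)} binary weight vectors")

if len(sols) > 0:
    # pairwise Hamming distances; for +-1 vectors, d_H(u, v) = (N - u.v) / 2
    dists = (N - sols @ sols.T) // 2
    neighbours = (dists <= radius).sum(axis=1) - 1     # exclude self
    print(f"neighbours within Hamming radius {radius}: "
          f"median {int(np.median(neighbours))}, max {neighbours.max()}")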
Active learning is a branch of machine learning that deals with problems where unlabeled data is abundant yet obtaining labels is expensive. The learning algorithm has the possibility of querying a limited number of samples to obtain the corresponding labels, subsequently used for supervised learning. In this work, we consider the task of choosing the subset of samples to be labeled from a fixed finite pool of samples. We assume the pool of samples to be a random matrix and the ground truth labels to be generated by a single-layer teacher random neural network. We employ replica methods to analyze the large deviations for the accuracy achieved after supervised learning on a subset of the original pool. These large deviations then provide optimal achievable performance boundaries for any active learning algorithm. We show that the optimal learning performance can be efficiently approached by simple message-passing active learning algorithms. We also provide a comparison with the performance of some other popular active learning strategies.
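A minimal sketch of the setup described above, with a smallest-margin uncertainty rule standing in for the paper's message-passing selection; the pool sizes, the perceptron learner, and the selection heuristic are all assumptions for illustration.

```python
import numpy as np

# Pool-based active learning toy: a fixed random pool, labels from a
# single-layer (perceptron) teacher, and a budget of n queries.
rng = np.random.default_rng(1)
N, P, n = 100, 1000, 200          # input dim, pool size, label budget (assumed)

X = rng.standard_normal((P, N))   # the fixed pool of unlabeled samples
w_teacher = rng.standard_normal(N)
y = np.sign(X @ w_teacher)        # ground-truth labels (revealed on query only)

w = np.zeros(N)
queried = np.zeros(P, dtype=bool)
for _ in range(n):
    margins = np.abs(X @ w)
    margins[queried] = np.inf     # never query the same sample twice
    i = int(np.argmin(margins))   # most uncertain sample under current w
    queried[i] = True
    if np.sign(X[i] @ w) != y[i]: # perceptron mistake-driven update
        w += y[i] * X[i]

acc = np.mean(np.sign(X @ w) == y)
print(f"pool accuracy after {n} queries: {acc:.3f}")
```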
N. Leroux, 2020
Exploiting the physics of nanoelectronic devices is a major lead for implementing compact, fast, and energy-efficient artificial intelligence. In this work, we propose an original road in this direction, where assemblies of spintronic resonators used as artificial synapses can classify analogue radio-frequency signals directly, without digitalization. The resonators convert the radio-frequency input signals into direct voltages through the spin-diode effect. In the process, they multiply the input signals by a synaptic weight, which depends on their resonance frequency. We demonstrate through physical simulations with parameters extracted from experimental devices that frequency-multiplexed assemblies of resonators implement the cornerstone operation of artificial neural networks, the Multiply-And-Accumulate (MAC), directly on microwave inputs. The results show that even with a non-ideal, realistic model, the outputs obtained with our architecture remain comparable to those of a traditional MAC operation. Using a conventional machine learning framework augmented with equations describing the physics of spintronic resonators, we train a single-layer neural network to classify radio-frequency signals encoding 8x8-pixel handwritten digit pictures. The spintronic neural network recognizes the digits with an accuracy of 99.96%, equivalent to purely software neural networks. This MAC implementation offers a promising solution for fast, low-power radio-frequency classification applications, and a new building block for spintronic deep neural networks.
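To illustrate the frequency-multiplexed MAC idea in the abstract above, here is a toy numerical sketch: each input is carried by an RF tone at its own frequency, each resonator rectifies mainly the tone near its resonance with a gain playing the role of the weight, and the rectified DC voltages add up. The Lorentzian line shape and every parameter value are assumptions for illustration, not the device model of the paper.

```python
import numpy as np

# Toy frequency-multiplexed MAC: input x_k is encoded in the power of an RF
# tone at frequency f_k; resonator k rectifies it into a DC voltage roughly
# proportional to w_k * x_k, and the DC outputs sum. Illustrative only.
def lorentzian(f, f0, df):
    return 1.0 / (1.0 + ((f - f0) / df) ** 2)

f = np.array([1.0, 1.5, 2.0, 2.5])    # tone frequencies (GHz, assumed)
x = np.array([0.2, -0.4, 0.7, 0.1])   # input signals (tone powers, assumed)
w = np.array([0.5, -1.0, 0.3, 0.8])   # synaptic weights set by the resonances

df = 0.05                             # resonator linewidth (assumed)
# resonator k responds mostly to tone k; cross-talk comes from the tails
response = np.array([[lorentzian(fj, fk, df) for fj in f] for fk in f])
v_dc = (w[:, None] * response) @ x    # rectified DC voltage per resonator
mac_output = v_dc.sum()

print(f"resonator MAC: {mac_output:.4f}  ideal dot product: {w @ x:.4f}")
```

With a narrow linewidth the cross-talk terms are tiny and the summed voltage tracks the ideal dot product; widening df degrades the approximation, mimicking a non-ideal device.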
We report a transition from asynchronous to oscillatory behaviour in balanced inhibitory networks of class I and II neurons with instantaneous synapses. Collective oscillations emerge for sufficiently connected networks. Their origin is understood in terms of a recently developed mean-field model, whose stable solution is a focus. Microscopic irregular firings, due to the balance, trigger sustained oscillations by exciting the relaxation dynamics towards the macroscopic focus. The same mechanism induces quasi-periodic collective oscillations in balanced excitatory-inhibitory networks.
Two neurons coupled by unreliable synapses are modeled by leaky integrate-and-fire neurons and stochastic on-off synapses. The dynamics is mapped to an iterated function system. Numerical calculations yield a multifractal distribution of interspike intervals. The Hausdorff, entropy and correlation dimensions are calculated as a function of synaptic strength and transmission probability.
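A hedged sketch of a simulation in the spirit of the model above: two leaky integrate-and-fire neurons coupled by synapses that transmit each spike only with some probability, collecting the interspike intervals (ISIs) of one neuron. Euler integration, excitatory coupling, and all parameter values are assumptions for illustration; the paper instead analyzes the exact map as an iterated function system.

```python
import numpy as np

# Two LIF neurons with stochastic on-off synapses; we record the ISIs of
# neuron 0. Parameters and integration scheme are illustrative assumptions.
rng = np.random.default_rng(2)
tau, v_th, v_reset = 20.0, 1.0, 0.0   # membrane time constant (ms), threshold, reset
I_ext = 1.2                           # suprathreshold drive (assumed)
J = 0.4                               # synaptic strength
p_trans = 0.7                         # transmission probability of each synapse
dt, T = 0.01, 5000.0                  # time step and total time (ms)

v = np.array([0.1, 0.6])              # membrane potentials of the two neurons
last_spike, isis = None, []

for step in range(int(T / dt)):
    v += dt / tau * (-v + I_ext)      # leaky integration (Euler)
    for i in np.where(v >= v_th)[0]:
        v[i] = v_reset
        if rng.random() < p_trans:    # synapse is "on" for this spike
            v[1 - i] += J             # kick the other neuron (sign assumed)
        if i == 0:
            t = step * dt
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike = t

isis = np.array(isis)
print(f"{isis.size} ISIs, mean {isis.mean():.2f} ms, CV {isis.std()/isis.mean():.2f}")
```

Estimating the Hausdorff, entropy and correlation dimensions of the resulting ISI set is a further analysis step not shown here.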