
Minimal model of associative learning for cross-situational lexicon acquisition

Posted by Jose Fontanari
Publication date: 2012
Paper language: English





One explanation for the acquisition of word-object mappings is associative learning in a cross-situational scenario. Here we present analytical results on the performance of a simple associative learning algorithm for acquiring a one-to-one mapping between $N$ objects and $N$ words based solely on the co-occurrence between objects and words. In particular, a learning trial in our learning scenario consists of the presentation of $C + 1 < N$ objects together with a target word, which refers to one of the objects in the context. We find that the learning times are distributed exponentially and that the learning rates are given by $\ln\left[\frac{N(N-1)}{C + (N-1)^{2}}\right]$ in the case that the $N$ target words are sampled randomly, and by $\frac{1}{N}\ln\left[\frac{N-1}{C}\right]$ in the case that they follow a deterministic presentation sequence. This learning performance is much superior to that exhibited by humans and by more realistic learning algorithms in cross-situational experiments. We show that introducing discrimination limitations via Weber's law, together with forgetting, reduces the performance of the associative algorithm to the human level.
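For illustration, here is a minimal Python sketch of the learning scenario described above: on each trial a target word is presented with a context of $C + 1$ objects, the learner only increments word-object co-occurrence counts, and the lexicon counts as learned once every word's true referent holds a strictly maximal count. The stopping criterion and function names are assumptions of this sketch, not the paper's exact algorithm.

```python
import random

def cross_situational_learning(N=10, C=2, max_trials=100_000, seed=0):
    """Minimal co-occurrence learner for an N-word / N-object lexicon.

    Each trial presents a random target word together with C + 1
    objects: its true referent plus C confounders drawn at random.
    The lexicon is taken as learned when every word's true referent
    holds a strictly maximal count (an illustrative criterion).
    Returns the number of trials needed, or None if never learned.
    """
    rng = random.Random(seed)
    counts = [[0] * N for _ in range(N)]  # counts[word][object]
    for trial in range(1, max_trials + 1):
        word = rng.randrange(N)           # word i names object i
        context = [word] + rng.sample(
            [obj for obj in range(N) if obj != word], C)
        for obj in context:
            counts[word][obj] += 1
        if all(row[w] == max(row) and row.count(max(row)) == 1
               for w, row in enumerate(counts)):
            return trial
    return None

print(cross_situational_learning())  # learning time for N=10, C=2
```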


Read also

Cross-situational word learning is based on the notion that a learner can determine the referent of a word by finding something in common across many observed uses of that word. Here we propose an adaptive learning algorithm that contains a parameter that controls the strength of the reinforcement applied to associations between concurrent words and referents, and a parameter that regulates inference, which incorporates built-in biases, such as mutual exclusivity, as well as information from past learning events. By adjusting these parameters so that the model predictions agree with data from representative experiments on cross-situational word learning, we were able to explain the learning strategies adopted by the participants of those experiments in terms of a trade-off between reinforcement and inference. These strategies can vary wildly depending on the conditions of the experiments. For instance, in fast-mapping experiments (i.e., where the correct referent could, in principle, be inferred in a single observation) inference is prevalent, whereas in segregated contextual diversity experiments (i.e., where the referents are separated into groups and are exhibited only with members of their groups) reinforcement is predominant. Other experiments are explained by more balanced doses of reinforcement and inference.
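A minimal sketch of such a reinforcement/inference trade-off is given below; the parameter names (`rho` for reinforcement strength, `chi` for inference strength) and the update rule itself are assumptions made for this sketch, not the fitted model of the paper.

```python
import numpy as np

def trial_update(A, word, context, rho=0.5, chi=0.2):
    """One learning trial mixing reinforcement and inference.

    A       -- N x N word-by-object association matrix (rows sum to 1)
    word    -- index of the target word heard on this trial
    context -- indices of the objects present on this trial
    rho     -- reinforcement strength (hypothetical parameter)
    chi     -- inference strength: how strongly mutual exclusivity
               discounts objects already claimed by other words
    """
    N = A.shape[0]
    boost = np.zeros(N)
    for obj in context:
        # mutual exclusivity: an object that some *other* word
        # already prefers receives a weaker boost
        claimed = max(A[w, obj] for w in range(N) if w != word)
        boost[obj] = rho * (1.0 - chi * claimed)
    A[word] = A[word] + boost
    A[word] /= A[word].sum()  # keep each row a probability distribution
    return A

A = np.full((4, 4), 0.25)                       # uniform initial associations
A = trial_update(A, word=0, context=[0, 2])
print(A[0])  # referents 0 and 2 both gained probability mass
```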
Despite rapid advances in machine learning tools, the majority of neural decoding approaches still use traditional methods. Modern machine learning tools, which are versatile and easy to use, have the potential to significantly improve decoding performance. This tutorial describes how to effectively apply these algorithms for typical decoding problems. We provide descriptions, best practices, and code for applying common machine learning methods, including neural networks and gradient boosting. We also provide detailed comparisons of the performance of various methods at the task of decoding spiking activity in motor cortex, somatosensory cortex, and hippocampus. Modern methods, particularly neural networks and ensembles, significantly outperform traditional approaches, such as Wiener and Kalman filters. Improving the performance of neural decoding algorithms allows neuroscientists to better understand the information contained in a neural population and can help advance engineering applications such as brain-machine interfaces.
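To give a concrete flavour of such a comparison, here is a self-contained scikit-learn sketch that decodes a synthetic kinematic variable from simulated spike counts, pitting a plain linear regression (a Wiener-filter-like baseline) against gradient boosting. The data generator is invented for illustration; it is not the paper's benchmark data or code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for binned spike counts: T time bins x M neurons,
# with a hidden 1-D kinematic variable driving the rates nonlinearly.
rng = np.random.default_rng(0)
T, M = 2000, 30
kinematics = np.cumsum(rng.normal(size=T)) * 0.1
rates = np.exp(0.5 * np.sin(np.outer(kinematics, rng.normal(size=M))))
spikes = rng.poisson(rates)

X_tr, X_te, y_tr, y_te = train_test_split(
    spikes, kinematics, test_size=0.2, shuffle=False)

linear = LinearRegression().fit(X_tr, y_tr)           # linear baseline
gbr = GradientBoostingRegressor().fit(X_tr, y_tr)     # modern nonlinear decoder

print("linear R^2:  ", r2_score(y_te, linear.predict(X_te)))
print("boosting R^2:", r2_score(y_te, gbr.predict(X_te)))
```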
This paper introduces Associative Compression Networks (ACNs), a new framework for variational autoencoding with neural networks. The system differs from existing variational autoencoders (VAEs) in that the prior distribution used to model each code is conditioned on a similar code from the dataset. In compression terms this equates to sequentially transmitting the dataset using an ordering determined by proximity in latent space. Since the prior need only account for local, rather than global variations in the latent space, the coding cost is greatly reduced, leading to rich, informative codes. Crucially, the codes remain informative when powerful, autoregressive decoders are used, which we argue is fundamentally difficult with normal VAEs. Experimental results on MNIST, CIFAR-10, ImageNet and CelebA show that ACNs discover high-level latent features such as object class, writing style, pose and facial expression, which can be used to cluster and classify the data, as well as to generate diverse and convincing samples. We conclude that ACNs are a promising new direction for representation learning: one that steps away from IID modelling, and towards learning a structured description of the dataset as a whole.
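The central trick, a prior centred on a similar code rather than a single global prior, can be illustrated with a small numpy sketch; in ACNs the conditional prior is itself a learned network, whereas this toy version assumes a fixed unit Gaussian centred on the nearest-neighbour code.

```python
import numpy as np

def neighbour_coding_cost(codes, sigma=1.0):
    """Average cost of transmitting each latent code under a Gaussian
    prior centred on its nearest neighbour in latent space (a toy
    stand-in for ACN's learned conditional prior)."""
    dists = np.linalg.norm(codes[:, None] - codes[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)   # a code cannot be its own prior
    nearest = codes[np.argmin(dists, axis=1)]
    return 0.5 * np.mean(np.sum((codes - nearest) ** 2, axis=1)) / sigma**2

def global_coding_cost(codes, sigma=1.0):
    """Same cost under a single global prior centred on the mean code."""
    centre = codes.mean(axis=0)
    return 0.5 * np.mean(np.sum((codes - centre) ** 2, axis=1)) / sigma**2

codes = np.random.default_rng(0).normal(size=(500, 8))
# A local, neighbour-conditioned prior is cheaper than a global one:
print(neighbour_coding_cost(codes) < global_coding_cost(codes))  # True
```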
Neurodegenerative diseases and traumatic brain injuries (TBI) are among the main causes of cognitive dysfunction in humans. Both manifestations exhibit the extensive presence of focal axonal swellings (FAS). FAS compromises the information encoded in spike trains, thus leading to potentially severe functional deficits. Complicating our understanding of the impact of FAS is our inability to access small scale injuries with non-invasive methods, the overall complexity of neuronal pathologies, and our limited knowledge of how networks process biological signals. Building on Hopfield's pioneering work, we extend a model for associative memory to account for FAS and its impact on memory encoding. We calibrate all FAS parameters from biophysical observations of their statistical distribution and size, providing a framework to simulate the effects of brain disorders on memory recall performance. A face recognition example is used to demonstrate and validate the functionality of the novel model. Our results link memory recall ability to observed FAS statistics, allowing for a description of different stages of brain disorders within neuronal networks. This provides a first theoretical model to bridge experimental observations of FAS in neurodegeneration and TBI with compromised memory recall, thus closing the large gap between theory and experiment on how biological signals are processed in damaged, high-dimensional functional networks. The work further lends new insight into positing diagnostic tools to measure cognitive deficits.
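As a rough illustration of the setting (not the paper's calibrated model), the sketch below stores patterns in a standard Hopfield network and mimics FAS damage by zeroing a random fraction of the weights, i.e. crudely treating swollen axons as blocked connections:

```python
import numpy as np

def hopfield_recall(patterns, probe, fas_fraction=0.0, steps=20, seed=0):
    """Hopfield associative memory with a crude stand-in for focal
    axonal swellings: a random fraction of weights is zeroed out.
    Illustrative only; the paper calibrates FAS effects from
    biophysical distributions rather than deleting weights at random.
    """
    rng = np.random.default_rng(seed)
    n = patterns.shape[1]
    W = patterns.T @ patterns / n          # Hebbian weight matrix
    np.fill_diagonal(W, 0.0)
    W = W * (rng.random(W.shape) >= fas_fraction)  # injured connections
    s = probe.copy()
    for _ in range(steps):                 # synchronous sign updates
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

rng = np.random.default_rng(1)
patterns = rng.choice([-1.0, 1.0], size=(3, 200))        # stored memories
noisy = patterns[0] * rng.choice([1, -1], p=[0.9, 0.1], size=200)
recovered = hopfield_recall(patterns, noisy, fas_fraction=0.3)
print(np.mean(recovered == patterns[0]))   # overlap with the stored pattern
```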
Yanlu Xie, Yue Chen, Man Li (2019)
Most mathematical forgetting-curve models fit the forgetting data well under one-time rather than repeated learning conditions. In this paper, a convolution model of the forgetting curve is proposed to simulate the memory process during learning. In this model, the memory ability (i.e., the central procedure in the working-memory model) and the learning material (i.e., the input in the working-memory model) are regarded as the system function and the input function, respectively. The forgetting status (i.e., the output in the working-memory model) is regarded as the output function, namely the convolution of the memory ability and the learning material. The model is applied to simulate forgetting curves in different situations. The results show that the model can simulate forgetting curves not only under the one-time learning condition but also under repeated learning. The model is further verified in experiments on Mandarin tone learning by Japanese learners, where the predicted curve fits the test points well.
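A toy numerical version of this convolution view, assuming an exponential memory-ability kernel and impulse-like learning events (both assumptions of this sketch, not fitted parameters from the paper), could look like this:

```python
import numpy as np

# Retained memory = learning input convolved with a decaying
# "memory ability" kernel (the system function).
t = np.arange(0, 100)                    # time, arbitrary units
kernel = np.exp(-t / 15.0)               # memory ability: exponential decay

one_shot = np.zeros_like(t, dtype=float)
one_shot[0] = 1.0                        # a single learning event
repeated = np.zeros_like(t, dtype=float)
repeated[[0, 20, 40, 60]] = 1.0          # spaced repetitions

forget_once = np.convolve(one_shot, kernel)[:len(t)]
forget_rep = np.convolve(repeated, kernel)[:len(t)]
print(forget_once[80], forget_rep[80])   # repetition yields higher retention
```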
