
Cross-situational word learning is based on the notion that a learner can determine the referent of a word by finding something in common across many observed uses of that word. Here we propose an adaptive learning algorithm with a parameter that controls the strength of the reinforcement applied to associations between concurrent words and referents, and a parameter that regulates inference, which draws on built-in biases, such as mutual exclusivity, and on information from past learning events. By adjusting these parameters so that the model predictions agree with data from representative experiments on cross-situational word learning, we were able to explain the learning strategies adopted by the participants of those experiments in terms of a trade-off between reinforcement and inference. These strategies can vary widely depending on the conditions of the experiments. For instance, in fast-mapping experiments (i.e., experiments in which the correct referent can, in principle, be inferred from a single observation) inference is prevalent, whereas in segregated contextual diversity experiments (i.e., the referents are separated into groups and are exhibited only with members of their own group) reinforcement is predominant. Other experiments are explained by more balanced doses of reinforcement and inference.
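The reinforcement/inference trade-off described above can be illustrated with a toy learner. This is a minimal sketch, not the paper's model: the reinforcement increment `chi`, the "claimed" threshold, and the blocking rule are illustrative assumptions, with mutual exclusivity standing in for the inference component.

```python
import random

def learn_mapping(n_words, context_size, n_trials, chi=0.3, seed=0):
    """Toy cross-situational learner mixing reinforcement and inference.
    Each trial shows a target word with its true referent plus
    `context_size` random confounders.  Co-occurring word-object pairs are
    reinforced by `chi`; a crude mutual-exclusivity bias (one form of
    inference) blocks reinforcement of objects already claimed by another
    word.  All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    A = [[0.0] * n_words for _ in range(n_words)]   # A[w][o]: association strength
    for _ in range(n_trials):
        w = rng.randrange(n_words)                  # target word; object w is its referent
        others = [o for o in range(n_words) if o != w]
        context = [w] + rng.sample(others, context_size)
        for o in context:
            # mutual exclusivity: skip objects that are already the
            # strongest (and sufficiently strong) referent of another word
            claimed = any(A[w2][o] == max(A[w2]) > 1.0
                          for w2 in range(n_words) if w2 != w)
            if not claimed:
                A[w][o] += chi                      # reinforcement step
    return A

A = learn_mapping(n_words=8, context_size=3, n_trials=2000)
guesses = [max(range(8), key=A[w].__getitem__) for w in range(8)]
```

Because the true referent co-occurs with its word on every trial while confounders appear only sporadically, the row-wise maxima of `A` recover (most of) the word-object mapping after enough trials.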
One explanation for the acquisition of word-object mappings is associative learning in a cross-situational scenario. Here we present analytical results on the performance of a simple associative learning algorithm for acquiring a one-to-one mapping between $N$ objects and $N$ words based solely on the co-occurrence between objects and words. In particular, a learning trial in our scenario consists of the presentation of $C + 1 < N$ objects together with a target word, which refers to one of the objects in the context. We find that the learning times are distributed exponentially and that the learning rates are given by $\ln\left[\frac{N(N-1)}{C + (N-1)^{2}}\right]$ when the $N$ target words are sampled randomly, and by $\frac{1}{N} \ln\left[\frac{N-1}{C}\right]$ when they follow a deterministic presentation sequence. This learning performance is much superior to that exhibited by humans and by more realistic learning algorithms in cross-situational experiments. We show that introducing discrimination limitations through Weber's law, together with forgetting, reduces the performance of the associative algorithm to the human level.
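The learning scenario above can be sketched with a set-intersection learner, which is an idealised stand-in for the associative algorithm rather than the paper's exact update rule: each trial presents a target word with its referent plus $C$ confounders ($C+1$ objects in total), and a word counts as learned when its candidate set shrinks to a single object. The helper `rate_random` simply transcribes the closed-form rate quoted for random word sampling.

```python
import random
from math import log

def xsit_elimination(N, C, seed=0, max_trials=100_000):
    """Cross-situational learning by elimination: for each word, keep only
    the objects that appeared in every one of that word's trials.  Returns
    the trial index at which each word's referent becomes unique (an
    illustrative proxy for the learning time)."""
    rng = random.Random(seed)
    cand = [set(range(N)) for _ in range(N)]        # candidate referents per word
    learned_at = [None] * N
    for t in range(1, max_trials + 1):
        w = rng.randrange(N)                        # random word sampling
        others = [o for o in range(N) if o != w]
        context = {w} | set(rng.sample(others, C))  # referent + C confounders
        cand[w] &= context                          # intersect with the context
        if learned_at[w] is None and len(cand[w]) == 1:
            learned_at[w] = t
        if all(x is not None for x in learned_at):
            break
    return learned_at

def rate_random(N, C):
    """Learning rate quoted in the abstract for random word sampling."""
    return log(N * (N - 1) / (C + (N - 1) ** 2))

times = xsit_elimination(N=10, C=2)
```

Since the true referent is present in every trial of its word, it can never be eliminated, so the candidate sets converge to the correct singletons.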
We study analytically a variant of the one-dimensional majority-vote model in which an individual retains its opinion in case of a tie among its neighbors' opinions. The individuals are fixed at the sites of a ring of size $L$ and can interact with their nearest neighbors only. The interesting feature of this model is that it exhibits an infinity of spatially heterogeneous absorbing configurations for $L \to \infty$, whose statistical properties we probe analytically using a mean-field framework based on the decomposition of the $L$-site joint probability distribution into the $n$-contiguous-site joint distributions, the so-called $n$-site approximation. To describe the broken-ergodicity steady state of the model we solve analytically the mean-field dynamic equations for arbitrary time $t$ in the cases $n=3$ and $n=4$. The asymptotic limit $t \to \infty$ reveals the mapping between the statistical properties of the random initial configurations and those of the final absorbing configurations. For the pair approximation ($n=2$) we derive that mapping using a trick that avoids solving the full dynamics. Most remarkably, we find that the predictions of the 4-site approximation reduce to those of the 3-site approximation for expectations involving three contiguous sites. In addition, those expectations fit the Monte Carlo data perfectly, and so we conjecture that they are in fact the exact expectations for the one-dimensional majority-vote model.
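The dynamics described above can be simulated directly. The sketch below assumes asynchronous (random sequential) updates, which is an assumption on our part; the update rule itself follows the abstract: a site adopts the common opinion of its two nearest neighbors and keeps its own opinion when they disagree (a tie). A configuration is absorbing once no site has both neighbors united against it, i.e., every domain has length at least two.

```python
import random

def is_absorbing(s):
    """Absorbing iff no site's two neighbours are equal and opposite to it."""
    L = len(s)
    return all(not (s[(i - 1) % L] == s[(i + 1) % L] != s[i]) for i in range(L))

def run_majority_vote(L=64, seed=1, max_sweeps=10_000):
    """Random sequential dynamics of the 1D majority-vote variant on a
    ring of L sites with opinions +/-1: a site flips only when both
    nearest neighbours hold the opposite opinion; a tie leaves it
    unchanged.  Runs until an absorbing configuration is reached (or the
    sweep budget is exhausted) and returns the final configuration."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(L)]     # random initial configuration
    for _ in range(max_sweeps):
        for _ in range(L):                          # one Monte Carlo sweep
            i = rng.randrange(L)
            left, right = s[(i - 1) % L], s[(i + 1) % L]
            if left == right:
                s[i] = left                         # adopt the neighbours' majority
        if is_absorbing(s):
            break
    return s

s = run_majority_vote(L=64, seed=1)
```

Each flip removes two domain walls, so the number of flips is bounded by $L/2$ and an absorbing configuration, generally spatially heterogeneous, is reached quickly.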
