
Convolution Forgetting Curve Model for Repeated Learning

Published by Yanlu Xie
Publication date: 2019
Language: English





Most mathematical forgetting-curve models fit forgetting data well under a single-exposure learning condition but not under repeated learning. In this paper, a convolution model of the forgetting curve is proposed to simulate the memory process during learning. In this model, memory ability (i.e., the central procedure in the working memory model) and the learning material (i.e., the input in the working memory model) are regarded as the system function and the input function, respectively. The status of forgetting (i.e., the output in the working memory model) is regarded as the output function, that is, the convolution of memory ability and learning material. The model is applied to simulate forgetting curves in different situations. The results show that it can simulate forgetting curves not only under the single-exposure learning condition but also under repeated learning. The model is further verified in experiments on Mandarin tone learning by Japanese learners, where the predicted curve fits the test points well.
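The convolution structure can be illustrated with a short numerical sketch. The abstract does not specify the form of the memory-ability kernel or the learning input, so the exponential kernel, the time constant, and the unit impulses at each study session below are illustrative assumptions, not the authors' parameterization.

    import numpy as np

    # Minimal sketch of a convolution forgetting-curve model.
    # Assumed (not from the paper): exponential memory-ability kernel,
    # unit learning impulses at each repeated study session.
    dt = 0.1
    t = np.arange(0, 30, dt)            # time in days
    h = np.exp(-t / 5.0)                # memory ability h(t): the system function

    x = np.zeros_like(t)                # learning material x(t): the input function
    for day in (0, 5, 12):              # repeated study sessions
        x[round(day / dt)] = 1.0

    # Forgetting status y(t) = (x * h)(t): convolve input with memory ability.
    y = np.convolve(x, h)[: len(t)]
    print(f"retention at day 20: {y[round(20 / dt)]:.3f}")

Each additional study impulse superposes a fresh decaying trace, so the combined curve decays more slowly after repeated sessions, which is the qualitative behavior the model is meant to capture.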




Read also

Multi-view learning improves learning performance by utilizing multi-view data: data collected from multiple sources, or feature sets extracted from the same data source. This approach is suitable for primate brain-state decoding using cortical neural signals, because the complementary components of simultaneously recorded neural signals, local field potentials (LFPs) and action potentials (spikes), can be treated as two views. In this paper, we extended the broad learning system (BLS), a recently proposed wide neural network architecture, from single-view learning to multi-view learning, and validated its performance in decoding monkeys' oculomotor decisions from medial frontal LFPs and spikes. We demonstrated that medial frontal LFPs and spikes in non-human primates do contain complementary information about the oculomotor decision, and that the proposed multi-view BLS decodes the oculomotor decision more effectively than several classical and state-of-the-art single-view and multi-view learning approaches.
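As a rough illustration of the idea, the sketch below maps each view through its own random feature nodes, concatenates them, adds enhancement nodes, and solves the output weights with ridge-regularized least squares, which is the usual BLS recipe. The random placeholder data, the layer sizes, and fusion by simple concatenation are assumptions for illustration, not the paper's exact multi-view architecture.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_nodes(X, n_nodes):
        """Random nonlinear mapping used as BLS feature/enhancement nodes."""
        W = rng.normal(size=(X.shape[1], n_nodes))
        b = rng.normal(size=n_nodes)
        return np.tanh(X @ W + b)

    # Hypothetical stand-ins for the two neural-signal views.
    n_trials = 200
    lfp = rng.normal(size=(n_trials, 32))     # local field potentials
    spikes = rng.normal(size=(n_trials, 64))  # spike features
    y = rng.integers(0, 2, n_trials)          # binary oculomotor decision
    Y = np.eye(2)[y]                          # one-hot targets

    # Per-view feature nodes, concatenated, then joint enhancement nodes.
    Z = np.hstack([random_nodes(lfp, 50), random_nodes(spikes, 50)])
    H = random_nodes(Z, 100)
    A = np.hstack([Z, H])

    # Output weights via ridge-regularized least squares (no backprop).
    W_out = np.linalg.solve(A.T @ A + 1e-2 * np.eye(A.shape[1]), A.T @ Y)
    pred = (A @ W_out).argmax(axis=1)
    print(f"training accuracy: {(pred == y).mean():.2f}")

Because only the output weights are fitted, and in closed form, training is fast, which is the main appeal of BLS-style wide networks for this kind of decoding.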
Jack A. Cook, 2020
This thesis is designed to be a self-contained exposition of the neurobiological and mathematical aspects of sensory perception, memory, and learning, with a bias towards olfaction. The final chapters introduce a new approach to modeling that focuses on the geometry of the system rather than element-wise dynamics. Additionally, we construct an organism-independent model for olfactory processing, something currently missing from the literature.
An explanation for the acquisition of word-object mappings is associative learning in a cross-situational scenario. Here we present analytical results on the performance of a simple associative learning algorithm for acquiring a one-to-one mapping between $N$ objects and $N$ words based solely on the co-occurrence between objects and words. In particular, a learning trial in our scenario consists of the presentation of $C + 1 < N$ objects together with a target word, which refers to one of the objects in the context. We find that the learning times are distributed exponentially and that the learning rates are given by $\ln\left[\frac{N(N-1)}{C + (N-1)^{2}}\right]$ when the $N$ target words are sampled randomly and by $\frac{1}{N} \ln\left[\frac{N-1}{C}\right]$ when they follow a deterministic presentation sequence. This learning performance is much superior to that exhibited by humans and by more realistic learning algorithms in cross-situational experiments. We show that introducing discrimination limitations via Weber's law, together with forgetting, reduces the performance of the associative algorithm to the human level.
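The co-occurrence counting scenario is easy to simulate. In the sketch below, each trial pairs a target word with its true object plus $C$ random distractors, and a word counts as acquired once its true object strictly dominates all competitors in the co-occurrence table; that acquisition criterion is an illustrative assumption, not necessarily the exact criterion used in the paper's analysis.

    import numpy as np

    rng = np.random.default_rng(1)
    N, C = 10, 3                 # N word-object pairs, C distractor objects per trial
    counts = np.zeros((N, N))    # co-occurrence counts: counts[word, object]

    # Random sampling of target words; context = true object + C distractors.
    for trial in range(5000):
        w = rng.integers(N)      # target word; its true object is also w
        distractors = rng.choice([o for o in range(N) if o != w], C, replace=False)
        context = np.append(distractors, w)
        counts[w, context] += 1

    # Acquired if the true object's count strictly exceeds every competitor's.
    learned = [(counts[w, w] > np.delete(counts[w], w)).all() for w in range(N)]
    print(f"words acquired: {sum(learned)}/{N}")

    # Analytic learning rate quoted above for the random-sampling case.
    rate = np.log(N * (N - 1) / (C + (N - 1) ** 2))
    print(f"analytic rate: {rate:.4f}")

The true object co-occurs with its word on every presentation, while each distractor co-occurs only a fraction $C/(N-1)$ of the time, so the counts separate quickly, consistent with the exponential learning times reported above.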
Zhizhong Li, Derek Hoiem, 2016
When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises when we add new capabilities to a Convolutional Neural Network (CNN) but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new-task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature-extraction and fine-tuning adaptation techniques, and performs similarly to multitask learning that uses the original task data we assume unavailable. A more surprising observation is that, when the old and new task datasets are similar, Learning without Forgetting may be able to replace fine-tuning and improve new-task performance.
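The core of the method is its training objective: fit the new task with ordinary cross-entropy while keeping the network's old-task outputs close to the responses recorded before training began. The original paper formulates the second term as a temperature-scaled cross-entropy; the sketch below uses the equivalent KL-divergence form common in distillation code. The function name, the temperature, and the equal weighting of the two terms are illustrative choices, not the paper's exact settings.

    import torch
    import torch.nn.functional as F

    def lwf_loss(new_logits, new_labels, old_logits, recorded_old_logits, T=2.0):
        """Learning-without-Forgetting style objective (illustrative sketch)."""
        # Standard cross-entropy on the new task's labels.
        new_task = F.cross_entropy(new_logits, new_labels)
        # Distillation: keep current old-task outputs close to the responses
        # recorded on the new-task images before training started.
        old_task = F.kl_div(
            F.log_softmax(old_logits / T, dim=1),
            F.softmax(recorded_old_logits / T, dim=1),
            reduction="batchmean",
        ) * T * T
        return new_task + old_task

Because the distillation targets are computed once from the frozen original network on the new-task images, no old-task data is needed at any point, which is exactly the constraint the method is designed around.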
Scientific studies of consciousness rely on objects whose existence is assumed to be independent of any consciousness. On the contrary, we assume consciousness to be fundamental, and that one of its main features is being other-dependent. We set up a framework that naturally subsumes this feature by defining a compact closed category where morphisms represent conscious processes. These morphisms are compositions of a set of generators, each specified by its relations with the other generators, and therefore co-dependent. The framework is general enough to fit well into a compositional model of consciousness. Interestingly, we also show how our proposal may become a step towards avoiding the hard problem of consciousness, and thereby address the combination problem of conscious experiences.
