
Memory semantization through perturbed and adversarial dreaming

Published by Nicolas Deperrois
Publication date: 2021
Research language: English





Classical theories of memory consolidation emphasize the importance of replay in extracting semantic information from episodic memories. However, the characteristic creative nature of dreams suggests that memory semantization may go beyond merely replaying previous experiences. We propose that rapid-eye-movement (REM) dreaming is essential for efficient memory semantization by randomly combining episodic memories to create new, virtual sensory experiences. We support this hypothesis by implementing a cortical architecture with hierarchically organized feedforward and feedback pathways, inspired by generative adversarial networks (GANs). Learning in our model is organized across three different global brain states mimicking wakefulness, non-REM (NREM) and REM sleep, optimizing different, but complementary objective functions. We train the model in an unsupervised fashion on standard datasets of natural images and evaluate the quality of the learned representations. Our results suggest that adversarial dreaming during REM sleep is essential for extracting memory contents, while perturbed dreaming during NREM sleep improves robustness of the latent representation to noisy sensory inputs. The model provides a new computational perspective on sleep states, memory replay and dreams and suggests a cortical implementation of GANs.
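To make the three-state training scheme concrete, below is a minimal PyTorch sketch of the wake, NREM and REM phases, assuming a simple fully connected encoder E (feedforward pathway), generator G (feedback pathway) and discriminator D. The network sizes, loss weights, latent mixing ratio and occlusion perturbation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the three brain states described
# above: wake reconstruction, NREM "perturbed" dreaming, REM "adversarial"
# dreaming. All sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

latent_dim, img_dim = 32, 784  # e.g. flattened 28x28 images (assumption)

E = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim))
D = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(), nn.Linear(256, 1))

opt_eg = torch.optim.Adam([*E.parameters(), *G.parameters()], lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def wake(x):
    """Wake: encode sensory input, reconstruct it, and store the latent as an
    episodic memory; D learns to label real inputs as real."""
    z = E(x)
    recon_loss = ((G(z) - x) ** 2).mean()
    d_real = bce(D(x), torch.ones(x.size(0), 1))
    return recon_loss, d_real, z.detach()  # detached latent = stored memory

def nrem(z_mem):
    """NREM: replay a stored latent through G, occlude part of the dream, and
    train E to still recover the latent (robustness objective)."""
    dream = G(z_mem).detach()
    dream[:, : img_dim // 2] = 0.0  # crude occlusion perturbation (assumption)
    return ((E(dream) - z_mem) ** 2).mean()

def rem(z_a, z_b):
    """REM: mix two episodic latents with noise into a virtual experience;
    G tries to fool D, while D learns to label the dream as fake."""
    z = 0.25 * z_a + 0.25 * z_b + 0.5 * torch.randn_like(z_a)
    dream = G(z)
    g_loss = bce(D(dream), torch.ones(dream.size(0), 1))
    d_fake = bce(D(dream.detach()), torch.zeros(dream.size(0), 1))
    return g_loss, d_fake

# One illustrative "day-night" cycle on random data:
x = torch.rand(16, img_dim)
recon, d_real, z_mem = wake(x)
nrem_loss = nrem(z_mem)
g_loss, d_fake = rem(z_mem, z_mem.roll(1, dims=0))
opt_eg.zero_grad(); (recon + nrem_loss + g_loss).backward(); opt_eg.step()
opt_d.zero_grad(); (d_real + d_fake).backward(); opt_d.step()
```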




Read also

When one is presented with an item or a face, one can sometimes have a sense of recognition without being able to recall where or when one has encountered it before. This sense of recognition is known as familiarity. Following previous computational models of familiarity memory, we investigate the dynamical properties of familiarity discrimination and contrast two different familiarity discriminators: one based on the energy of the neural network, and the other based on the time derivative of the energy. We show how the familiarity signal decays after a stimulus is presented, and examine the robustness of the familiarity discriminator in the presence of random fluctuations in neural activity. For both discriminators we establish, via a combined method of signal-to-noise ratio and mean-field analysis, how the maximum number of successfully discriminated stimuli depends on the noise level.
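A minimal NumPy sketch of the first, energy-based discriminator in a Hopfield-style network is given below; the network size, pattern count and the implied decision threshold are assumptions for illustration.

```python
# Energy-based familiarity discrimination in a Hopfield-style network.
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 40                        # neurons, stored patterns (assumptions)
patterns = rng.choice([-1, 1], size=(P, N))

W = patterns.T @ patterns / N         # Hebbian storage of the patterns
np.fill_diagonal(W, 0.0)

def energy(s):
    """Hopfield energy E(s) = -1/2 s^T W s; stored (familiar) patterns sit in
    deep energy minima, novel patterns do not."""
    return -0.5 * s @ W @ s

familiar = energy(patterns[0])
novel = energy(rng.choice([-1, 1], size=N))
print(f"familiar: {familiar:.1f}, novel: {novel:.1f}")
# A familiar pattern's energy concentrates near -N/2 (here ~ -250), while a
# random novel pattern's energy fluctuates around 0, so a simple threshold
# on the energy discriminates familiarity.
```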
Working memory (WM) allows information to be stored and manipulated over short time scales. Performance on WM tasks is thought to be supported by the frontoparietal system (FPS), the default mode system (DMS), and interactions between them. Yet little is known about how these systems and their interactions relate to individual differences in WM performance. We address this gap in knowledge using functional MRI data acquired during the performance of a 2-back WM task, as well as diffusion tensor imaging data collected in the same individuals. We show that the strength of functional interactions between the FPS and DMS during task engagement is inversely correlated with WM performance, and that this strength is modulated by the activation of FPS regions but not DMS regions. Next, we use a clustering algorithm to identify two distinct subnetworks of the FPS, and find that these subnetworks display distinguishable patterns of gene expression. Activity in one subnetwork is positively associated with the strength of FPS-DMS functional interactions, while activity in the second subnetwork is negatively associated. Further, the pattern of structural linkages of these subnetworks explains their differential capacity to influence the strength of FPS-DMS functional interactions. To determine whether these observations could provide a mechanistic account of large-scale neural underpinnings of WM, we build a computational model of the system composed of coupled oscillators. Modulating the amplitude of the subnetworks in the model causes the expected change in the strength of FPS-DMS functional interactions, thereby offering support for a mechanism in which subnetwork activity tunes functional interactions. Broadly, our study presents a holistic account of how regional activity, functional interactions, and structural linkages together support individual differences in WM in humans.
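As a toy illustration of the coupled-oscillator account, the sketch below simulates two Kuramoto populations standing in for the FPS and DMS and reads out their "functional interaction" as inter-population phase coherence. Note that the knob here is the inter-population coupling rather than subnetwork amplitude, and all parameters are assumptions rather than the authors' model.

```python
# Two coupled Kuramoto populations; stronger inter-population coupling
# yields stronger phase coherence between them.
import numpy as np

rng = np.random.default_rng(1)

def simulate(k_between, n=50, k_within=1.0, dt=0.01, steps=4000):
    """Euler-integrate two Kuramoto populations with intra- and inter-group
    coupling; return their time-averaged inter-population phase coherence."""
    # Two groups with slightly detuned mean frequencies (assumption)
    omega = np.r_[rng.normal(0.0, 0.1, n), rng.normal(0.3, 0.1, n)]
    theta = rng.uniform(0, 2 * np.pi, 2 * n)
    group = np.r_[np.zeros(n), np.ones(n)]
    K = np.where(group[:, None] == group[None, :], k_within, k_between) / (2 * n)
    coherence = []
    for _ in range(steps):
        theta = theta + dt * (omega + (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1))
        z1 = np.exp(1j * theta[:n]).mean()   # order parameter, population 1
        z2 = np.exp(1j * theta[n:]).mean()   # order parameter, population 2
        coherence.append(np.cos(np.angle(z1) - np.angle(z2)))
    return float(np.mean(coherence[steps // 2:]))  # discard transient

for k in (0.0, 0.5, 1.0):
    print(f"inter-population coupling {k}: coherence ~ {simulate(k):+.2f}")
```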
The ability to store continuous variables in the state of a biological system (e.g. a neural network) is critical for many behaviours. Most models for implementing such a memory manifold require hand-crafted symmetries in the interactions or precise fine-tuning of parameters. We present a general principle that we refer to as frozen stabilisation, which allows a family of neural networks to self-organise to a critical state exhibiting memory manifolds without parameter fine-tuning or symmetries. These memory manifolds exhibit a true continuum of memory states and can be used as general-purpose integrators for inputs aligned with the manifold. Moreover, frozen stabilisation allows robust memory manifolds in small networks, which is relevant to debates about implementing continuous attractors with a small number of neurons in light of recent experimental discoveries.
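For contrast with frozen stabilisation, below is a sketch of the classic fine-tuned construction the abstract refers to: a linear line attractor whose connectivity is hand-crafted to have eigenvalue exactly 1 along one direction, so the network integrates and then holds inputs aligned with that direction. This illustrates the fine-tuning problem, not the frozen-stabilisation mechanism itself.

```python
# Hand-tuned line attractor: dx/dt = -x + Wx + I, with W = u u^T so that the
# eigenvalue along u is exactly 1 (no decay, no growth along the manifold).
import numpy as np

n = 100
rng = np.random.default_rng(2)
u = rng.normal(size=n)
u /= np.linalg.norm(u)          # memory (manifold) direction
W = np.outer(u, u)              # eigenvalue 1 along u, 0 elsewhere

dt = 0.01
x = np.zeros(n)
for t in range(2000):
    inp = 0.5 * u if t < 500 else 0.0 * u   # input pulse aligned with u
    x += dt * (-x + W @ x + inp)            # leaky dynamics + recurrence

print(round(float(u @ x), 3))   # ~2.5: the integrated value persists
# Perturbing any entry of W breaks the fine-tuning and makes the stored value
# drift or decay -- exactly the fragility frozen stabilisation avoids.
```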
We propose a single-chunk model of long-term memory that combines the basic features of the ACT-R theory and the multiple trace memory architecture. The pivot point of the developed theory is a mathematical description of the creation of new memory traces caused by learning a certain fragment of an information pattern, affected by the fragments of this pattern already retained at the current moment in time. These constructions are justified using the available psychological and physiological data. The final equation governing the learning and forgetting processes takes the form of a differential equation with a Caputo-type fractional time derivative. Several characteristic situations of the learning (continuous and discontinuous) and forgetting processes are studied numerically. In particular, it is demonstrated that, first, the learning and forgetting exponents of the corresponding power laws of the fractional memory dynamics should be regarded as independent system parameters. Second, as far as spacing effects are concerned, the longer the discontinuous learning process, the longer the time interval within which a subject remembers the information without considerable loss. Moreover, the latter relationship is a linear proportionality.
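The fractional dynamics can be illustrated with the Caputo relaxation equation D^α m(t) = -λ m(t), a pure-forgetting special case discretized below with the standard implicit L1 scheme; the exponent α and rate λ are assumed values, and the paper's full equation also includes learning terms.

```python
# Power-law forgetting from the Caputo fractional relaxation equation
# D^alpha m(t) = -lam * m(t), solved with the implicit L1 scheme.
import math
import numpy as np

alpha, lam = 0.5, 1.0           # memory exponent, forgetting rate (assumed)
dt, steps = 0.01, 5000

# L1 weights b_k = (k+1)^(1-alpha) - k^(1-alpha)
k = np.arange(steps)
b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
c = dt ** (-alpha) / math.gamma(2 - alpha)

m = np.empty(steps + 1)
m[0] = 1.0                      # memory trace right after learning
for n in range(1, steps + 1):
    # History term: sum_{j=1}^{n-1} b_j * (m[n-j] - m[n-j-1])
    hist = np.dot(b[1:n], m[n - 1:0:-1] - m[n - 2::-1]) if n > 1 else 0.0
    m[n] = (m[n - 1] - hist) / (1.0 + lam / c)

# Unlike exponential decay, the trace follows a power law ~ t^(-alpha) at
# long times, matching the slow forgetting such models are built around.
for t in (0.1, 1.0, 10.0, 50.0):
    print(f"m({t}) ~ {m[int(t / dt)]:.3f}")
```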
We analyse the storage and retrieval capacity of a recurrent neural network of spiking integrate-and-fire neurons. In the model we distinguish between a learning mode, during which the synaptic connections change according to a Spike-Timing Dependent Plasticity (STDP) rule, and a recall mode, in which connection strengths are no longer plastic. Our findings show the ability of the network to store and recall periodic phase-coded patterns when a small number of neurons has been stimulated. The self-sustained dynamics selectively produce an oscillating spiking activity that matches one of the stored patterns, depending on the initialization of the network.
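A minimal sketch of a pair-based STDP update of the kind used in such a learning mode is shown below; the time constants and amplitudes are illustrative assumptions, not the paper's parameters.

```python
# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic spike, depress otherwise.
import numpy as np

tau_plus, tau_minus = 20.0, 20.0    # ms (assumption)
a_plus, a_minus = 0.01, 0.012       # slight depression bias (assumption)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation
        return a_plus * np.exp(-dt / tau_plus)
    else:        # post before (or with) pre -> depression
        return -a_minus * np.exp(dt / tau_minus)

# Example: update one synapse over a short spike history, then freeze it,
# as would happen when switching from learning mode to recall mode.
w = 0.5
for t_pre, t_post in [(10, 15), (40, 38), (70, 90)]:
    w += stdp_dw(t_pre, t_post)
w = float(np.clip(w, 0.0, 1.0))
print(round(w, 4))
```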
