
Training recurrent networks to generate hypotheses about how the brain solves hard navigation problems

Publication date: 2016
Field: Biology
Language: English





Self-localization during navigation with noisy sensors in an ambiguous world is computationally challenging, yet animals and humans excel at it. In robotics, Simultaneous Localization and Mapping (SLAM) algorithms solve this problem through joint sequential probabilistic inference of the robot's own coordinates and those of external spatial landmarks. We generate the first neural solution to the SLAM problem by training recurrent LSTM networks to perform a set of hard 2D navigation tasks that include generalization to completely novel trajectories and environments. The hidden unit representations exhibit several key properties of hippocampal place cells, including stable tuning curves that remap between environments. Our result is also a proof of concept for end-to-end learning of a SLAM algorithm using recurrent networks, and a demonstration of why this approach may have some advantages for robotic SLAM.
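A minimal sketch of the training setup the abstract describes, assuming PyTorch and illustrative choices throughout (network size, trajectory statistics, and noise level are assumptions, not the authors' published values): an LSTM receives noisy velocity cues and is trained to report the 2D position implied by the true trajectory.

    import torch
    import torch.nn as nn

    class PathIntegrator(nn.Module):
        """LSTM that integrates velocity inputs into a 2D position estimate."""
        def __init__(self, n_inputs=2, n_hidden=256):
            super().__init__()
            self.lstm = nn.LSTM(n_inputs, n_hidden, batch_first=True)
            self.readout = nn.Linear(n_hidden, 2)  # (x, y) estimate per step

        def forward(self, velocities):
            hidden, _ = self.lstm(velocities)      # (batch, steps, n_hidden)
            return self.readout(hidden)            # (batch, steps, 2)

    def make_trajectories(batch=64, steps=100, noise=0.05):
        # Random-walk velocities; the network sees a noisy copy but is
        # trained to report the position implied by the true velocities.
        v_true = 0.1 * torch.randn(batch, steps, 2)
        positions = torch.cumsum(v_true, dim=1)
        return v_true + noise * torch.randn_like(v_true), positions

    model = PathIntegrator()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(1000):
        v_obs, target = make_trajectories()
        loss = nn.functional.mse_loss(model(v_obs), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The place-cell-style analyses mentioned in the abstract would then examine the tuning of the trained hidden states across positions and environments.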



Related research

A popular theory of perceptual processing holds that the brain learns both a generative model of the world and a paired recognition model using variational Bayesian inference. Most hypotheses of how the brain might learn these models assume that neurons in a population are conditionally independent given their common inputs. This simplification is likely not compatible with the type of local recurrence observed in the brain. Seeking an alternative that is compatible with complex inter-dependencies yet consistent with known biology, we argue here that the cortex may learn with an adversarial algorithm. Many observable symptoms of this approach would resemble known neural phenomena, including wake/sleep cycles and oscillations that vary in magnitude with surprise, and we describe how further predictions could be tested. We illustrate the idea on recurrent neural networks trained to model image and video datasets. This framework for learning brings variational inference closer to neuroscience and yields multiple testable hypotheses.
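A toy illustration of the adversarial recurrent setup just described (an assumption-laden sketch, not the paper's model: the "sensory data" are noisy sine waves, and all sizes and learning rates are arbitrary), with the discriminator and generator updates loosely playing the wake and sleep roles:

    import torch
    import torch.nn as nn

    class SeqGenerator(nn.Module):
        def __init__(self, n_latent=8, n_hidden=64):
            super().__init__()
            self.rnn = nn.GRU(n_latent, n_hidden, batch_first=True)
            self.out = nn.Linear(n_hidden, 1)

        def forward(self, z):
            h, _ = self.rnn(z)
            return self.out(h)                     # generated sequence

    class SeqDiscriminator(nn.Module):
        def __init__(self, n_hidden=64):
            super().__init__()
            self.rnn = nn.GRU(1, n_hidden, batch_first=True)
            self.score = nn.Linear(n_hidden, 1)

        def forward(self, x):
            h, _ = self.rnn(x)
            return self.score(h[:, -1])            # real-vs-generated logit

    def real_batch(batch=32, steps=50):
        # Noisy sine waves stand in for sensory sequences.
        t = torch.linspace(0, 6.28, steps)
        phase = 6.28 * torch.rand(batch, 1)
        return (torch.sin(t + phase) + 0.1 * torch.randn(batch, steps)).unsqueeze(-1)

    G, D = SeqGenerator(), SeqDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)
    for step in range(2000):
        x_real = real_batch()
        x_fake = G(torch.randn(32, 50, 8))
        # "Wake": the discriminator learns to flag generated sequences.
        loss_d = bce(D(x_real), ones) + bce(D(x_fake.detach()), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # "Sleep": the generator learns to make its sequences pass as real.
        loss_g = bce(D(x_fake), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()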
260 - Samuel J. Gershman 2019
The free energy principle has been proposed as a unifying account of brain function. It is closely related to, and in some cases subsumes, earlier unifying ideas such as Bayesian inference, predictive coding, and active learning. This article clarifies these connections, teasing apart distinctive and shared predictions.
122 - Sophia U. David 2020
When presented with information of any type, from music to language to mathematics, the human mind subconsciously arranges it into a network. A network puts pieces of information like musical notes, syllables or mathematical concepts into context by linking them together. These networks help our minds organize information and anticipate what is coming. Here we present two questions about network building. 1) Can humans more easily learn some types of networks than others? 2) Do humans find some links between ideas more surprising than others? The answer to both questions is Yes, and we explain why. The findings provide much-needed insight into the ways that humans learn about the networked world around them. Moreover, the study paves the way for future efforts seeking to optimize how information is presented to accelerate human learning.
This paper proposes a novel topological learning framework that can integrate brain networks of different sizes and topology through persistent homology. This is made possible by a new topological loss function that enables this challenging task. The proposed loss function bypasses the intrinsic computational bottleneck of matching networks. We validate the method in extensive statistical simulations with ground truth to assess the effectiveness of the topological loss in discriminating networks with different topology. The method is further applied to a twin brain imaging study to determine whether the brain network is genetically heritable. The challenge there is overlaying the topologically different functional brain networks obtained from resting-state functional MRI (fMRI) onto the template structural brain network obtained from diffusion MRI (dMRI).
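As a rough, hypothetical illustration of a persistence-based loss in this spirit (not the paper's exact formulation): for 0-dimensional topology, the minimum spanning tree of a weighted network determines the filtration values at which connected components merge, and two networks of different sizes can be compared by monotonically matching their sorted merge values, sidestepping explicit node correspondence.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def merge_values(adjacency):
        # Treat edge weights as distances; MST edge weights are the
        # filtration values at which components merge (0D persistence).
        mst = minimum_spanning_tree(adjacency)
        return np.sort(mst.data)

    def topological_loss(adj_a, adj_b):
        va, vb = merge_values(adj_a), merge_values(adj_b)
        # Resample onto a common grid so networks of different sizes
        # can still be matched monotonically.
        n = max(len(va), len(vb))
        grid = np.linspace(0, 1, n)
        va_i = np.interp(grid, np.linspace(0, 1, len(va)), va)
        vb_i = np.interp(grid, np.linspace(0, 1, len(vb)), vb)
        return np.mean((va_i - vb_i) ** 2)

    # Example: compare two random weighted networks of different sizes.
    rng = np.random.default_rng(0)
    A = np.triu(rng.random((20, 20)), 1); A = A + A.T
    B = np.triu(rng.random((30, 30)), 1); B = B + B.T
    print(topological_loss(A, B))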
89 - Moo K. Chung 2021
Recent developments in graph theoretic analysis of complex networks have led to a deeper understanding of brain networks. Many complex networks show similar macroscopic behaviors despite differences in their microscopic details. Probably the two most often observed characteristics of complex networks are the scale-free and small-world properties. In this paper, we explore whether brain networks exhibit scale-free and small-world structure, among other graph-theoretic properties.
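For context, both properties can be checked with standard graph tools. The snippet below (a toy synthetic graph, not brain data) estimates a small-world index against a degree-preserving random reference and fits the log-log slope of the degree distribution as a crude scale-free check:

    import networkx as nx
    import numpy as np

    G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)

    # Small-worldness: high clustering with short paths, relative to a
    # degree-matched randomized reference graph.
    C, L = nx.average_clustering(G), nx.average_shortest_path_length(G)
    R = nx.random_reference(G, seed=1)
    Cr, Lr = nx.average_clustering(R), nx.average_shortest_path_length(R)
    sigma = (C / Cr) / (L / Lr)  # sigma >> 1 suggests a small-world network
    print(f"sigma = {sigma:.2f}")

    # Scale-free check: a power-law degree distribution is roughly linear
    # on a log-log plot; here we just fit that slope as a rough indicator.
    degrees = np.array([d for _, d in G.degree()])
    values, counts = np.unique(degrees, return_counts=True)
    slope = np.polyfit(np.log(values), np.log(counts / counts.sum()), 1)[0]
    print(f"log-log degree slope = {slope:.2f}")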