
Deep neuroethology of a virtual rodent

Added by Josh Merel
Publication date: 2019
Field: Biology
Language: English





Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems. However, this interaction between fields is less developed in the study of motor control. In this work, we develop a virtual rodent as a platform for the grounded study of motor activity in artificial models of embodied control. We then use this platform to study motor activity across contexts by training a model to solve four complex tasks. Using methods familiar to neuroscientists, we describe the behavioral representations and algorithms employed by different layers of the network using a neuroethological approach to characterize motor activity relative to the rodent's behavior and goals. We find that the model uses two classes of representations which respectively encode the task-specific behavioral strategies and task-invariant behavioral kinematics. These representations are reflected in the sequential activity and population dynamics of neural subpopulations. Overall, the virtual rodent facilitates grounded collaborations between deep reinforcement learning and motor neuroscience.
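As a rough illustration of the kind of layer-wise analysis the abstract describes, the sketch below probes one layer's activations for task-invariant kinematic content versus task-specific information using simple linear read-outs. The array shapes, variable names, and random placeholder data are assumptions for illustration only; this is not the paper's actual analysis pipeline.

```python
# Illustrative probe of one layer's activations for task-invariant kinematic
# content versus task-specific information (random placeholder data; not the
# paper's analysis pipeline).
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_steps, n_units, n_kin, n_tasks = 2000, 128, 15, 4

layer_activity = rng.standard_normal((n_steps, n_units))  # one layer's units over time
kinematics = rng.standard_normal((n_steps, n_kin))         # e.g. joint angles and velocities
task_labels = rng.integers(0, n_tasks, size=n_steps)       # which of the four tasks was active

# Task-invariant content: how well a linear read-out recovers behavioral kinematics.
kin_r2 = cross_val_score(Ridge(alpha=1.0), layer_activity, kinematics, cv=5).mean()

# Task-specific content: how well the same activity identifies the current task.
task_acc = cross_val_score(LogisticRegression(max_iter=1000),
                           layer_activity, task_labels, cv=5).mean()

print(f"kinematic decoding R^2 ~ {kin_r2:.3f}; task decoding accuracy ~ {task_acc:.3f}")
```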



Related research

Humans face the task of balancing dynamic systems near an unstable equilibrium repeatedly throughout their lives. Much research has been aimed at understanding the mechanisms of intermittent control in the context of human balance control. The present paper deals with one of the recent developments in the theory of human intermittent control, namely, the double-well model of noise-driven control activation. We demonstrate that the double-well model can reproduce the whole range of experimentally observed distributions under different conditions. Moreover, we show that a slight change in the noise intensity parameter leads to a sudden shift of the action point distribution shape, that is, a phase transition is observed.
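For concreteness, the following is a minimal Euler-Maruyama simulation of noise-driven hopping in a double-well potential. The potential V(x) = x^4/4 - x^2/2, the noise intensities, and the step sizes are illustrative choices, not the authors' model parameters.

```python
# Minimal Euler-Maruyama simulation of noise-driven hopping in a double-well
# potential V(x) = x**4/4 - x**2/2 (illustrative parameters, not the authors').
import numpy as np

def simulate(noise_sigma, n_steps=100_000, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 1.0                                    # start in the right-hand well
    for t in range(1, n_steps):
        drift = -(x[t - 1] ** 3 - x[t - 1])       # -dV/dx
        x[t] = x[t - 1] + drift * dt + noise_sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# A modest change in noise intensity changes how often the state hops between
# wells, and with it the shape of the stationary distribution.
for sigma in (0.3, 0.6):
    traj = simulate(sigma)
    print(f"sigma={sigma}: fraction of time in left well ~ {np.mean(traj < 0):.2f}")
```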
Response delay is an inherent and essential part of human actions. In the context of human balance control, the response delay is traditionally modeled using the formalism of delay-differential equations, which adopts the approximation of a fixed delay. However, experimental studies revealing substantial variability, adaptive anticipation, and non-stationary dynamics of response delay provide evidence against this approximation. In this paper, we call for the development of a fundamentally new mathematical formalism describing human response delay. To support this, we present experimental data from a simple virtual stick balancing task. Our results demonstrate that human response delay is a widely distributed random variable with complex properties, which can exhibit oscillatory and adaptive dynamics characterized by long-range correlations. Given this, we argue that the fixed-delay approximation ignores essential properties of human response, and we conclude with possible directions for the future development of new mathematical notions describing human control.
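The sketch below contrasts a fixed response delay with a broadly distributed random delay in a toy discretised unstable system with delayed proportional feedback. The dynamics, gains, and delay distribution are assumptions for illustration and do not reproduce the authors' virtual stick-balancing task.

```python
# Toy comparison of fixed versus randomly distributed response delay in a
# discretised unstable system with delayed proportional feedback (illustrative
# dynamics and parameters; not the authors' virtual stick-balancing task).
import numpy as np

def balance(delay_fn, n_steps=5000, dt=0.01, gain=2.5, growth=1.0, seed=1):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    x[0] = 0.1
    for t in range(1, n_steps):
        lag = delay_fn(rng)                              # response delay in time steps
        delayed = x[max(t - 1 - lag, 0)]                 # state the controller reacts to
        x[t] = x[t - 1] + dt * (growth * x[t - 1] - gain * delayed) \
               + 0.01 * np.sqrt(dt) * rng.standard_normal()
    return x

fixed = balance(lambda rng: 20)                                    # constant 200 ms delay
variable = balance(lambda rng: min(int(rng.gamma(4.0, 5.0)), 60))  # widely distributed delay
print(f"fixed-delay sd ~ {fixed.std():.3f}; variable-delay sd ~ {variable.std():.3f}")
```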
Biomechanical modeling of tissue deformation can be used to simulate different scenarios of longitudinal brain evolution. In this work, we present a deep learning framework for hyper-elastic strain modelling of brain atrophy during healthy ageing and in Alzheimer's disease. The framework directly models the effects of age, disease status, and scan interval to regress regional patterns of atrophy, from which a strain-based model estimates deformations. This model is trained and validated using 3D structural magnetic resonance imaging data from the ADNI cohort. Results show that the framework can estimate realistic deformations, following the known course of Alzheimer's disease, that clearly differentiate between healthy and demented patterns of ageing. This suggests the framework has the potential to be incorporated into explainable models of disease, for the exploration of interventions and counterfactual examples.
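As a hedged illustration of the covariate-to-atrophy regression step described above, the sketch below fits a small multilayer perceptron from age, disease status, and scan interval to regional atrophy values. The region count, network size, and synthetic data are placeholders; the strain-based deformation model and the ADNI imaging data are not represented.

```python
# Illustrative regression of regional atrophy from age, disease status, and
# scan interval (synthetic placeholder data; the strain-based deformation
# step and the ADNI imaging data are not modelled here).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_subjects, n_regions = 500, 68            # 68 cortical parcels is an assumed example

covariates = np.column_stack([
    rng.uniform(55, 90, n_subjects),       # age in years
    rng.integers(0, 2, n_subjects),        # disease status (0 = healthy, 1 = AD)
    rng.uniform(0.5, 3.0, n_subjects),     # scan interval in years
])
regional_atrophy = 0.01 * rng.standard_normal((n_subjects, n_regions))  # placeholder targets

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(covariates, regional_atrophy)
print(model.predict(covariates[:5]).shape)  # per-region atrophy estimates, shape (5, 68)
```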
A central challenge in neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic transmission and spiking dynamics present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs). Moreover, we find two additional surprising properties of CNNs: they are less susceptible to overfitting than their LN counterparts when trained on small amounts of data, and generalize better when tested on stimuli drawn from a different distribution (e.g. between natural scenes and white noise). Examination of trained CNNs reveals several properties. First, a richer set of feature maps is necessary for predicting the responses to natural scenes compared to white noise. Second, temporally precise responses to slowly varying inputs originate from feedforward inhibition, similar to known retinal mechanisms. Third, the injection of latent noise sources in intermediate layers enables our model to capture the sub-Poisson spiking variability observed in retinal ganglion cells. Fourth, augmenting our CNNs with recurrent lateral connections enables them to capture contrast adaptation as an emergent property of accurately describing retinal responses to natural scenes. These methods can be readily generalized to other sensory modalities and stimulus ensembles. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also yield information about the circuit's internal structure and function.
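A minimal sketch of this general model class is given below: a small convolutional network mapping stimulus movies to non-negative firing rates, scored with a Poisson likelihood. The layer sizes, the Softplus output, and the random data are assumptions; this does not reproduce the authors' architecture or their latent-noise and recurrent variants.

```python
# Illustrative CNN mapping stimulus movies to non-negative firing rates with a
# Poisson likelihood (layer sizes, Softplus output, and random data are
# assumptions; this is not the authors' architecture).
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    def __init__(self, n_cells=8, n_frames=40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 8, kernel_size=15),  # temporal frames as input channels
            nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=9),
            nn.ReLU(),
        )
        self.readout = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(n_cells),
            nn.Softplus(),                           # keeps predicted rates non-negative
        )

    def forward(self, stimulus):
        return self.readout(self.features(stimulus))

model = RetinaCNN()
stimulus = torch.randn(16, 40, 50, 50)               # batch of 40-frame, 50x50 movies
spike_counts = torch.poisson(torch.ones(16, 8))      # placeholder recorded spike counts
rates = model(stimulus)
loss = nn.PoissonNLLLoss(log_input=False)(rates, spike_counts)
print(rates.shape, float(loss))
```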
Local field potentials (LFPs) sampled with extracellular electrodes are frequently used as a measure of population neuronal activity. However, relating such measurements to underlying neuronal behaviour and connectivity is non-trivial. To help study this link, we developed the Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX). We first identified a reduced neuron model that retained the spatial and frequency filtering characteristics of extracellular potentials from neocortical neurons. We then developed VERTEX as an easy-to-use Matlab tool for simulating LFPs from large populations (>100 000 neurons). A VERTEX-based simulation successfully reproduced features of the LFPs from an in vitro multi-electrode array recording of macaque neocortical tissue. Our model, with virtual electrodes placed anywhere in 3D, allows direct comparisons with the in vitro recording setup. We envisage that VERTEX will stimulate experimentalists, clinicians, and computational neuroscientists to use models to understand the mechanisms underlying measured brain dynamics in health and disease.
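For orientation, the sketch below shows the standard point-source volume-conductor approximation often used to compute extracellular potentials from membrane currents. It is a generic Python illustration, not VERTEX's Matlab interface, and the conductivity, currents, and positions are placeholders.

```python
# Generic point-source volume-conductor approximation for extracellular
# potentials (a Python illustration, not VERTEX's Matlab interface; the
# conductivity, currents, and positions are placeholders).
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 1000, 2000
sigma = 0.3                                          # extracellular conductivity, S/m

positions = rng.uniform(-5e-4, 5e-4, (n_neurons, 3))             # neuron positions, metres
currents = 1e-9 * rng.standard_normal((n_neurons, n_samples))    # membrane currents, amperes
electrode = np.array([0.0, 0.0, 1e-4])                           # virtual electrode at 100 um

# phi(t) = sum_n I_n(t) / (4 * pi * sigma * r_n)
distances = np.linalg.norm(positions - electrode, axis=1)
lfp = (currents / (4 * np.pi * sigma * distances[:, None])).sum(axis=0)
print(lfp.shape)  # simulated potential time series at the virtual electrode
```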