
Visual response properties of MSTd emerge from a sparse population code

Posted by Michael Beyeler
Publication date: 2017
Research field: Biology
Paper language: English





Neurons in the dorsal subregion of the medial superior temporal (MSTd) area respond to large, complex patterns of retinal flow, implying a role in the analysis of self-motion. Some neurons are selective for the expanding radial motion that occurs as an observer moves through the environment (heading), and computational models can account for this finding. However, ample evidence suggests that MSTd neurons exhibit a continuum of visual response selectivity to large-field motion stimuli, and the underlying computational principles by which these response properties arise remain poorly understood. Here we describe a computational model of MSTd based on the hypothesis that neurons in MSTd efficiently encode the continuum of large-field retinal flow patterns on the basis of inputs received from neurons in MT, with receptive fields that resemble basis vectors recovered with nonnegative matrix factorization (NMF). These assumptions are sufficient to quantitatively simulate neurophysiological response properties of MSTd cells such as radial, circular, and spiral motion tuning, suggesting that these properties might simply be a by-product of MSTd neurons performing dimensionality reduction on their inputs. At the population level, model MSTd accurately predicts heading using a sparse distributed code, consistent with the idea that biological MSTd might operate in a sparseness regime well-suited to efficiently encode a number of self-motion variables. The present work provides an alternative to the template-model view of MSTd, and offers a biologically plausible account of the receptive field structure across a wide range of visual response properties in MSTd.
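To make the core computation concrete, here is a minimal Python sketch of the NMF step using scikit-learn. The simulated MT responses below are a toy placeholder rather than the paper's actual MT model, and the population sizes and number of components are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy stand-in for MT population responses: one row per large-field flow
# stimulus, one column per MT-like unit (nonnegative rates). The paper uses
# model MT responses to optic flow; random data is enough to show the pipeline.
n_stimuli, n_mt_units = 500, 400
mt_responses = rng.gamma(shape=2.0, scale=1.0, size=(n_stimuli, n_mt_units))

# Factorize into a small set of nonnegative basis vectors. Each basis vector
# (row of mt_weights) plays the role of one model MSTd unit's weight pattern
# over its MT inputs; the number of components here is illustrative.
n_mstd_units = 64
nmf = NMF(n_components=n_mstd_units, init="nndsvda", max_iter=500)
mstd_activations = nmf.fit_transform(mt_responses)  # (n_stimuli, n_mstd_units)
mt_weights = nmf.components_                        # (n_mstd_units, n_mt_units)

# Population sparseness per stimulus (Vinje-Gallant style measure): values
# near 1 mean only a few model MSTd units are active for a given stimulus.
n = mstd_activations.shape[1]
mean_a = mstd_activations.mean(axis=1)
mean_a2 = (mstd_activations ** 2).mean(axis=1) + 1e-12
sparseness = (1 - mean_a ** 2 / mean_a2) / (1 - 1 / n)
print("mean population sparseness:", sparseness.mean().round(3))
```

In this framing, heading would be decoded from `mstd_activations`, and the claim of the paper is that sparse nonnegative factors of MT input are enough to reproduce radial, circular, and spiral tuning.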




Read also

Cesar Ravello (2016)
Natural images follow statistics inherited from the structure of our physical (visual) environment. In particular, a prominent facet of this structure is that images can be described by a relatively small number of features. To investigate the role of this sparseness in the efficiency of the neural code, we designed a new class of random textured stimuli with a controlled sparseness value inspired by measurements of natural images. We then tested the impact of this sparseness parameter on the firing patterns observed in a population of retinal ganglion cells recorded ex vivo in the retina of a rodent, Octodon degus. These recordings showed in particular that the reliability of spike timings varies with sparseness, following a global trend similar to the distribution of sparseness statistics observed in natural images. These results suggest that the code carried in the spike patterns of ganglion cells may be adapted to this aspect of natural image statistics.
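As an illustration of one way to parameterize stimulus sparseness, the sketch below builds random textures as sums of oriented gratings in which a controlled fraction of feature coefficients is zeroed. This is a hypothetical construction for intuition, not necessarily the generation procedure used in the study.

```python
import numpy as np

def sparse_texture(size=64, n_features=256, sparseness=0.9, seed=None):
    """Random texture as a sum of oriented gratings in which a controlled
    fraction of feature coefficients is zeroed out (higher value = sparser).
    A hypothetical construction for intuition; the study's stimuli may differ."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size] / size
    image = np.zeros((size, size))
    for _ in range(n_features):
        if rng.random() < sparseness:
            continue  # this feature's coefficient is set to zero
        freq = rng.uniform(1, 8)          # cycles per image
        theta = rng.uniform(0, np.pi)     # orientation
        phase = rng.uniform(0, 2 * np.pi)
        amp = rng.normal()
        image += amp * np.cos(
            2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)) + phase
        )
    return image

dense_tex = sparse_texture(sparseness=0.2, seed=1)    # many active features
sparse_tex = sparse_texture(sparseness=0.95, seed=1)  # few active features
```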
Primary visual cortex (V1) is the first stage of cortical image processing, and a major effort in systems neuroscience is devoted to understanding how it encodes information about visual stimuli. Within V1, many neurons respond selectively to edges of a given preferred orientation: these are known as simple or complex cells, and they are well studied. Other neurons respond to localized center-surround image features. Still others respond selectively to certain image stimuli, but the specific features that excite them are unknown. Moreover, even for simple and complex cells -- the best-understood V1 neurons -- it is challenging to predict how they will respond to natural image stimuli. Thus, there are important gaps in our understanding of how V1 encodes images. To fill these gaps, we train deep convolutional neural networks to predict the firing rates of V1 neurons in response to natural image stimuli, and find that 15% of these neurons are within 10% of their theoretical limit of predictability. For these well-predicted neurons, we invert the predictor network to identify the image features (receptive fields) that cause the V1 neurons to spike. In addition to neurons with previously characterized receptive fields (Gabor wavelet and center-surround), we identify neurons that respond predictably to higher-level textural image features that are not localized to any particular region of the image.
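The inversion step can be illustrated with generic gradient-based preferred-stimulus search: optimize an input image to maximize one predicted firing rate. In the sketch below, a tiny random CNN stands in for the trained predictor; the architecture, norm penalty, and hyperparameters are illustrative assumptions, not the paper's exact procedure.

```python
import torch

# Tiny random CNN standing in for the trained image -> firing-rate predictor;
# the real predictor in the paper is trained on V1 recordings.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, 9), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 9), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(32, 10),  # 10 hypothetical output neurons
)

def preferred_stimulus(model, neuron, size=64, steps=200, lr=0.05):
    """Gradient-ascent 'inversion': optimize an image to maximize one output
    unit's predicted rate. A generic visualization technique; the penalty and
    hyperparameters are illustrative."""
    img = (0.01 * torch.randn(1, 1, size, size)).requires_grad_()
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(img)[0, neuron]          # maximize the predicted rate
        loss = loss + 1e-3 * img.pow(2).sum()  # mild norm penalty bounds the image
        loss.backward()
        opt.step()
    return img.detach().squeeze()

receptive_field = preferred_stimulus(model, neuron=3)
```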
Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpreting high-dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and the choice of variability model is often treated as arbitrary. We investigate two competing models: the Doubly Stochastic Poisson (DSP) model, which assumes stochasticity at spike generation, and the Rectified Gaussian (RG) model, which traces variability back to membrane potential variance, to analyze stimulus-dependent modulation of response statistics. Using a model of a pair of neurons, we demonstrate that the two models predict similar single-cell statistics. However, the DSP and RG models make contradicting predictions about the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus-change-related modulations in response statistics. We use unit recordings from the primary visual cortex of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with the joint statistics. These results suggest that models using Poisson-like variability may fail to capture important properties of response statistics. We argue that modelling stochasticity at the level of membrane potentials provides an efficient strategy for modelling correlations.
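A minimal simulation makes the contrast between the two model classes concrete: in the DSP case variability enters at spike generation on top of a fluctuating rate, while in the RG case it lives entirely in correlated membrane potentials. The parameter values below are illustrative, not fitted to the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20000
mean = np.array([2.0, 2.0])                # shared mean drive, two neurons
cov = np.array([[1.0, 0.6], [0.6, 1.0]])   # correlated trial-to-trial input

# Doubly Stochastic Poisson: a correlated fluctuating rate on each trial,
# then conditionally independent Poisson spike generation.
log_rates = rng.multivariate_normal(np.log(mean), 0.1 * cov, n_trials)
dsp_counts = rng.poisson(np.exp(log_rates))

# Rectified Gaussian: correlated membrane potentials, and the response is
# the rectified potential itself, so all stochasticity is subthreshold.
potentials = rng.multivariate_normal(mean, cov, n_trials)
rg_counts = np.maximum(potentials, 0.0)

for name, c in [("DSP", dsp_counts), ("RG", rg_counts)]:
    print(f"{name}: mean={c.mean(0).round(2)}, var={c.var(0).round(2)}, "
          f"pairwise corr={np.corrcoef(c.T)[0, 1]:.3f}")
```

Comparing the printed single-cell statistics and pairwise correlations across the two simulations is exactly the kind of joint-statistics test the abstract describes.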
The ability of the organism to distinguish between various stimuli is limited by the structure and noise in the population code of its sensory neurons. Here we infer a distance measure on the stimulus space directly from the recorded activity of 100 neurons in the salamander retina. In contrast to previously used measures of stimulus similarity, this neural metric tells us how distinguishable a pair of stimulus clips is to the retina, given the noise in the neural population response. We show that the retinal distance strongly deviates from Euclidean, or any static metric, yet has a simple structure: we identify the stimulus features that the neural population is jointly sensitive to, and show the SVM-like kernel function relating the stimulus and neural response spaces. We show that the non-Euclidean nature of the retinal distance has important consequences for neural decoding.
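In the same spirit of "distance = distinguishability given response noise", here is a sketch of a generic discriminability-based distance (a Fisher linear-discriminant d') computed from repeated population responses. The paper's neural metric is constructed differently, so treat this only as an intuition pump; the data below are synthetic.

```python
import numpy as np

def neural_distance(resp_a, resp_b, reg=1e-3):
    """Discriminability-based distance between two stimuli from repeated
    population responses (trials x neurons): a Fisher linear-discriminant d'.
    Illustrative only; not the metric derived in the paper."""
    mu = resp_a.mean(axis=0) - resp_b.mean(axis=0)
    pooled = 0.5 * (np.cov(resp_a.T) + np.cov(resp_b.T))
    pooled += reg * np.eye(pooled.shape[0])  # regularize the noise estimate
    w = np.linalg.solve(pooled, mu)          # optimal linear readout direction
    return float(np.sqrt(mu @ w))            # d' along that direction

rng = np.random.default_rng(0)
resp_a = rng.poisson(5.0, size=(200, 100))  # 200 trials, 100 hypothetical neurons
resp_b = rng.poisson(5.5, size=(200, 100))
print("neural distance:", round(neural_distance(resp_a, resp_b), 2))
```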
Simulating and imitating the neuronal networks of humans or mammals is a popular topic that has been explored for many years in the fields of pattern recognition and computer vision. Inspired by neuronal conduction characteristics in the primary visual cortex of cats, pulse-coupled neural networks (PCNNs) can exhibit synchronous oscillation behavior and can process digital images without training. However, according to studies of single cells in the cat primary visual cortex, when a neuron is stimulated by an external periodic signal, its interspike-interval (ISI) distribution is multimodal. This phenomenon cannot be explained by existing PCNN models. By analyzing the working mechanism of the PCNN, we present a novel neuron model of the primary visual cortex: the continuous-coupled neural network (CCNN). Our model inherits the threshold exponential decay and synchronous pulse oscillation properties of the original PCNN model, and it can exhibit chaotic behavior consistent with recordings from cat primary visual cortex neurons. Our CCNN model is therefore closer to real visual neural networks. For image segmentation tasks, an algorithm based on the CCNN model outperforms state-of-the-art visual cortex neural network models. The strength of our approach is that it helps neurophysiologists further understand how the primary visual cortex works and can be used to quantitatively predict the temporal-spatial behavior of real neural networks. The CCNN may also inspire engineers to create brain-inspired deep learning networks for artificial intelligence purposes.
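For intuition, the sketch below implements a single unit with the usual PCNN feeding/linking/threshold structure, where the hard step nonlinearity is replaced by a sigmoid so the output is continuous. All parameter values and the exact update form are illustrative assumptions; the published CCNN couples 2-D lattices of such units for image processing.

```python
import numpy as np

def ccnn_neuron(stimulus, steps=300, af=0.1, al=0.3, at=0.2,
                vf=0.5, vl=0.5, vt=20.0, beta=0.2, k=5.0):
    """Single continuous-coupled neuron: the usual PCNN feeding/linking/
    threshold structure, with the hard step replaced by a sigmoid so the
    output is continuous. All parameter values are illustrative."""
    F = L = Y = 0.0
    theta = 1.0
    outputs = []
    for _ in range(steps):
        F = np.exp(-af) * F + vf * Y + stimulus      # feeding input
        L = np.exp(-al) * L + vl * Y                 # linking input
        U = F * (1.0 + beta * L)                     # internal activity
        Y = 1.0 / (1.0 + np.exp(-k * (U - theta)))   # continuous output
        theta = np.exp(-at) * theta + vt * Y         # decaying threshold
        outputs.append(Y)
    return np.array(outputs)

trace = ccnn_neuron(stimulus=1.0)  # oscillatory output trace of one unit
```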