
Differential response of the retinal neural code with respect to the sparseness of natural images

Submitted by Laurent Perrinet
Publication date: 2016
Research field: Biology
Paper language: English
Author: Cesar Ravello





Natural images follow statistics inherited from the structure of our physical (visual) environment. In particular, a prominent facet of this structure is that images can be described by a relatively sparse number of features. To investigate the role of this sparseness in the efficiency of the neural code, we designed a new class of random textured stimuli with a controlled sparseness value inspired by measurements of natural images. We then tested the impact of this sparseness parameter on the firing pattern observed in a population of retinal ganglion cells recorded ex vivo in the retina of a rodent, the Octodon degus. These recordings showed in particular that the reliability of spike timings varies with sparseness, following globally a trend similar to the distribution of sparseness statistics observed in natural images. These results suggest that the code represented in the spike pattern of ganglion cells may adapt to this aspect of the statistics of natural images.
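As a concrete illustration of the kind of analysis the abstract describes, the sketch below computes one common spike-timing reliability measure: the mean pairwise correlation of Gaussian-smoothed spike trains across repeated presentations of the same stimulus. The paper does not specify its exact metric here, so the function, the smoothing width and the toy data are assumptions.

```python
import numpy as np

def spike_timing_reliability(trials, sigma_bins=2):
    """Mean pairwise correlation of Gaussian-smoothed spike trains across repeats.

    trials : array of shape (n_repeats, n_time_bins), binned spike counts for one
    cell under repeated presentations of the same texture. This is one common
    reliability measure; the paper's exact metric may differ.
    """
    n_repeats, _ = trials.shape
    # Gaussian smoothing kernel, truncated at +/- 3 sigma
    t = np.arange(-3 * sigma_bins, 3 * sigma_bins + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    kernel /= kernel.sum()
    smoothed = np.array([np.convolve(tr, kernel, mode="same") for tr in trials])
    # Average correlation over all distinct pairs of repeats
    corrs = []
    for i in range(n_repeats):
        for j in range(i + 1, n_repeats):
            a, b = smoothed[i], smoothed[j]
            if a.std() > 0 and b.std() > 0:
                corrs.append(np.corrcoef(a, b)[0, 1])
    return np.mean(corrs) if corrs else np.nan

# Toy example: 20 repeats, 500 time bins of Poisson spikes around a shared rate profile
rng = np.random.default_rng(0)
rate = 0.05 + 0.1 * (rng.random(500) < 0.05)
trials = rng.poisson(rate, size=(20, 500))
print(spike_timing_reliability(trials))
```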




Read also

A central challenge in neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic transmission and spiking dynamics present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs). Moreover, we find two additional surprising properties of CNNs: they are less susceptible to overfitting than their LN counterparts when trained on small amounts of data, and they generalize better when tested on stimuli drawn from a different distribution (e.g. between natural scenes and white noise). Examination of trained CNNs reveals several properties. First, a richer set of feature maps is necessary for predicting the responses to natural scenes compared to white noise. Second, temporally precise responses to slowly varying inputs originate from feedforward inhibition, similar to known retinal mechanisms. Third, the injection of latent noise sources in intermediate layers enables our model to capture the sub-Poisson spiking variability observed in retinal ganglion cells. Fourth, augmenting our CNNs with recurrent lateral connections enables them to capture contrast adaptation as an emergent property of accurately describing retinal responses to natural scenes. These methods can be readily generalized to other sensory modalities and stimulus ensembles. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also yield information about the circuit's internal structure and function.
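The sketch below shows in outline what such a CNN encoding model can look like: a small stack of convolutional layers followed by a fully connected readout with a softplus nonlinearity, trained with a Poisson loss. The layer sizes, filter counts and names are illustrative assumptions, not the architecture reported in the abstract.

```python
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    """Minimal CNN mapping a stimulus clip to firing rates of n_cells.

    Layer sizes are illustrative, not the architecture of the paper.
    Input: (batch, n_frames, H, W), with the frames treated as input channels.
    """
    def __init__(self, n_frames=40, n_cells=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 8, kernel_size=15), nn.Softplus(),
            nn.Conv2d(8, 8, kernel_size=11), nn.Softplus(),
        )
        self.readout = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(n_cells),   # infers the flattened size at first call
            nn.Softplus(),            # non-negative predicted firing rates
        )

    def forward(self, x):
        return self.readout(self.features(x))

model = RetinaCNN()
clip = torch.randn(4, 40, 50, 50)   # 4 clips of 40 frames, 50x50 pixels
rates = model(clip)                 # (4, 10) predicted firing rates
loss = nn.PoissonNLLLoss(log_input=False)(rates, torch.rand(4, 10))
```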
Natural images follow statistics inherited from the structure of our physical (visual) environment. In particular, a prominent facet of this structure is that images can be described by a relatively sparse number of features. We designed a sparse coding algorithm biologically inspired by the architecture of the primary visual cortex. We show here that the coefficients of this representation exhibit a heavy-tailed distribution. For each image, the parameters of this distribution characterize sparseness and vary from image to image. To investigate the role of this sparseness, we designed a new class of random textured stimuli with a controlled sparseness value inspired by our measurements on natural images. We then provide a method to synthesize random texture images whose sparseness statistics match those of a given class of natural images, and provide perspectives for their use in neurophysiology.
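A minimal way to synthesize a texture with controlled sparseness, in the spirit of this abstract, is to sum a variable number of randomly placed, oriented Gabor-like features with heavy-tailed amplitudes. The sketch below does exactly that; the generative recipe, parameter names and Pareto-distributed amplitudes are assumptions rather than the authors' actual synthesis method.

```python
import numpy as np

def gabor(size, x0, y0, theta, freq=0.15, sigma=6.0):
    """Single oriented Gabor patch on a size x size grid."""
    y, x = np.mgrid[0:size, 0:size]
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def sparse_texture(size=128, n_active=50, tail_exponent=1.5, seed=0):
    """Random texture built from n_active Gabors with heavy-tailed amplitudes.

    Sparseness is controlled by n_active and tail_exponent (Pareto shape);
    both knobs are illustrative stand-ins for the paper's sparseness parameter.
    """
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    amplitudes = rng.pareto(tail_exponent, n_active) + 1.0   # heavy-tailed weights
    signs = rng.choice([-1.0, 1.0], n_active)
    for a, s in zip(amplitudes, signs):
        img += s * a * gabor(size,
                             x0=rng.uniform(0, size), y0=rng.uniform(0, size),
                             theta=rng.uniform(0, np.pi))
    return img / np.abs(img).max()

texture_sparse = sparse_texture(n_active=20)    # few features: sparser image
texture_dense = sparse_texture(n_active=500)    # many features: denser image
```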
Primary visual cortex (V1) is the first stage of cortical image processing, and a major effort in systems neuroscience is devoted to understanding how it encodes information about visual stimuli. Within V1, many neurons respond selectively to edges of a given preferred orientation: these are known as simple or complex cells, and they are well studied. Other neurons respond to localized center-surround image features. Still others respond selectively to certain image stimuli, but the specific features that excite them are unknown. Moreover, even for simple and complex cells, the best-understood V1 neurons, it is challenging to predict how they will respond to natural image stimuli. Thus, there are important gaps in our understanding of how V1 encodes images. To fill these gaps, we train deep convolutional neural networks to predict the firing rates of V1 neurons in response to natural image stimuli, and find that 15% of these neurons are within 10% of their theoretical limit of predictability. For these well-predicted neurons, we invert the predictor network to identify the image features (receptive fields) that cause the V1 neurons to spike. In addition to those with previously characterized receptive fields (Gabor wavelet and center-surround), we identify neurons that respond predictably to higher-level textural image features that are not localized to any particular region of the image.
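One standard way to "invert" a trained predictor network, as mentioned in the abstract, is gradient ascent on the input image so as to maximize a chosen model neuron's predicted rate. The sketch below implements that generic procedure with a toy stand-in model; the regularization, step sizes and model are assumptions and not necessarily the authors' inversion method.

```python
import torch

def preferred_stimulus(model, neuron_idx, image_shape=(1, 1, 50, 50),
                       n_steps=200, lr=0.05):
    """Gradient-ascent image that maximizes one model neuron's predicted rate.

    `model` is assumed to map an image tensor to a vector of predicted firing
    rates; this is a generic network-inversion sketch, not the paper's recipe.
    """
    img = (0.01 * torch.randn(image_shape)).requires_grad_(True)
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        response = model(img)[0, neuron_idx]
        # Maximize the response, lightly penalizing pixel energy
        # so the image stays bounded.
        loss = -response + 1e-3 * (img ** 2).mean()
        loss.backward()
        optimizer.step()
    return img.detach().squeeze().numpy()

# Toy stand-in predictor so the sketch runs end to end.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 4, kernel_size=9), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(4, 16), torch.nn.Softplus(),
)
rf_estimate = preferred_stimulus(model, neuron_idx=3)
```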
Laurent Perrinet, 2009
If modern computers are sometimes superior to humans in specialized tasks such as playing chess or browsing a large database, they cannot beat the efficiency of biological vision for simple tasks such as recognizing and following an object in a complex cluttered background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We illustrate this on static natural images by showing that, in a signal matching framework, an L/LN (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code becomes redundant when using an over-complete basis, as is necessary for modeling the primary visual cortex; we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression, that is, the reconstruction quality as a function of the coding length. This corresponds to a modification of the Matching Pursuit algorithm where the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We focus in particular on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.
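For reference, plain Matching Pursuit, whose ArgMax selection step is what COMP modifies, can be written in a few lines. The sketch below is the standard greedy algorithm on a random over-complete dictionary, not the competition-optimized variant described in the abstract.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy Matching Pursuit: at each step pick the dictionary atom with the
    largest correlation to the residual (the ArgMax step that COMP modifies),
    accumulate its coefficient, and subtract its contribution.

    dictionary : (n_atoms, n_samples) array of unit-norm atoms.
    Returns the sparse coefficient vector.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_iter):
        correlations = dictionary @ residual
        best = np.argmax(np.abs(correlations))          # ArgMax selection
        coeffs[best] += correlations[best]
        residual -= correlations[best] * dictionary[best]
    return coeffs

# Over-complete random dictionary: 256 unit-norm atoms for a 64-sample signal
rng = np.random.default_rng(1)
D = rng.standard_normal((256, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = D[7] * 2.0 + D[42] * -1.5                           # signal built from 2 atoms
print(np.nonzero(matching_pursuit(x, D, n_iter=5))[0])  # should include 7 and 42
```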
How natural communication sounds are spatially represented across the inferior colliculus, the main center of convergence for auditory information in the midbrain, is not known. The neural representation of acoustic stimuli results from the interplay of locally differing input and the organization of spectral and temporal neural preferences that change gradually across the nucleus. This raises the question of how similar the neural representation of communication sounds is across these gradients of neural preferences, and whether it also changes gradually. Multi-unit cluster spike trains were recorded from guinea pigs presented with a spectrotemporally rich set of eleven species-specific communication sounds. Using cross-correlation, we analyzed the response similarity of spiking activity across a broad frequency range for similarly and differently frequency-tuned neurons. Furthermore, we separated the contribution of the stimulus to the correlations to investigate whether similarity is attributable only to the stimulus, or whether interactions exist between the multi-unit clusters that lead to correlations, and whether these follow the same representation as the response similarity. We found that the similarity of responses depends on the neurons' spatial distance for both similarly and differently frequency-tuned neurons, and that similarity decreases gradually with spatial distance. Significant neural correlations exist and contribute to the response similarity. Our findings suggest that, for multi-unit clusters in the mammalian inferior colliculus, the gradual response similarity with spatial distance to natural complex sounds is shaped by neural interactions and the gradual organization of neural preferences.
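A common way to separate the stimulus-driven part of a cross-correlation from putative neural interactions is a shift-predictor correction, sketched below for two binned multi-unit spike trains. The exact correction used in the study may differ, so treat the function and toy data as assumptions.

```python
import numpy as np

def corrected_cross_correlation(spikes_a, spikes_b, max_lag=20):
    """Trial-averaged cross-correlogram with a shift-predictor correction.

    spikes_a, spikes_b : (n_trials, n_bins) binned spike counts of two
    multi-unit clusters recorded during the same stimulus repeats.
    Subtracting the shift predictor (correlating cluster A with cluster B from
    the *next* trial) removes the part of the correlation driven by the
    stimulus alone, leaving putative neural interactions. This is one standard
    correction, not necessarily the exact procedure of the study.
    """
    def xcorr(a, b):
        lags = range(-max_lag, max_lag + 1)
        return np.array([np.sum(a * np.roll(b, lag)) for lag in lags])

    n_trials = spikes_a.shape[0]
    raw = np.mean([xcorr(spikes_a[t], spikes_b[t]) for t in range(n_trials)], axis=0)
    shift = np.mean([xcorr(spikes_a[t], spikes_b[(t + 1) % n_trials])
                     for t in range(n_trials)], axis=0)
    return raw - shift

# Toy data: two clusters sharing the same stimulus-driven rate profile
rng = np.random.default_rng(2)
stim_rate = 0.05 + 0.2 * (rng.random(400) < 0.05)
a = rng.poisson(stim_rate, size=(30, 400))
b = rng.poisson(stim_rate, size=(30, 400))
print(corrected_cross_correlation(a, b).round(2))
```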