
Natural Image Coding in V1: How Much Use is Orientation Selectivity?

Published by Matthias Bethge
Publication date: 2008
Research field: Biology
Paper language: English





Orientation selectivity is the most striking feature of simple cell coding in V1, and it has been shown to emerge from the reduction of higher-order correlations in natural images in a large variety of statistical image models. The most parsimonious of these models is linear Independent Component Analysis (ICA), whereas second-order decorrelation transformations such as Principal Component Analysis (PCA) do not yield oriented filters. Because of this finding, it has been suggested that the emergence of orientation selectivity may be explained by higher-order redundancy reduction. To assess the tenability of this hypothesis, it is an important empirical question how much more redundancy can be removed with ICA than with PCA or other second-order decorrelation methods. This question has not yet been settled, as contradictory results have been reported over the last ten years, ranging from less than five to more than a hundred percent extra gain for ICA. Here, we aim to resolve this conflict by presenting a very careful and comprehensive analysis using three evaluation criteria related to redundancy reduction: in addition to the multi-information and the average log-loss, we compute, for the first time, complete rate-distortion curves for ICA in comparison with PCA. Without exception, we find that the advantage of the ICA filters is surprisingly small. Furthermore, we show that a simple spherically symmetric distribution with only two parameters can fit the data even better than the probabilistic model underlying ICA. Since spherically symmetric models are agnostic with respect to the specific filter shapes, we conclude that orientation selectivity is unlikely to play a critical role for redundancy reduction.
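To make the comparison concrete, the snippet below is a minimal sketch (not the authors' code) of how the extra redundancy reduction of ICA over PCA can be estimated on natural image patches. It assumes scikit-learn and scikit-image are available; the test image, patch size, and histogram bin count are illustrative choices. After whitening, the ICA representation differs from the PCA-whitened one only by a rotation, so the joint entropy is unchanged and the multi-information difference reduces to the difference in summed marginal entropies.

```python
# Minimal sketch (not the authors' code): extra redundancy reduction of ICA
# over PCA on natural image patches, estimated via summed marginal entropies.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.feature_extraction.image import extract_patches_2d
from skimage import data

img = data.camera().astype(float)
patches = extract_patches_2d(img, (8, 8), max_patches=20000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=0)                      # center each pixel dimension

def summed_marginal_entropy(Y, bins=100):
    """Sum of 1-D differential entropies (histogram estimate, in bits)."""
    H = 0.0
    for y in Y.T:
        p, edges = np.histogram(y, bins=bins, density=True)
        w = np.diff(edges)
        nz = p > 0
        H += -np.sum(p[nz] * np.log2(p[nz]) * w[nz])
    return H

# PCA (second-order decorrelation) vs. ICA (higher-order redundancy reduction).
Y_pca = PCA(whiten=True, random_state=0).fit_transform(X)
# In older scikit-learn versions use whiten=True instead of "unit-variance".
Y_ica = FastICA(whiten="unit-variance", random_state=0, max_iter=1000).fit_transform(X)

# Both representations are whitened versions of the same data, so their joint
# entropies agree and the multi-information difference is the difference in
# summed marginal entropies.
gain = summed_marginal_entropy(Y_pca) - summed_marginal_entropy(Y_ica)
print(f"extra redundancy reduction of ICA over PCA: {gain:.3f} bits/patch")
```

The printed number is only a crude histogram-based estimate; the paper's conclusion rests on the multi-information, the average log-loss, and full rate-distortion curves.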




Read also

Determining how much of the sensory information carried by a neural code contributes to behavioral performance is key to understanding sensory function and neural information flow. However, there are as yet no analytical tools to compute this information, which lies at the intersection between sensory coding and behavioral readout. Here we develop a novel measure, termed the information-theoretic intersection information $I_{II}(S;R;C)$, that quantifies how much of the sensory information carried by a neural response R is used for behavior during perceptual discrimination tasks. Building on the Partial Information Decomposition framework, we define $I_{II}(S;R;C)$ as the part of the mutual information between the stimulus S and the response R that also informs the consequent behavioral choice C. We compute $I_{II}(S;R;C)$ in the analysis of two experimental cortical datasets to show how this measure can be used to compare quantitatively the contributions of spike timing and spike rates to task performance, and to identify brain areas or neural populations that specifically transform sensory information into choice.
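Computing the full Partial Information Decomposition is beyond a short example, but the intersection information is intended not to exceed any of the pairwise mutual informations I(S;R), I(R;C), and I(S;C). The hedged sketch below (not the authors' implementation) estimates these plug-in quantities from discrete trial data and reports their minimum as a simple upper bound; the stimulus, response, and choice sequences are synthetic and purely illustrative.

```python
# Hedged sketch: plug-in pairwise mutual informations that upper-bound the
# intersection information I_II(S;R;C). Synthetic trial data, illustrative only.
import numpy as np

def mutual_information(x, y):
    """Plug-in mutual information (in bits) between two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    xs, ys = np.unique(x), np.unique(y)
    pxy = np.zeros((len(xs), len(ys)))
    for i, xv in enumerate(xs):
        for j, yv in enumerate(ys):
            pxy[i, j] = np.mean((x == xv) & (y == yv))
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
n_trials = 5000
S = rng.integers(0, 2, n_trials)                        # binary stimulus
R = np.where(rng.random(n_trials) < 0.8, S, 1 - S)      # noisy neural response
C = np.where(rng.random(n_trials) < 0.9, R, 1 - R)      # choice read out from R

bounds = {"I(S;R)": mutual_information(S, R),
          "I(R;C)": mutual_information(R, C),
          "I(S;C)": mutual_information(S, C)}
for name, value in bounds.items():
    print(f"{name} = {value:.3f} bits")
print(f"upper bound on I_II(S;R;C): {min(bounds.values()):.3f} bits")
```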
Astrocytes affect neural transmission by tightly controlling, via glutamate transporters, the glutamate concentration in the direct vicinity of the synaptic cleft as well as extracellular glutamate. Their relevance for information representation has been supported by in-vivo studies in ferret and mouse primary visual cortex. In ferret, blocking glutamate transport pharmacologically broadened tuning curves and enhanced the response at the preferred orientation; in knock-out mice with reduced expression of glutamate transporters, sharpened tuning was observed. It is, however, unclear how focal and ambient changes in glutamate concentration affect stimulus representation. Here we develop a computational framework that allows the investigation of synaptic and extrasynaptic effects of glutamate uptake on orientation tuning in recurrently connected network models with pinwheel-domain (ferret) or salt-and-pepper (mouse) organization. The model proposes that glutamate uptake shapes information representation when it affects the relative contribution of excitatory and inhibitory neurons to network activity: strengthening the contribution of excitatory neurons generally broadens tuning and elevates the response, whereas strengthening the contribution of inhibitory neurons can sharpen tuning. In addition, local representational topology plays a role: in the pinwheel-domain model, effects were strongest within domains, regions where neighboring neurons share preferred orientations, and were weaker around pinwheels as well as within salt-and-pepper networks. Our model proposes that the pharmacological intervention in ferret increases the contribution of excitatory cells, while the reduced transporter expression in mouse increases the contribution of inhibitory cells to network activity.
Laurent Perrinet, 2009
If modern computers are sometimes superior to humans in specialized tasks such as playing chess or browsing a large database, they cannot beat the efficiency of biological vision for tasks as simple as recognizing and following an object against a complex, cluttered background. In this paper we present our attempt at outlining the dynamical, parallel, and event-based representation for vision in the architecture of the central nervous system. We illustrate this on static natural images by showing that, in a signal-matching framework, an L/LN (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code becomes redundant when using an over-complete basis, as is necessary for modeling the primary visual cortex; we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression, that is, the reconstruction quality as a function of the coding length. This corresponds to a modification of the Matching Pursuit algorithm in which the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We focus in particular on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.
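For reference, here is a minimal sketch of standard Matching Pursuit, the greedy sparse-coding scheme the COMP algorithm builds on; the competition-optimized ArgMax step described in the abstract is not reproduced here, and the dictionary and signal are random placeholders.

```python
# Minimal sketch of standard Matching Pursuit (not the COMP variant): greedily
# pick the dictionary atom most correlated with the residual, record its
# coefficient, and subtract its contribution.
import numpy as np

def matching_pursuit(x, D, n_iter=10):
    """Return sparse coefficients a such that D @ a approximates x.

    D has unit-norm columns (atoms); x is the signal to encode.
    """
    residual = x.astype(float).copy()
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual            # correlation of each atom with residual
        k = np.argmax(np.abs(corr))      # ArgMax step: best-matching atom
        a[k] += corr[k]                  # accumulate its coefficient
        residual -= corr[k] * D[:, k]    # explain away the selected atom
    return a, residual

# Toy usage with a random over-complete dictionary (placeholder data).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
x = rng.standard_normal(64)
a, r = matching_pursuit(x, D, n_iter=20)
print(f"nonzero coefficients: {np.count_nonzero(a)}, residual norm: {np.linalg.norm(r):.3f}")
```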
In this study, we analyzed the activity of monkey V1 neurons responding to grating stimuli of different orientations using inference methods for a time-dependent Ising model. The method provides optimal estimation of time-dependent neural interactions with credible intervals according to a sequential Bayes estimation algorithm. Furthermore, it allows us to trace the dynamics of macroscopic network properties such as entropy, sparseness, and fluctuation. Here we report that, in all examined stimulus conditions, pairwise interactions contribute to increasing sparseness and fluctuation. We then demonstrate that the orientation of the grating stimulus is in part encoded in the pairwise interactions of the neural populations. These results demonstrate the utility of the state-space Ising model in assessing the contributions of neural interactions during stimulus processing.
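The sketch below illustrates only the stationary pairwise Ising model that underlies this analysis, not the sequential Bayes / state-space inference used in the study: for a small toy population it enumerates all binary patterns to compute the normalizer, the entropy, and the mean activation (a simple sparseness proxy). All parameter values are illustrative.

```python
# Hedged sketch of a stationary pairwise Ising model (not the state-space
# inference from the study): exact enumeration for a small toy population.
import itertools
import numpy as np

def ising_summary(h, J):
    """h: (N,) biases; J: (N, N) symmetric couplings with zero diagonal."""
    N = len(h)
    patterns = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
    log_weight = patterns @ h + 0.5 * np.einsum("ki,ij,kj->k", patterns, J, patterns)
    p = np.exp(log_weight)
    p /= p.sum()                                  # normalize (partition function)
    entropy = -np.sum(p * np.log2(p))             # in bits
    mean_rate = p @ patterns.mean(axis=1)         # average fraction of active cells
    return entropy, mean_rate

rng = np.random.default_rng(0)
N = 8                                             # toy population size
h = rng.normal(-1.0, 0.5, N)                      # biases favouring silence
J = rng.normal(0.0, 0.3, (N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

H, rate = ising_summary(h, J)
print(f"entropy = {H:.2f} bits, mean activation = {rate:.3f}")
```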
Here we test our conceptual understanding of V1 function by asking two experimental questions: 1) How do neurons respond to the spatiotemporal structure contained in dynamic, natural scenes? 2) What is the true range of visual responsiveness and predictability of neural responses in an unbiased sample of neurons across all layers of cortex? We address these questions by recording responses to natural movie stimuli with 32-channel silicon probes. By simultaneously recording from cells in all layers, and by analyzing all recorded cells, we reduce the recording bias that results from hunting for neural responses evoked by drifting bars and gratings. A nonparametric model reveals that many visually responsive cells are not well captured by standard receptive field models. Using nonlinear Radial Basis Function (RBF) kernels in a support vector machine, we can explain the responses of some of these cells better than standard linear and phase-invariant complex cell models. This suggests that V1 neurons exhibit more complex and diverse responses than standard models can capture, ranging from simple and complex cells strongly driven by their classical receptive fields, to cells with more nonlinear receptive fields inferred from the nonparametric and RBF models, and cells that are not visually responsive despite robust firing.
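As a rough illustration of the model comparison described above (not the authors' pipeline), the snippet below fits a linear model and an RBF-kernel support vector regression to predict a response from stimulus features and compares held-out performance; the synthetic nonlinear data stand in for movie-derived features.

```python
# Rough illustration (not the authors' pipeline): linear vs. RBF-kernel SVR
# for predicting a response from stimulus features, on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 20))                        # stimulus features per frame
y = np.maximum(0, X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2)  # nonlinear "response"
y += 0.1 * rng.standard_normal(len(y))                     # observation noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = Ridge().fit(X_tr, y_tr)
rbf = SVR(kernel="rbf", C=1.0).fit(X_tr, y_tr)

print(f"linear R^2 on held-out data:  {linear.score(X_te, y_te):.3f}")
print(f"RBF-SVR R^2 on held-out data: {rbf.score(X_te, y_te):.3f}")
```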