Testing our conceptual understanding of V1 function


Abstract

Here we test our conceptual understanding of V1 function by asking two experimental questions: 1) How do neurons respond to the spatiotemporal structure contained in dynamic, natural scenes? and 2) What is the true range of visual responsiveness and predictability of neural responses obtained in an unbiased sample of neurons across all layers of cortex? We address these questions by recording responses to natural movie stimuli with 32-channel silicon probes. By simultaneously recording from cells in all layers, and including every recorded cell in our analysis, we reduce the recording bias that results from hunting for neural responses evoked by drifting bars and gratings. A nonparametric model reveals that many visually responsive cells do not appear to be captured by standard receptive field models. Using nonlinear Radial Basis Function (RBF) kernels in a support vector machine, we can explain the responses of some of these cells better than standard linear and phase-invariant complex cell models. This suggests that V1 neurons exhibit more complex and diverse responses than standard models can capture, ranging from simple and complex cells strongly driven by their classical receptive fields, to cells with more nonlinear receptive fields inferred from the nonparametric and RBF models, to cells that are not visually responsive despite robust firing.
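The following is a minimal sketch, not the authors' code, of how an RBF-kernel support vector machine can be fit to predict a neuron's binned spike counts from natural-movie frames, here using scikit-learn's SVR. The variable names, patch dimensions, preprocessing, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: each row is a flattened spatiotemporal movie patch
# (e.g. 16x16 pixels x 4 frames); each target is a binned spike count.
n_samples, patch_dim = 2000, 16 * 16 * 4
X = rng.standard_normal((n_samples, patch_dim))
y = rng.poisson(lam=2.0, size=n_samples).astype(float)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The RBF kernel allows nonlinear stimulus-response mappings beyond what
# a linear (simple-cell) or phase-invariant (complex-cell) model captures.
model = make_pipeline(
    StandardScaler(),
    SVR(kernel="rbf", C=1.0, gamma="scale", epsilon=0.1),
)
model.fit(X_train, y_train)

# Predictive performance on held-out frames (R^2 of predicted vs. observed
# spike counts) serves as the measure of response predictability.
print("held-out R^2:", model.score(X_test, y_test))
```

In practice the model would be fit per neuron and cross-validated, with the held-out prediction score compared against linear and complex-cell model fits.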
