
Intrinsic gain modulation and adaptive neural coding

Posted by: Sungho Hong
Publication date: 2008
Research field: Biology, Physics
Paper language: English
Author: Sungho Hong





In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. When the underlying system is fixed, we derive expressions relating the change in gain with respect to both the mean and the variance of the input to the receptive fields obtained by reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
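As a concrete illustration of the empirical linear/nonlinear model described above, here is a minimal Python sketch of reverse correlation: the linear filter is estimated as the spike-triggered average of a white-noise stimulus, and the gain curve as the empirical spike probability given the filtered stimulus. The array names `stim` and `spikes` and all parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

def sta_and_gain(stim, spikes, filt_len=50, n_bins=20):
    """Reverse-correlation estimate of a linear/nonlinear model.

    stim   : 1-D white-noise stimulus, one value per time bin
    spikes : binary spike train of the same length
    Returns the spike-triggered average (linear filter), the bin
    centers of the filtered stimulus, and the empirical gain curve
    P(spike | filtered stimulus).
    """
    # Linear filter: mean stimulus window preceding each spike.
    spike_times = np.where(spikes[filt_len:] > 0)[0] + filt_len
    sta = np.mean([stim[t - filt_len:t] for t in spike_times], axis=0)
    sta /= np.linalg.norm(sta)                 # unit-norm filter

    # Project the whole stimulus onto the filter; proj[i] covers
    # stim[i : i + filt_len] and is paired with the spike bin that
    # immediately follows that window.
    proj = np.convolve(stim, sta[::-1], mode="valid")[:-1]
    spk = spikes[filt_len:]

    # Empirical nonlinearity: fraction of bins containing a spike,
    # as a function of the binned filter output.
    edges = np.linspace(proj.min(), proj.max(), n_bins + 1)
    all_counts, _ = np.histogram(proj, bins=edges)
    spk_counts, _ = np.histogram(proj[spk > 0], bins=edges)
    gain = np.divide(spk_counts, all_counts,
                     out=np.zeros(n_bins), where=all_counts > 0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return sta, centers, gain
```

Repeating this estimate for inputs of different variance is the kind of sampling procedure the abstract refers to: the recovered filter and gain curve change with the input statistics even when the underlying neuron is fixed.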




Read also

Recordings from area V4 of monkeys have revealed that when the focus of attention is on a visual stimulus within the receptive field of a cortical neuron, two distinct changes can occur: the firing rate of the neuron can change, and there can be an increase in the coherence between spikes and the local field potential in the gamma-frequency range (30-50 Hz). The hypothesis explored here is that these observed effects of attention could be a consequence of changes in the synchrony of local interneuron networks. We performed computer simulations of a Hodgkin-Huxley type neuron driven by a constant depolarizing current, I, representing visual stimulation, and a modulatory inhibitory input representing the effects of attention via local interneuron networks. We observed that the neuron's firing rate and the coherence of its output spike train with the synaptic inputs were modulated by the degree of synchrony of the inhibitory inputs. The model suggests that the observed changes in the firing rate and coherence of neurons in the visual cortex could be controlled by top-down inputs that regulate the coherence in the activity of a local inhibitory network discharging at gamma frequencies.
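The key manipulation in this model, varying the synchrony of the inhibitory input, can be sketched as follows: a shared gamma-frequency clock drives a population of interneurons, and a jitter parameter sets how tightly their spikes cluster around the clock ticks. This is a hedged illustration of the input-generation step only, with hypothetical parameter values, not the authors' code.

```python
import numpy as np

def inhibitory_population(n_cells=50, rate_hz=40.0, duration_s=1.0,
                          jitter_ms=2.0, rng=None):
    """Inhibitory spike trains with controllable synchrony.

    A common 40 Hz (gamma) clock drives all interneurons; each cell's
    spikes are jittered around the clock ticks.  Small jitter_ms gives
    highly synchronous inhibition; large jitter approaches asynchrony.
    Returns a list of spike-time arrays (seconds), one per cell.
    """
    rng = np.random.default_rng() if rng is None else rng
    ticks = np.arange(0.0, duration_s, 1.0 / rate_hz)  # shared gamma clock
    trains = []
    for _ in range(n_cells):
        jitter = rng.normal(0.0, jitter_ms * 1e-3, size=ticks.size)
        t = np.sort(ticks + jitter)
        trains.append(t[(t >= 0.0) & (t < duration_s)])
    return trains

# Synchronous vs. near-asynchronous inhibition onto the model neuron:
sync_inputs = inhibitory_population(jitter_ms=1.0)
async_inputs = inhibitory_population(jitter_ms=10.0)
```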
The ongoing exponential rise in recording capacity calls for new approaches for analysing and interpreting neural data. Effective dimensionality has emerged as an important property of neural activity across populations of neurons, yet different studies rely on different definitions and interpretations of this quantity. Here we focus on intrinsic and embedding dimensionality, and discuss how they might reveal computational principles from data. Reviewing recent works, we propose that intrinsic dimensionality reflects information about the latent variables encoded in collective activity, while embedding dimensionality reveals the manner in which this information is processed. We conclude by highlighting the role of network models as an ideal substrate for testing specific hypotheses about the computational principles reflected in intrinsic and embedding dimensionality.
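One common way to put a number on the embedding dimensionality of population activity is the participation ratio of the eigenvalues of the activity covariance matrix; the review discusses several definitions, so treat this as one illustrative choice rather than the authors' definition.

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of population activity.

    activity : array of shape (n_timepoints, n_neurons)
    Returns PR = (sum_i lambda_i)^2 / sum_i lambda_i^2 over the
    covariance eigenvalues: close to n_neurons when variance is spread
    evenly across dimensions, close to 1 when one mode dominates.
    """
    centered = activity - activity.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eig = np.linalg.eigvalsh(cov)
    eig = np.clip(eig, 0.0, None)   # guard against tiny negative values
    return eig.sum() ** 2 / np.sum(eig ** 2)
```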
Conventionally, information is represented by spike rates in the neural system. Here, we consider the ability of temporally modulated activities in neuronal networks to carry information in addition to spike rates. These temporal modulations, commonly known as population spikes, are due to the presence of synaptic depression in a neuronal network model. We discuss its relevance to an experiment on transparent motion in macaque monkeys by Treue et al. in 2000. They found that if the moving directions of the objects are too close, the firing rate profile is very similar to that evoked by a single direction. When the difference in the moving directions is large enough, the neuronal system responds in such a way that the network enhances the resolution of the objects' moving directions. In this paper, we propose that this behavior can be reproduced by neural networks with dynamical synapses when there are multiple external inputs. We demonstrate how resolution enhancement can be achieved, and discuss the conditions under which temporally modulated activities are able to enhance information processing performance in general.
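Synaptic depression of the kind this model relies on is commonly written in the Tsodyks-Markram mean-field form, where a resource variable x is consumed by presynaptic activity and recovers with a time constant tau_rec. The sketch below is a generic version of that mechanism, not the paper's exact equations, and all parameter values are illustrative.

```python
import numpy as np

def depressing_synapse(pre_rate, dt=1e-3, tau_rec=0.8, use_frac=0.5):
    """Tsodyks-Markram-style synaptic depression (deterministic rate form).

    pre_rate : 1-D array of presynaptic firing rates (Hz), one per step
    tau_rec  : recovery time constant of synaptic resources (s)
    use_frac : fraction of available resources used per unit activity
    Returns the effective synaptic drive use_frac * x * rate per step;
    after a burst, x is depleted and transmission is transiently
    suppressed -- the substrate of population spikes.
    """
    x = 1.0                           # available synaptic resources
    drive = np.empty_like(pre_rate)
    for i, r in enumerate(pre_rate):
        drive[i] = use_frac * x * r
        # Resources are consumed by activity and recover toward 1.
        x += dt * ((1.0 - x) / tau_rec - use_frac * x * r)
    return drive
```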
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model is able to capture the single-cell response properties as well as the correlations in neural spiking due to shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single-cell responses and in particular outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population.
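From the abstract, the SDME model plausibly takes the standard pairwise maximum entropy form with stimulus-dependent fields, P(sigma | s) proportional to exp(h(s) . sigma + 0.5 sigma . J . sigma); the sketch below evaluates that unnormalized log-probability. The functional form and all names are inferred from the abstract, not taken from the paper.

```python
import numpy as np

def sdme_log_prob(sigma, stim_proj, h0, J):
    """Unnormalized log-probability of a pairwise maximum entropy
    model with stimulus-dependent fields (hypothetical SDME form).

    sigma     : binary codeword over N cells (spike = 1, silence = 0)
    stim_proj : each cell's linear filter applied to the current
                stimulus, shape (N,) -- the single-cell LN drive
    h0        : per-cell biases, shape (N,)
    J         : symmetric pairwise couplings, zero diagonal, (N, N)
    """
    h = h0 + stim_proj                # fields shift with the stimulus
    return sigma @ h + 0.5 * sigma @ J @ sigma
```

Setting J to zero recovers an uncoupled bank of single-cell linear-nonlinear models, which is the baseline the SDME model is reported to outperform.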
We investigate the synchronization features of a network of spiking neurons under a distance-dependent coupling following a power-law model. The interplay between topology and coupling strength leads to the existence of different spatiotemporal patterns, corresponding to either non-synchronized or phase-synchronized states. Particularly interesting is what we call synchronization malleability, in which the system exhibits significantly different degrees of phase synchronization for the same parameters as a consequence of a different ordering of neural inputs. We analyze the functional connectivity of the network by calculating the mutual information between neuronal spike trains, allowing us to characterize the structures of synchronization in the network. We show that these structures depend on the ordering of the inputs in the parameter regions where the network presents synchronization malleability, and we suggest that this is due to a complex interplay between coupling, connection architecture, and individual neural inputs.
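Two ingredients of such a study can be sketched generically: a power-law, distance-dependent coupling matrix (here on a ring, with hypothetical strength eps and exponent alpha) and the Kuramoto order parameter, a standard measure of phase synchronization. Neither is claimed to match the authors' exact setup.

```python
import numpy as np

def powerlaw_coupling(n, eps=0.1, alpha=1.5):
    """Coupling matrix for n neurons on a ring, with strength eps
    decaying as (ring distance)^(-alpha); no self-coupling."""
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(dist, n - dist)          # periodic (ring) distance
    W = eps / np.maximum(dist, 1).astype(float) ** alpha
    np.fill_diagonal(W, 0.0)
    return W

def kuramoto_order(phases):
    """Degree of phase synchronization |<exp(i*theta)>|, in [0, 1]."""
    return np.abs(np.mean(np.exp(1j * phases), axis=-1))
```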