
Separating intrinsic interactions from extrinsic correlations in a network of sensory neurons

Added by: Ulisse Ferrari
Publication date: 2018
Fields: Biology, Physics
Language: English





Correlations in sensory neural networks have both extrinsic and intrinsic origins. Extrinsic or stimulus correlations arise from shared inputs to the network, and thus depend strongly on the stimulus ensemble. Intrinsic or noise correlations reflect biophysical mechanisms of interactions between neurons, which are expected to be robust to changes of the stimulus ensemble. Despite the importance of this distinction for understanding how sensory networks encode information collectively, no method exists to reliably separate intrinsic interactions from extrinsic correlations in neural activity data, limiting our ability to build predictive models of the network response. In this paper we introduce a general strategy to infer population models of interacting neurons that collectively encode stimulus information. The key to disentangling intrinsic from extrinsic correlations is to infer the couplings between neurons separately from the encoding model, and to combine the two using corrections calculated in a mean-field approximation. We demonstrate the effectiveness of this approach on retinal recordings. The same coupling network is inferred from responses to radically different stimulus ensembles, showing that these couplings indeed reflect stimulus-independent interactions between neurons. The inferred model accurately predicts the collective response of retinal ganglion cell populations as a function of the stimulus.
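
To make the two-step logic concrete, the sketch below shows one generic way such an inference could be organized in Python: couplings are estimated from noise correlations (fluctuations around the stimulus-locked response), and the stimulus-dependent fields are then corrected with a naive mean-field self-consistency so that the coupled model reproduces the measured firing rates. This is a minimal illustration under simplifying assumptions (binned, binarized responses to a repeated stimulus; naive mean-field inversion; an arbitrary regularizer); the function names and formulas are ours, not the paper's actual inference procedure.

    # Hypothetical sketch: (1) estimate couplings J from noise correlations via a
    # naive mean-field inversion, (2) adjust the stimulus-dependent fields h_i(t)
    # so that the coupled model's mean-field rates match the measured rates.
    import numpy as np

    def infer_couplings_mean_field(spikes):
        """spikes: (repeats, time, neurons) binary array for a repeated stimulus.
        Returns a symmetric coupling matrix J with zero diagonal."""
        r, t, n = spikes.shape
        # Noise covariance: fluctuations around the PSTH, so the shared
        # stimulus drive is subtracted out before measuring correlations.
        psth = spikes.mean(axis=0, keepdims=True)            # (1, time, neurons)
        noise = (spikes - psth).reshape(r * t, n)
        c = np.cov(noise, rowvar=False) + 1e-6 * np.eye(n)   # small regularizer
        j = -np.linalg.inv(c)                                 # naive mean-field inversion
        np.fill_diagonal(j, 0.0)
        return j

    def correct_fields_mean_field(rates, j):
        """Given target firing probabilities (time, neurons) and couplings J,
        return fields h(t) such that m_i = sigmoid(h_i + sum_j J_ij m_j)."""
        m = np.clip(rates, 1e-3, 1 - 1e-3)
        logit = np.log(m / (1 - m))
        return logit - m @ j.T    # subtract the mean-field input from neighbours

    # Usage with synthetic data (shapes only; real data would be binned spikes):
    spikes = (np.random.rand(30, 200, 12) < 0.1).astype(float)
    J = infer_couplings_mean_field(spikes)
    h = correct_fields_mean_field(spikes.mean(axis=0), J)
    print(J.shape, h.shape)   # (12, 12) (200, 12)

Because the couplings are fitted to noise correlations only, the same J can in principle be reused across stimulus ensembles, which is the property the paper tests on retinal data.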




Related research

Experimental and numerical results suggest that the brain can be viewed as a system acting close to a critical point, as confirmed by scale-free distributions of relevant quantities in a variety of different systems and models. Less attention has been paid to the temporal correlation functions of brain activity under different healthy and pathological conditions. Here we perform this analysis by means of a model with short- and long-term plasticity which implements the novel feature, found experimentally, of different recovery rates for excitatory and inhibitory neurons. We highlight the important role played by inhibitory neurons in the supercritical state: we detect an unexpected oscillatory behaviour of the correlation decay, whose frequency depends on the fraction of inhibitory neurons and their connectivity degree. This behaviour can be rationalized by the observation that bursts of activity become more frequent and smaller in amplitude as inhibition becomes more relevant.
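
As a purely illustrative aside, an oscillatory correlation decay of the kind described above can be read off from the autocorrelation function of a population-activity trace. The snippet below computes a normalized autocorrelation and applies it to a synthetic AR(2) process, a textbook example of a stationary signal with damped oscillatory correlations; none of the parameters come from the paper's model.

    import numpy as np

    def autocorrelation(activity, max_lag):
        """Normalized autocorrelation C(tau) of a 1-D activity trace."""
        x = activity - activity.mean()
        var = np.dot(x, x) / len(x)
        return np.array([np.dot(x[:len(x) - lag], x[lag:]) / (len(x) * var)
                         for lag in range(max_lag)])

    # Synthetic stand-in for a population-activity trace: an AR(2) process with
    # complex poles, whose autocorrelation decays as a damped oscillation.
    # Coefficients are arbitrary but stable (pole modulus ~0.95), not fitted.
    rng = np.random.default_rng(0)
    x = np.zeros(5000)
    for i in range(2, len(x)):
        x[i] = 1.8 * x[i - 1] - 0.9 * x[i - 2] + rng.standard_normal()
    c = autocorrelation(x, max_lag=400)
    print(c[0], c[10], c[20])   # sign alternation at half-period lags reveals the oscillation
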
The brain is characterized by a strong heterogeneity of inhibitory neurons. We report that spiking neural networks display a resonance to the heterogeneity of inhibitory neurons, with optimal input/output responsiveness occurring for levels of heterogeneity similar to those found experimentally in the cerebral cortex. A heterogeneous mean-field model predicts such optimal responsiveness. Moreover, we show that new dynamical regimes emerge from heterogeneity that were not present in the equivalent homogeneous system, such as sparsely synchronous collective oscillations.
The cerebrospinal fluid (CSF) constitutes an interface through which chemical cues can reach and modulate the activity of neurons located at the epithelial boundary within the entire nervous system. Here, we investigate the role and functional connectivity of a class of GABAergic sensory neurons contacting the CSF in the vertebrate spinal cord and referred to as CSF-cNs. The remote activation of CSF-cNs was shown to trigger delayed slow locomotion in the zebrafish larva, suggesting that these cells modulate components of locomotor central pattern generators (CPGs). Combining anatomy, electrophysiology, and optogenetics in vivo, we show that CSF-cNs form active GABAergic synapses onto V0-v glutamatergic interneurons, an essential component of locomotor CPGs. We confirmed that activating CSF-cNs at rest induced delayed slow locomotion in the fictive preparation. In contrast, the activation of CSF-cNs promptly inhibited ongoing slow locomotion. Moreover, selective activation of rostral CSF-cNs during ongoing activity disrupted rostrocaudal propagation of descending excitation along the spinal cord, indicating that CSF-cNs primarily act at the premotor level. Altogether, our results demonstrate how a spinal GABAergic sensory neuron can tune the excitability of locomotor CPGs in a state-dependent manner by projecting onto essential components of the excitatory premotor pool.
In this study, we analyzed the activity of monkey V1 neurons responding to grating stimuli of different orientations using inference methods for a time-dependent Ising model. The method provides optimal estimation of time-dependent neural interactions with credible intervals according to the sequential Bayes estimation algorithm. Furthermore, it allows us to trace dynamics of macroscopic network properties such as entropy, sparseness, and fluctuation. Here we report that, in all examined stimulus conditions, pairwise interactions contribute to increasing sparseness and fluctuation. We then demonstrate that the orientation of the grating stimulus is in part encoded in the pairwise interactions of the neural populations. These results demonstrate the utility of the state-space Ising model in assessing contributions of neural interactions during stimulus processing.
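
For readers unfamiliar with the pairwise Ising description used above, the sketch below enumerates the distribution exactly for a small population and reads off its entropy together with a simple sparseness proxy. The field and coupling values are arbitrary placeholders, and the authors' state-space (sequential Bayes) estimator for time-dependent interactions is not reproduced here.

    # Illustrative sketch, not the authors' estimator: for a small population the
    # pairwise Ising distribution can be enumerated exactly and macroscopic
    # quantities such as entropy and sparseness computed from it directly.
    import itertools
    import numpy as np

    def ising_distribution(h, j):
        """Exact P(x) over binary patterns x in {0,1}^n under
        P(x) proportional to exp(h.x + x.J.x / 2), J symmetric, zero diagonal."""
        n = len(h)
        patterns = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
        energies = patterns @ h + 0.5 * np.einsum('ki,ij,kj->k', patterns, j, patterns)
        p = np.exp(energies - energies.max())
        return patterns, p / p.sum()

    def entropy_and_sparseness(patterns, p):
        entropy = -np.sum(p * np.log2(p + 1e-300))       # in bits
        mean_active = np.sum(p * patterns.sum(axis=1))   # expected spike count
        # Sparseness proxy: mean fraction of simultaneously active neurons
        # (the paper's exact definition may differ).
        return entropy, mean_active / patterns.shape[1]

    n = 8
    h = -2.0 * np.ones(n)                      # placeholder fields
    J = 0.3 * (np.ones((n, n)) - np.eye(n))    # placeholder couplings
    patterns, p = ising_distribution(h, J)
    print(entropy_and_sparseness(patterns, p))
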
Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons, pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such K-pairwise models, which are systematic extensions of the previously used pairwise Ising models, provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction.
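
The K-pairwise form mentioned above amounts to adding a potential on the total spike count K to the pairwise Ising energy. The fragment below writes down that unnormalized log-weight for a single binary pattern; the parameter values are placeholders rather than fitted ones, and fitting V, h, and J to data (the hard part) is not shown.

    # Sketch of the K-pairwise energy: pairwise Ising terms plus a potential V(K)
    # on the total spike count K = sum_i sigma_i, which shapes the distribution
    # of population synchrony. Parameter values are placeholders.
    import numpy as np

    def k_pairwise_log_weight(sigma, h, j, v):
        """Unnormalized log-probability of a binary pattern sigma:
        h.sigma + sigma.J.sigma/2 + V(K)."""
        k = int(sigma.sum())
        return float(sigma @ h + 0.5 * sigma @ j @ sigma + v[k])

    n = 10
    h = -1.5 * np.ones(n)
    J = 0.1 * (np.ones((n, n)) - np.eye(n))
    V = -0.05 * np.arange(n + 1) ** 2        # penalizes large synchronous events
    sigma = (np.random.rand(n) < 0.2).astype(float)
    print(k_pairwise_log_weight(sigma, h, J, V))
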