
Searching for collective behavior in a network of real neurons

Added by Gasper Tkacik
Publication date: 2013
Fields: Biology, Physics
Language: English





Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons, pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. We show that such K-pairwise models, which systematically extend the previously used pairwise Ising models, provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing for error correction.
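As a rough illustration of the K-pairwise construction described in the abstract, the sketch below writes down the model's energy for binary spike words: single-neuron fields h_i, pairwise couplings J_ij, and a global potential V(K) on the number of simultaneously active neurons. The parameter values, the toy population size, and the exact-enumeration normalization are illustrative assumptions; fitting the parameters to data is a separate and much harder step.

```python
import numpy as np

# Minimal sketch of a K-pairwise maximum entropy model for N binary neurons.
# The parameters h (fields), J (pairwise couplings) and V (potential on the
# population spike count K) are assumed to have been fitted elsewhere, e.g. by
# matching model statistics to data; here they are just illustrative arrays.

def kpairwise_energy(sigma, h, J, V):
    """Energy of a binary spike word sigma in {0,1}^N."""
    K = int(sigma.sum())                      # number of simultaneously active neurons
    pairwise = 0.5 * sigma @ J @ sigma        # sum over i<j of J_ij sigma_i sigma_j (J symmetric, zero diagonal)
    return -(h @ sigma) - pairwise - V[K]

def log_prob(sigma, h, J, V, logZ):
    """log P(sigma) = -E(sigma) - log Z; logZ must be estimated separately
    (exact enumeration for small N, Monte Carlo otherwise)."""
    return -kpairwise_energy(sigma, h, J, V) - logZ

# Toy example with N = 10 neurons (small enough to enumerate all 2^N words).
rng = np.random.default_rng(0)
N = 10
h = rng.normal(-2.0, 0.5, N)                  # biases toward silence
J = rng.normal(0.0, 0.1, (N, N)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
V = np.zeros(N + 1)                           # global potential on synchrony K = 0..N

words = ((np.arange(2**N)[:, None] >> np.arange(N)) & 1).astype(float)
energies = np.array([kpairwise_energy(w, h, J, V) for w in words])
logZ = np.log(np.exp(-energies).sum())
print("P(all silent) =", np.exp(log_prob(np.zeros(N), h, J, V, logZ)))
```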



Related research

Network neuroscience has shed some light on the functional and structural modifications of the brain associated with the phenomenology of schizophrenia. In particular, resting-state functional networks have helped our understanding of the illness by highlighting global and local alterations in cerebral organization. We investigated the robustness of the brain's functional architecture in forty-four medicated schizophrenic patients and forty healthy comparators through an advanced network analysis of resting-state functional magnetic resonance imaging data. The networks in patients showed more resistance to disconnection than those in healthy controls, with an evident discrepancy between the two groups in the node degree distribution computed along a percolation process. Despite a substantial similarity of the basal functional organization between the two groups, the expected hierarchy of healthy brains' modular organization breaks down in schizophrenia, showing a peculiar arrangement of the functional connections characterized by several topologically equivalent backbones.
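The percolation analysis mentioned above can be illustrated with a generic sketch: threshold a functional correlation matrix at progressively higher values (equivalently, remove the weakest edges) and track the size of the giant connected component. The toy correlation matrix and the threshold grid below are placeholders, not the study's actual data or pipeline.

```python
import numpy as np
import networkx as nx

# Generic edge-percolation sketch on a weighted functional connectivity graph:
# keep only edges above a correlation threshold and record the fraction of
# nodes in the largest connected component as the threshold rises.

def percolation_curve(corr, thresholds):
    """corr: (n, n) symmetric matrix of functional correlations.
    Returns the giant-component fraction at each threshold."""
    n = corr.shape[0]
    giant = []
    for t in thresholds:
        A = (corr >= t)
        np.fill_diagonal(A, False)            # ignore self-correlations
        G = nx.from_numpy_array(A.astype(int))
        giant.append(max(len(c) for c in nx.connected_components(G)) / n)
    return np.array(giant)

# Toy correlation matrix standing in for resting-state fMRI connectivity.
rng = np.random.default_rng(1)
X = rng.normal(size=(90, 200))                # 90 regions, 200 time points
corr = np.corrcoef(X)
thresholds = np.linspace(0.0, 0.5, 26)
print(percolation_curve(corr, thresholds))
```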
Advances in optical neuroimaging techniques now allow neural activity to be recorded with cellular resolution in awake and behaving animals. Brain motion in these recordings poses a unique challenge: the location of individual neurons must be tracked in 3D over time to accurately extract single-neuron activity traces. Recordings from small invertebrates like C. elegans are especially challenging because they undergo very large brain motion and deformation during animal movement. Here we present an automated computer vision pipeline to reliably track populations of neurons with single-neuron resolution in the brain of a freely moving C. elegans undergoing large motion and deformation. 3D volumetric fluorescent images of the animal's brain are straightened, aligned and registered, and the locations of neurons in the images are found via segmentation. Each neuron is then assigned an identity using a new time-independent machine-learning approach we call Neuron Registration Vector Encoding. In this approach, non-rigid point-set registration is used to match each segmented neuron in each volume with a set of reference volumes taken from throughout the recording. The way each neuron matches with the references defines a feature vector, which is clustered to assign an identity to each neuron in each volume. Finally, thin-plate spline interpolation is used to correct errors in segmentation and check the consistency of assigned identities. The Neuron Registration Vector Encoding approach proposed here is uniquely well suited for tracking neurons in brains undergoing large deformations. When applied to whole-brain calcium imaging recordings in freely moving C. elegans, this analysis pipeline located 150 neurons for the duration of an 8-minute recording and consistently found more neurons more quickly than manual or semi-automated approaches.
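The feature-vector idea behind Neuron Registration Vector Encoding can be sketched as follows: match each segmented neuron against a set of reference volumes and cluster the resulting match patterns into identities. The sketch below substitutes simple nearest-neighbor matching for the non-rigid point-set registration used in the paper, so it is only a schematic of the encoding-and-clustering step.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.cluster.vq import kmeans2

# Highly simplified sketch: the pattern of matches of one neuron against many
# reference volumes defines a feature vector, which is clustered to assign an
# identity. Nearest-neighbor matching stands in for non-rigid registration.

def match_features(points, references):
    """points: (m, 3) neuron centroids in one volume.
    references: list of (r_k, 3) centroid arrays from reference volumes.
    Returns an (m, len(references)) matrix of matched reference indices."""
    feats = np.zeros((len(points), len(references)))
    for k, ref in enumerate(references):
        _, idx = cKDTree(ref).query(points)   # index of the closest reference neuron
        feats[:, k] = idx
    return feats

def assign_identities(points, references, n_neurons):
    """Cluster the registration feature vectors into n_neurons identities."""
    feats = match_features(points, references)
    _, labels = kmeans2(feats, n_neurons, minit='++', seed=0)
    return labels

# Toy usage: 50 neurons jittered across 5 reference volumes and one test volume.
rng = np.random.default_rng(2)
base = rng.uniform(0, 100, (50, 3))
references = [base + rng.normal(0, 1.0, base.shape) for _ in range(5)]
test = base + rng.normal(0, 1.0, base.shape)
print(assign_identities(test, references, n_neurons=50)[:10])
```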
Local anaxonic neurons with graded potential release are important ingredients of nervous systems, present in the olfactory bulb of mammals, in the human visual system, as well as in arthropods and nematodes. We develop a neuronal network model including both axonic and anaxonic neurons and monitor the activity as a function of the following parameters: the decay length of the graded potential in local neurons, the fraction of local neurons, the largest eigenvalue of the adjacency matrix, and the range of connections of the local neurons. Tuning the fraction of local neurons, we derive the phase diagram, which includes two transition lines: a critical line separating subcritical and supercritical regions, characterized by power-law distributions of avalanche sizes and durations, and a bifurcation line. We find that the overall behavior of the system is controlled by a parameter tuning the relevance of local-neuron transmission with respect to the axonal one. The statistical properties of spontaneous activity are affected by local neurons at large fractions and when graded potential transmission dominates the axonal one. In this case the scaling properties of spontaneous activity exhibit continuously varying exponents, rather than those of the mean-field branching model universality class.
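The avalanche-size statistics used to characterize criticality in this kind of model can be illustrated with a plain branching process, which near a branching ratio of 1 produces the familiar power-law-like size distribution. The sketch below is not the authors' model with graded-potential local neurons; it only shows how avalanche sizes are collected and binned.

```python
import numpy as np

# Minimal branching-process illustration of avalanche-size statistics: each
# active unit spawns Poisson(m) descendants, and the avalanche size is the
# total number of activations before the process dies out.

def avalanche_size(m, rng, max_size=10**5):
    """Total number of activations in one avalanche of a branching process."""
    active, size = 1, 1
    while active > 0 and size < max_size:
        active = rng.poisson(m * active)      # offspring of the whole active generation
        size += active
    return size

rng = np.random.default_rng(3)
sizes = np.array([avalanche_size(1.0, rng) for _ in range(20000)])

# Log-binned histogram of avalanche sizes; near criticality P(s) ~ s^(-3/2).
bins = np.logspace(0, np.log10(sizes.max()), 30)
hist, edges = np.histogram(sizes, bins=bins, density=True)
for lo, p in zip(edges[:-1], hist):
    if p > 0:
        print(f"s ~ {lo:8.0f}   P(s) ~ {p:.2e}")
```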
Correlations in sensory neural networks have both extrinsic and intrinsic origins. Extrinsic or stimulus correlations arise from shared inputs to the network, and thus depend strongly on the stimulus ensemble. Intrinsic or noise correlations reflect biophysical mechanisms of interaction between neurons, which are expected to be robust to changes of the stimulus ensemble. Despite the importance of this distinction for understanding how sensory networks encode information collectively, no method exists to reliably separate intrinsic interactions from extrinsic correlations in neural activity data, limiting our ability to build predictive models of the network response. In this paper we introduce a general strategy to infer population models of interacting neurons that collectively encode stimulus information. The key to disentangling intrinsic from extrinsic correlations is to infer the couplings between neurons separately from the encoding model, and to combine the two using corrections calculated in a mean-field approximation. We demonstrate the effectiveness of this approach on retinal recordings. The same coupling network is inferred from responses to radically different stimulus ensembles, showing that these couplings indeed reflect stimulus-independent interactions between neurons. The inferred model accurately predicts the collective response of retinal ganglion cell populations as a function of the stimulus.
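A minimal version of the coupling-inference step can be sketched with the naive mean-field (inverse Ising) relation, in which the coupling matrix is approximated by the negative inverse of the activity covariance. The paper's actual procedure, which separates intrinsic couplings from stimulus-driven correlations and combines them with mean-field corrections, is more elaborate; the code below only illustrates the generic inversion.

```python
import numpy as np

# Naive mean-field coupling inference from binarized spike data: in this
# approximation the pairwise couplings are roughly the off-diagonal part of
# the negative inverse covariance matrix of the activity.

def mean_field_couplings(spikes):
    """spikes: (T, N) binary array of spike counts per time bin.
    Returns an (N, N) estimate of the pairwise couplings J."""
    C = np.cov(spikes, rowvar=False)          # covariance of neural activity
    C += 1e-6 * np.eye(C.shape[0])            # regularize before inversion
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)                  # self-couplings are not defined
    return J

# Toy usage with synthetic correlated spike trains.
rng = np.random.default_rng(4)
T, N = 5000, 20
common = rng.normal(size=(T, 1))              # shared drive inducing correlations
spikes = (rng.normal(size=(T, N)) + 0.5 * common > 1.0).astype(int)
J_hat = mean_field_couplings(spikes)
print(J_hat[:3, :3])
```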
H. Sebastian Seung, 2018
A companion paper introduces a nonlinear network with Hebbian excitatory (E) neurons that are reciprocally coupled with anti-Hebbian inhibitory (I) neurons and also receive Hebbian feedforward excitation from sensory (S) afferents. The present paper derives the network from two normative principles that are mathematically equivalent but conceptually different. The first principle formulates unsupervised learning as a constrained optimization problem: maximization of S-E correlations subject to a copositivity constraint on E-E correlations. A combination of Legendre and Lagrangian duality yields a zero-sum continuous game between excitatory and inhibitory connections that is solved by the neural network. The second principle defines a zero-sum game between E and I cells. E cells want to maximize S-E correlations and minimize E-I correlations, while I cells want to maximize I-E correlations and minimize power. The conflict between I and E objectives effectively forces the E cells to decorrelate from each other, although only incompletely. Legendre duality yields the neural network.
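A schematic online version of the circuit described above might look like the following: excitatory cells receive Hebbian feedforward weights from sensory afferents, inhibitory cells are driven by the excitatory cells, and the same E-I weights act subtractively on the excitatory side. The learning rates, dimensions, rectification, and fixed-point iteration are illustrative assumptions, not the paper's derivation from the duality principles.

```python
import numpy as np

# Schematic Hebbian / anti-Hebbian E-I circuit: W maps sensory (S) inputs to
# excitatory (E) cells, M maps E to inhibitory (I) cells, and -M.T inhibits E.
# Both weight matrices are updated with simple Hebbian rules with decay; this
# is only an illustration of the circuit motif described in the abstract.

rng = np.random.default_rng(5)
n_s, n_e, n_i = 16, 8, 4                      # sensory, excitatory, inhibitory cells
W = 0.1 * rng.normal(size=(n_e, n_s))         # S -> E feedforward weights (Hebbian)
M = 0.1 * rng.normal(size=(n_i, n_e))         # E -> I weights; -M.T inhibits E
eta_w, eta_m = 1e-3, 1e-3

def run_circuit(s, n_steps=50):
    """Relax E and I activities to a fixed point for one sensory input s."""
    e = np.zeros(n_e)
    i = np.zeros(n_i)
    for _ in range(n_steps):
        e = np.maximum(W @ s - M.T @ i, 0.0)  # E: feedforward drive minus inhibition
        i = M @ e                             # I: driven by E cells
    return e, i

for _ in range(1000):                         # online learning over random inputs
    s = rng.normal(size=n_s)
    e, i = run_circuit(s)
    W += eta_w * (np.outer(e, s) - W)         # Hebbian S-E update with decay
    M += eta_m * (np.outer(i, e) - M)         # Hebbian I-E update with decay
print("S-E correlation proxy:", np.mean(e * (W @ s)))
```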
