
Models of Innate Neural Attractors and Their Applications for Neural Information Processing

Publication date: 2015
Field: Biology
Language: English





In this work we reveal and explore a new class of attractor neural networks based on inborn connections provided by model molecular markers: the molecular-marker-based attractor neural networks (MMBANN). We explore the conditions for the existence of attractor states, the critical relations between their parameters, and the spectrum of single-neuron models that can implement MMBANN. We also describe functional models (a perceptron and a self-organizing map, SOM) that gain significant advantages from using MMBANN: the MMBANN-based perceptron improves specificity by orders of magnitude in error probability, and the MMBANN SOM acquires a real neurophysiological interpretation, with the number of possible grandma cells increasing 1000-fold. Each set of markers carries a metric, which is used to form connections between neurons bearing the markers. The resulting neural networks have sets of attractor states that can serve as finite grids for representing variables in computations. These grids may have dimension $d = 0, 1, 2, \ldots$. We work with static and dynamic attractor neural networks of dimensions $d = 0$ and $d = 1$. We also argue that the number of dimensions that can be represented by attractors of activity in neural networks with $N = 10^4$ elements does not exceed 8.
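The abstract does not spell out an implementation, but the core idea of marker-based innate connectivity can be sketched in a few lines. The snippet below is a minimal illustration assuming, hypothetically, binary marker vectors compared with a Hamming metric and simple threshold dynamics; the paper's actual marker sets, metric, and neuron models may differ.

```python
# A toy MMBANN-style network: innate connections are determined by a
# metric on molecular markers, and simple threshold dynamics relax the
# activity toward an attractor state. All parameter choices are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

N = 100          # number of neurons (toy size; the paper discusses N = 10^4)
M = 8            # length of each marker vector
THRESHOLD = 2    # maximal marker distance that still creates a connection

markers = rng.integers(0, 2, size=(N, M))   # binary molecular markers

# Pairwise Hamming distances between marker vectors.
dist = (markers[:, None, :] != markers[None, :, :]).sum(axis=2)

# Innate (inborn) connectivity: excitatory link if markers are close.
W = (dist <= THRESHOLD).astype(float)
np.fill_diagonal(W, 0.0)

def step(x, beta=2.0):
    """One update of threshold dynamics; beta is a firing threshold."""
    return (W @ x - beta) > 0

# Relax a random initial activity pattern until it stops changing.
x = rng.integers(0, 2, size=N).astype(bool)
for _ in range(50):
    x_new = step(x.astype(float))
    if np.array_equal(x_new, x):
        break
    x = x_new
print("attractor reached, active neurons:", int(x.sum()))
```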

Related research

The theory of communication through coherence (CTC) proposes that brain oscillations reflect changes in the excitability of neurons; therefore, successful communication between two oscillating neural populations depends not only on the strength of the emitted signal but also on the relative phase between the populations. More precisely, effective communication occurs when the emitting and receiving populations are properly phase-locked, so that the inputs sent by the emitting population arrive at the phases of maximal excitability of the receiving population. To study this setting, we consider a population rate model consisting of excitatory and inhibitory cells modelling the receiving population, and we perturb it with a time-dependent periodic function modelling the input from the emitting population. We consider the stroboscopic map for this system and numerically compute the fixed and periodic points of this map and their bifurcations as the amplitude and frequency of the perturbation are varied. From the bifurcation diagram, we identify the phase-locked states as well as different regions of bistability. We explore the dynamics carefully, emphasizing its implications for CTC theory. In particular, we study how the input gain depends on the timing between the input and the inhibitory action of the receiving population. Our results show that an optimal phase locking for CTC emerges naturally, and they provide a mechanism by which the receiving population can implement selective communication. Moreover, the presence of bistable regions suggests a mechanism by which different communication regimes between brain areas can be established without changing the structure of the network.
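To make the setting concrete, here is a minimal sketch of the stroboscopic-map construction for a periodically forced excitatory-inhibitory rate model. The Wilson-Cowan-style equations, coupling weights, and parameter values below are illustrative assumptions, not those used in the paper.

```python
# Stroboscopic map of a forced E-I rate model: integrate the system over
# one forcing period T; fixed points of this map are 1:1 phase-locked
# states. Parameters are placeholders.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

A, FREQ = 0.5, 40.0          # amplitude and frequency (Hz) of the input
T = 1.0 / FREQ               # forcing period: the stroboscopic sampling time
TAU_E, TAU_I = 0.010, 0.008  # time constants (s)

def f(x):
    return 1.0 / (1.0 + np.exp(-x))   # firing-rate nonlinearity

def rhs(t, y):
    E, I = y
    p = A * np.cos(2 * np.pi * FREQ * t)   # input from the emitting population
    dE = (-E + f(10 * E - 12 * I + 2 + p)) / TAU_E
    dI = (-I + f(12 * E - 3 * I)) / TAU_I
    return [dE, dI]

def strobe(y0):
    """Stroboscopic map: flow the system for one forcing period T."""
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

# A fixed point of the map corresponds to a phase-locked state.
fp = fsolve(lambda y: strobe(y) - y, x0=[0.3, 0.3])
print("phase-locked state (E, I):", fp)
```

Tracking how such fixed points appear, disappear, and coexist as A and FREQ vary is what produces the bifurcation diagram and the bistable regions described in the abstract.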
Periodic neural activity that is locked neither to the stimulus nor to motor responses is usually ignored. Here, we present new tools for modeling and quantifying information transmission based on periodic neural activity that occurs with quasi-random phase relative to the stimulus. We propose a model that reproduces characteristic features of oscillatory spike trains, such as histograms of inter-spike intervals and the phase locking of spikes to an oscillatory influence. The proposed model is based on an inhomogeneous Gamma process governed by a density function that is the product of the usual stimulus-dependent rate and a quasi-periodic function. Further, we present an analysis method that generalizes the direct method (Rieke et al., 1999; Brenner et al., 2000) to assess the information content of such data. We demonstrate these tools on recordings from relay cells in the lateral geniculate nucleus of the cat.
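A minimal sketch of how such an inhomogeneous Gamma process could be simulated by time rescaling is given below; the concrete rate profile, oscillation frequency, Gamma order, and the random phase offset are placeholder assumptions, not values from the paper.

```python
# Simulate an inhomogeneous Gamma spike process whose density is the
# product of a stimulus-dependent rate and a quasi-periodic modulation,
# via time rescaling: draw unit-mean Gamma inter-spike intervals in
# operational time, then map them back through the cumulative density.
import numpy as np

rng = np.random.default_rng(1)

K_ORDER = 4                 # Gamma order (regularity of the spike train)
F_OSC = 30.0                # oscillation frequency (Hz)
DT, T_MAX = 1e-4, 2.0       # time grid (s)

t = np.arange(0.0, T_MAX, DT)
stim_rate = 20.0 + 15.0 * np.exp(-((t - 1.0) ** 2) / 0.05)  # stimulus-driven rate
phase = rng.uniform(0, 2 * np.pi)                           # quasi-random phase
osc = 1.0 + 0.6 * np.cos(2 * np.pi * F_OSC * t + phase)
density = stim_rate * osc                                   # product model

# Operational time: cumulative integral of the density.
Lambda = np.cumsum(density) * DT

# Draw unit-mean Gamma ISIs in operational time, map back to real time.
spikes, u = [], 0.0
while True:
    u += rng.gamma(shape=K_ORDER, scale=1.0 / K_ORDER)
    if u > Lambda[-1]:
        break
    spikes.append(np.interp(u, Lambda, t))
print(f"{len(spikes)} spikes in {T_MAX} s")
```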
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as single-spike information to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality-reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and with data from primate visual cortex.
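The stated equivalence between single-spike information and normalized Poisson log-likelihood suggests fitting the stimulus filter by likelihood maximization. The sketch below does this for a single filter with a fixed exponential nonlinearity (i.e., a Poisson GLM); MID proper uses a non-parametric nonlinearity, so this is only an illustrative simplification.

```python
# Fit a single LNP filter by maximizing the Poisson log-likelihood on
# simulated data. Stimulus statistics, filter shape, and bin size are
# placeholder choices for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

D, N_SAMP, DT = 20, 5000, 0.01
true_k = np.sin(np.linspace(0, np.pi, D))   # ground-truth filter
X = rng.normal(size=(N_SAMP, D))            # white-noise stimulus
rate = np.exp(X @ true_k - 2.0)             # LNP rate with exp nonlinearity
y = rng.poisson(rate * DT)                  # spike counts per time bin

def neg_log_lik(k):
    """Negative Poisson log-likelihood (up to a constant in k)."""
    proj = X @ k - 2.0
    return -(y @ proj - DT * np.exp(proj).sum())

res = minimize(neg_log_lik, x0=np.zeros(D), method="L-BFGS-B")
k_hat = res.x
cos_sim = (k_hat @ true_k) / (np.linalg.norm(k_hat) * np.linalg.norm(true_k))
print("filter recovery (cosine similarity):", round(cos_sim, 3))
```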
Rhythmic electrical activity in the brain emerges from regular, non-trivial interactions between millions of neurons. Neurons are intricate cellular structures that transmit excitatory (or inhibitory) signals to other neurons, often non-locally, depending on the graded input from other neurons. Modelling this often requires extensive detail, which poses several problems for modelling large systems beyond clusters of neurons, such as the whole brain. Building large populations of neurons from interconnected single-neuron models leads to an accumulation of complexity that renders realistic simulation mathematically intractable and obscures the primary interactions required for emergent electrodynamic patterns in brain rhythms. A statistical-mechanics approach with non-local interactions may circumvent these issues while maintaining mathematical tractability. Neural field theory is a population-level approach to modelling large sections of neural tissue based on these principles. Herein we review the key stages in the history and development of neural field theory and contemporary uses of this branch of mathematical neuroscience. We elucidate a mathematical framework in which neural field models can be derived, highlighting the many significant assumptions inherited in the current literature, so that their validity may be considered in light of further developments in both mathematical and experimental neuroscience.
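For concreteness, a canonical single-population neural field model of the type this review covers is the Amari-style equation below; this is a representative textbook form, not an equation quoted from the review itself:

```latex
\tau \frac{\partial u(x,t)}{\partial t}
  = -u(x,t)
  + \int_{\Omega} w(x - y)\, f\!\big(u(y,t)\big)\, \mathrm{d}y
  + I(x,t),
```

where $u(x,t)$ is the mean activity of the tissue at position $x$, $w$ is a non-local connectivity kernel, $f$ is a firing-rate function, and $I(x,t)$ is external input. The integral term is what replaces the exponentially many pairwise single-neuron interactions with a tractable population-level description.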
In recent years, artificial neural networks have achieved state-of-the-art performance in predicting the responses of neurons in the visual cortex to natural stimuli. However, they require a time-consuming parameter optimization process to accurately model the tuning function of newly observed neurons, which prohibits many applications, including real-time, closed-loop experiments. We overcome this limitation by formulating the problem as $K$-shot prediction, directly inferring a neuron's tuning function from a small set of stimulus-response pairs using a Neural Process. This required us to develop a Factorized Neural Process, which embeds the observed set into a latent space partitioned into the receptive field location and the tuning function properties. We show on simulated responses that the predictions and reconstructed receptive fields from the Factorized Neural Process approach the ground truth as the number of trials increases. Critically, the latent representation that summarizes the tuning function of a neuron is inferred in a quick, single forward pass through the network. Finally, we validate this approach on real neural data from visual cortex and find that its predictive accuracy is comparable to -- and for small $K$ even greater than -- that of optimization-based approaches, while being substantially faster. We believe this novel deep-learning systems-identification framework will facilitate better real-time integration of artificial neural network modeling into neuroscience experiments.
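The $K$-shot idea can be sketched with a small set-encoder model. The PyTorch snippet below is a toy stand-in for the paper's Factorized Neural Process: layer sizes, the mean-pooling encoder, and the way the latent is split are all illustrative assumptions.

```python
# Toy factorized neural process: a permutation-invariant encoder maps K
# (stimulus, response) pairs to a latent code split into a "receptive
# field location" part and a "tuning properties" part; a decoder then
# predicts responses to new stimuli in a single forward pass.
import torch
import torch.nn as nn

class ToyFactorizedNP(nn.Module):
    def __init__(self, stim_dim=16, latent_loc=2, latent_tune=8):
        super().__init__()
        self.latent_loc = latent_loc
        # Per-pair encoder; mean-pooling over the K pairs gives set invariance.
        self.encoder = nn.Sequential(
            nn.Linear(stim_dim + 1, 64), nn.ReLU(),
            nn.Linear(64, latent_loc + latent_tune),
        )
        self.decoder = nn.Sequential(
            nn.Linear(stim_dim + latent_loc + latent_tune, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus(),   # non-negative firing rate
        )

    def forward(self, ctx_stim, ctx_resp, query_stim):
        # ctx_stim: (K, stim_dim), ctx_resp: (K, 1), query_stim: (Q, stim_dim)
        pairs = torch.cat([ctx_stim, ctx_resp], dim=-1)
        z = self.encoder(pairs).mean(dim=0)            # single forward pass
        z_loc, z_tune = z[:self.latent_loc], z[self.latent_loc:]
        z_rep = torch.cat([z_loc, z_tune]).expand(query_stim.shape[0], -1)
        return self.decoder(torch.cat([query_stim, z_rep], dim=-1))

# Usage: infer a neuron's tuning from K = 20 stimulus-response pairs.
model = ToyFactorizedNP()
K, Q = 20, 5
pred = model(torch.randn(K, 16), torch.rand(K, 1), torch.randn(Q, 16))
print(pred.shape)  # torch.Size([5, 1])
```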
