
The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction

Added by Ross Williamson
Publication date: 2013
Field: Biology
Language: English





Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as single-spike information to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
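In outline, the equivalence can be sketched as follows (the notation below is generic and may differ from the paper's). Let $z_t = K^\top x_t$ be the projected stimulus in time bin $t$, $y_t$ the spike count in a bin of width $\Delta$, $N$ the number of bins, and $n_{sp} = \sum_t y_t$ the total spike count. The empirical single-spike information maximized by MID is

\[ \hat I_{ss}(K) = \frac{1}{n_{sp}} \sum_t y_t \, \log \frac{\hat p(z_t \mid \mathrm{spike})}{\hat p(z_t)} , \]

where $\hat p(z)$ and $\hat p(z \mid \mathrm{spike})$ are the raw and spike-triggered densities of the projected stimulus. If the LNP nonlinearity is estimated nonparametrically as $\hat f(z) = \frac{n_{sp}}{N \Delta} \, \frac{\hat p(z \mid \mathrm{spike})}{\hat p(z)}$, the Poisson log-likelihood $\sum_t \big[ y_t \log(\Delta \hat f(z_t)) - \Delta \hat f(z_t) \big]$ (dropping the $\log y_t!$ terms) becomes, per spike,

\[ \frac{1}{n_{sp}} \log \mathcal{L}(K) \approx \hat I_{ss}(K) + \log \frac{n_{sp}}{N} - 1 , \]

so the two objectives differ only by terms that do not depend on the filters $K$, and maximizing one maximizes the other.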



Related research

The ongoing exponential rise in recording capacity calls for new approaches for analysing and interpreting neural data. Effective dimensionality has emerged as an important property of neural activity across populations of neurons, yet different studies rely on different definitions and interpretations of this quantity. Here we focus on intrinsic and embedding dimensionality, and discuss how they might reveal computational principles from data. Reviewing recent works, we propose that the intrinsic dimensionality reflects information about the latent variables encoded in collective activity, while embedding dimensionality reveals the manner in which this information is processed. We conclude by highlighting the role of network models as an ideal substrate for testing specific hypotheses about the computational principles reflected in intrinsic and embedding dimensionality.
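As a concrete, hedged illustration of embedding dimensionality, one common operationalization in this literature is the participation ratio of the PCA eigenvalue spectrum; the Python sketch below (my own, not code from the paper) computes it for activity generated from a two-dimensional latent variable.

import numpy as np

def participation_ratio(X):
    # Effective (linear embedding) dimensionality of activity X (samples x neurons),
    # defined from the covariance eigenvalues as (sum lambda)^2 / sum(lambda^2).
    lam = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0.0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

# Toy example: 2 latent variables (intrinsic dimensionality 2) embedded
# nonlinearly in the activity of 50 "neurons".
rng = np.random.default_rng(0)
z = rng.standard_normal((1000, 2))
A = rng.standard_normal((2, 50))
X = np.tanh(z @ A) + 0.05 * rng.standard_normal((1000, 50))
print(participation_ratio(X))   # typically somewhat larger than 2: the nonlinear
                                # embedding spreads variance beyond the latent subspace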
Hideaki Shimazaki (2015)
We show that dynamical gain modulation of neurons' stimulus responses is described as an information-theoretic cycle that generates entropy associated with the stimulus-related activity from entropy produced by the modulation. To articulate this theory, we describe stimulus-evoked activity of a neural population based on the maximum entropy principle with constraints on two types of overlapping activities, one that is controlled by stimulus conditions and the other, termed internal activity, that is regulated internally in an organism. We demonstrate that modulation of the internal activity realises gain control of the stimulus response and controls stimulus information. A cycle of neural dynamics is then introduced to model information processing by the neurons, during which the stimulus information is dynamically enhanced by the internal gain-modulation mechanism. Based on the conservation law for entropy production, we demonstrate that the cycle generates entropy ascribed to the stimulus-related activity using entropy supplied by the internal mechanism, analogously to a heat engine that produces work from heat. We provide an efficient cycle that achieves the highest entropic efficiency in retaining the stimulus information. The theory allows us to quantify the efficiency of the internal computation and its theoretical limit.
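The claim that an internally regulated field controls stimulus information can be illustrated with a deliberately minimal toy (my own construction, not the paper's maximum entropy model): a single binary neuron whose spike probability is a logistic function of a stimulus-controlled field plus an internal field; shifting the internal field changes how much information the spike carries about a binary stimulus.

import numpy as np

def mutual_info_bits(p_spike_given_s, p_s=0.5):
    # I(S; spike) in bits for a binary stimulus S and a binary spike,
    # given the conditional spike probabilities P(spike | s) for s = 0, 1.
    H = lambda p: 0.0 if p <= 0.0 or p >= 1.0 else -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    p_marg = p_s * p_spike_given_s[1] + (1 - p_s) * p_spike_given_s[0]
    return H(p_marg) - p_s * H(p_spike_given_s[1]) - (1 - p_s) * H(p_spike_given_s[0])

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
theta_stim = {0: -1.0, 1: +1.0}            # stimulus-controlled natural parameters
for theta_int in (-3.0, 0.0, 3.0):         # internally regulated (gain-modulating) field
    p = {s: sigmoid(th + theta_int) for s, th in theta_stim.items()}
    print(f"theta_int = {theta_int:+.1f}   I(S; spike) = {mutual_info_bits(p):.3f} bits")
# The internal field moves the neuron's operating point and thereby controls how much
# stimulus information the response retains.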
In this work we reveal and explore a new class of attractor neural networks based on inborn connections provided by model molecular markers: the molecular-marker-based attractor neural networks (MMBANN). We explore conditions for the existence of attractor states, critical relations between their parameters, and the spectrum of single-neuron models that can implement MMBANN. In addition, we describe functional models (perceptron and SOM) that obtain significant advantages when using MMBANN. In particular, the MMBANN-based perceptron gains specificity by orders of magnitude in error probability, the MMBANN SOM acquires real neurophysiological meaning, and the number of possible grandma cells increases 1000-fold with MMBANN. Each set of markers has a metric, which is used to make connections between neurons containing the markers. The resulting neural networks have sets of attractor states, which can serve as finite grids for the representation of variables in computations. These grids may have dimensions d = 0, 1, 2, ... We work with static and dynamic attractor neural networks of dimensions d = 0 and d = 1. We also argue that the number of dimensions that can be represented by attractors of the activity of neural networks with N = 10^4 elements does not exceed 8.
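The sentence about the marker metric suggests a simple, hedged illustration (my own construction, not the authors' MMBANN): assign each neuron a one-dimensional marker, connect neurons with similar markers excitatorily and dissimilar ones weakly inhibitorily, and run standard asynchronous Hopfield updates, which are guaranteed to reach a fixed point for symmetric weights.

import numpy as np

rng = np.random.default_rng(1)
N = 80
markers = np.linspace(0.0, 1.0, N)                   # hypothetical 1-D molecular markers
dist = np.abs(markers[:, None] - markers[None, :])   # metric on the marker set
W = np.where(dist < 0.05, 1.0, -0.2)                 # excite similar markers, weakly inhibit the rest
np.fill_diagonal(W, 0.0)                             # symmetric weights, zero diagonal

x = rng.choice([-1.0, 1.0], size=N)                  # random initial activity pattern
changed = True
while changed:                                       # asynchronous threshold updates descend an
    changed = False                                  # energy function, so they reach an attractor
    for i in rng.permutation(N):
        new = 1.0 if W[i] @ x >= 0.0 else -1.0
        if new != x[i]:
            x[i], changed = new, True

print(np.nonzero(x > 0)[0])   # attractors here tend to be contiguous runs of neurons with
                              # similar markers, a crude d = 1 grid in the spirit of the abstract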
Tuffley and Steel (1997) proved that Maximum Likelihood and Maximum Parsimony methods in phylogenetics are equivalent for sequences of characters under a simple symmetric model of substitution with no common mechanism. This result has been widely cited ever since. We show that small changes to the model assumptions suffice to make the two methods inequivalent. In particular, we analyze the case of bounded substitution probabilities as well as the molecular clock assumption. We show that in these cases, even under no common mechanism, Maximum Parsimony and Maximum Likelihood might make conflicting choices. We also show that if there is an upper bound on the substitution probabilities which is "sufficiently small", every Maximum Likelihood tree is also a Maximum Parsimony tree (but not vice versa).
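For readers less familiar with the parsimony side of this equivalence, the sketch below is a textbook Fitch small-parsimony routine (not code from the paper): it counts the minimum number of substitutions a single character requires on a fixed binary tree, the quantity that Maximum Parsimony minimizes over trees and that the no-common-mechanism result ties to Maximum Likelihood.

def fitch(tree, leaf_states):
    # tree: nested 2-tuples of leaf names; leaf_states: dict mapping leaf name -> character state.
    # Returns the Fitch parsimony score (minimum substitution count) for this character.
    def post(node):
        if isinstance(node, str):                       # leaf: singleton state set, zero cost
            return {leaf_states[node]}, 0
        (sL, cL), (sR, cR) = post(node[0]), post(node[1])
        inter = sL & sR
        return (inter, cL + cR) if inter else (sL | sR, cL + cR + 1)
    return post(tree)[1]

# One character observed at four taxa, scored on two candidate topologies:
states = {"A": "x", "B": "x", "C": "y", "D": "y"}
print(fitch((("A", "B"), ("C", "D")), states))   # 1 substitution -> preferred by parsimony
print(fitch((("A", "C"), ("B", "D")), states))   # 2 substitutions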
Kevin M. Carter, Raviv Raich (2008)
This report concerns the problem of dimensionality reduction through information-geometric methods on statistical manifolds. While there has been considerable recent work on dimensionality reduction for learning tasks such as classification, clustering, and visualization, these methods have focused primarily on Riemannian manifolds in Euclidean space. Although such representations are sufficient for many applications, many high-dimensional signals have no straightforward and meaningful Euclidean representation. In these cases, signals may be more appropriately represented as realizations of distributions lying on a statistical manifold, or manifold of probability density functions (PDFs). We present a framework for dimensionality reduction that uses information geometry both for statistical manifold reconstruction and for dimensionality reduction in the data domain.
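A generic sketch in this spirit (my own simplification, not the authors' specific construction): represent each signal by an estimated PDF, measure pairwise distances between the PDFs with an information-geometric metric (here the Hellinger distance), and embed the resulting distance matrix with classical multidimensional scaling.

import numpy as np

def hellinger(p, q):
    # Hellinger distance between two discrete PDFs (a metric on the probability simplex).
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def classical_mds(D, dim=2):
    # Embed a pairwise distance matrix D into 'dim' Euclidean coordinates (Torgerson MDS).
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

# Toy data: 20 "signals", each summarized by a histogram PDF; dimensionality
# reduction then operates on the manifold of PDFs rather than on raw samples.
rng = np.random.default_rng(0)
signals = [rng.normal(loc=mu, scale=1.0, size=500) for mu in np.linspace(-2, 2, 20)]
bins = np.linspace(-6.0, 6.0, 41)
pdfs = [np.histogram(s, bins=bins)[0] / len(s) for s in signals]
D = np.array([[hellinger(p, q) for q in pdfs] for p in pdfs])
print(classical_mds(D, dim=2).shape)   # (20, 2): low-dimensional coordinates for the 20 PDFs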