
Large-scale directed network inference with multivariate transfer entropy and hierarchical statistical testing

Added by Leonardo Novelli
Publication date: 2019
Language: English





Network inference algorithms are valuable tools for the study of large-scale neuroimaging datasets. Multivariate transfer entropy is well suited for this task, being a model-free measure that captures nonlinear and lagged dependencies between time series to infer a minimal directed network model. Greedy algorithms have been proposed to efficiently deal with high-dimensional datasets while avoiding redundant inferences and capturing synergistic effects. However, multiple statistical comparisons may inflate the false positive rate and are computationally demanding, which limited the size of previous validation studies. The algorithm we present---as implemented in the IDTxl open-source software---addresses these challenges by employing hierarchical statistical tests to control the family-wise error rate and to allow for efficient parallelisation. The method was validated on synthetic datasets involving random networks of increasing size (up to 100 nodes), for both linear and nonlinear dynamics. The performance increased with the length of the time series, reaching consistently high precision, recall, and specificity (>98% on average) for 10000 time samples. Varying the statistical significance threshold showed a more favourable precision-recall trade-off for longer time series. Both the network size and the sample size are one order of magnitude larger than previously demonstrated, showing feasibility for typical EEG and MEG experiments.
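Since the abstract names the IDTxl open-source toolbox as the reference implementation, a minimal Python sketch of how such a multivariate transfer entropy analysis is typically set up may help orient readers. This is a sketch only: the settings keys, estimator name, and result accessor follow my reading of the IDTxl documentation and may differ across versions, and the toy data and lag choices are arbitrary.

```python
import numpy as np
from idtxl.data import Data
from idtxl.multivariate_te import MultivariateTE

# Toy dataset: 5 processes (nodes) x 1000 samples, ordered as (processes, samples).
raw = np.random.randn(5, 1000)
data = Data(raw, dim_order='ps')

settings = {
    'cmi_estimator': 'JidtGaussianCMI',  # Gaussian estimator; a Kraskov estimator
                                         # would target nonlinear dependencies
    'min_lag_sources': 1,
    'max_lag_sources': 5,                # candidate source lags to test
}

# Greedy, per-target inference of multivariate TE sources with hierarchical
# permutation tests applied at each step.
results = MultivariateTE().analyse_network(settings=settings, data=data)

# Directed adjacency matrix of the inferred network after statistical testing.
adjacency = results.get_adjacency_matrix(weights='binary', fdr=False)
```

Each target node is analysed independently, which is what allows the per-target greedy selection and its statistical tests to be parallelised across nodes, as the abstract describes.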

Related research

Functional and effective networks inferred from time series are at the core of network neuroscience. Interpreting their properties requires inferred network models to reflect key underlying structural features; however, even a few spurious links can distort network measures, posing a challenge for functional connectomes. We study the extent to which micro- and macroscopic properties of underlying networks can be inferred by algorithms based on mutual information and bivariate/multivariate transfer entropy. The validation is performed on two macaque connectomes and on synthetic networks with various topologies (regular lattice, small-world, random, scale-free, modular). Simulations are based on a neural mass model and on autoregressive dynamics (employing Gaussian estimators for direct comparison to functional connectivity and Granger causality). We find that multivariate transfer entropy captures key properties of all networks for longer time series. Bivariate methods can achieve higher recall (sensitivity) for shorter time series but are unable to control false positives (lower specificity) as more data becomes available. This leads to overestimated clustering, small-world, and rich-club coefficients, underestimated shortest path lengths and hub centrality, and fatter tails in the degree distribution. Caution should therefore be used when interpreting network properties of functional connectomes obtained via correlation or pairwise statistical dependence measures, rather than more holistic (yet data-hungry) multivariate models.
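As a purely hypothetical illustration of how few spurious links it takes to bias graph-level measures (this is not the authors' code and involves no neural dynamics): take a small-world ground-truth graph and add a handful of "transitive" false positives of the kind pairwise measures tend to produce when cascades or common drivers are present; clustering is inflated and shortest paths shrink. The generator, graph size, and number of spurious links below are arbitrary choices.

```python
import random
import networkx as nx

# Ground-truth small-world graph (toy choice: 100 nodes, mean degree 6).
true_net = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=0)

# "Inferred" network: the true links plus 30 transitive false positives,
# i.e. apparent u-w links added where u-v and v-w already exist.
inferred = true_net.copy()
random.seed(0)
added = 0
while added < 30:
    v = random.choice(list(true_net.nodes))
    u, w = random.sample(list(true_net.neighbors(v)), 2)
    if not inferred.has_edge(u, w):
        inferred.add_edge(u, w)  # closes the open triangle u-v-w
        added += 1

for name, g in (("ground truth", true_net), ("with spurious links", inferred)):
    print(f"{name}: clustering = {nx.average_clustering(g):.3f}, "
          f"mean shortest path = {nx.average_shortest_path_length(g):.2f}")
```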
We study the optimality conditions of information transfer in systems with memory in the low signal-to-noise ratio regime of vanishing input amplitude. We find that the optimal mutual information is attained by a maximum-variance signal time course, with a correlation structure determined by the Fisher information matrix. We provide an illustration of the method on a simple biologically inspired model of an electro-sensory neuron. Our general results also apply to the study of information transfer in single neurons subject to weak stimulation, with implications for the problem of coding efficiency in biological systems.
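For orientation, this statement appears to rest on the standard weak-signal expansion of mutual information; the display below is a sketch of that general result in my own notation ($J$ for the Fisher information matrix of the response at zero input, $\Sigma_s$ for the signal covariance), not the paper's exact derivation.

```latex
% Weak-signal (low-SNR) expansion of the mutual information between a small
% input s and the response r; J is the Fisher information matrix at s = 0.
I(\mathbf{s};\mathbf{r}) \;\approx\; \tfrac{1}{2}\,\operatorname{tr}\!\left(J\,\Sigma_s\right),
\qquad \Sigma_s \;=\; \mathbb{E}\!\left[\mathbf{s}\,\mathbf{s}^{\top}\right].

% Under a total-power constraint tr(Sigma_s) <= P, this is maximised by placing
% all signal variance along the leading eigenvector of J:
\Sigma_s^{\star} \;=\; P\,\mathbf{u}_{\max}\mathbf{u}_{\max}^{\top},
\qquad J\,\mathbf{u}_{\max} \;=\; \lambda_{\max}\,\mathbf{u}_{\max}.
```

In this reading, the optimum is a maximum-variance signal whose correlation structure is dictated by the Fisher information matrix, which is how the abstract phrases it.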
Lester Ingber (2012)
Recent calculations further support the premise that large-scale synchronous firings of neurons may affect molecular processes. The context is scalp electroencephalography (EEG) during short-term memory (STM) tasks. The mechanism considered is the coupling $\mathbf{\Pi} = \mathbf{p} + q\mathbf{A}$ (SI units), where $\mathbf{p}$ is the momentum of free $\mathrm{Ca}^{2+}$ waves, $q$ the charge of $\mathrm{Ca}^{2+}$ in units of the electron charge, and $\mathbf{A}$ the magnetic vector potential of the current $\mathbf{I}$ from neuronal minicolumnar firings considered as wires, giving rise to EEG. Data has been processed using multiple graphs to identify sections of data to which spline-Laplacian transformations are applied, in order to fit the statistical mechanics of neocortical interactions (SMNI) model to EEG data, which is sensitive to synaptic interactions subject to modification by $\mathrm{Ca}^{2+}$ waves.
Heavy-tailed distributions naturally occur in many real-life problems. Unfortunately, it is typically not possible to compute inference in closed form in graphical models that involve such heavy-tailed distributions. In this work, we propose a novel, simple linear graphical model for independent latent random variables, called the linear characteristic model (LCM), defined in the characteristic-function domain. Using stable distributions, a heavy-tailed family of distributions that generalises the Cauchy, Lévy, and Gaussian distributions, we show for the first time how to compute both exact and approximate inference in such a linear multivariate graphical model. LCMs are not limited to stable distributions; in fact, LCMs are defined for any random variables (discrete, continuous, or a mixture of both). We provide a realistic problem from the field of computer networks to demonstrate the applicability of our construction. Another potential application is iterative decoding of linear channels with non-Gaussian noise.
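As a rough, hypothetical illustration (not the paper's LCM implementation) of why the characteristic-function domain is convenient for linear models of independent stable variables: sums of independent α-stable variables are again α-stable, and the characteristic function of the sum is simply the product of the individual ones. The parameter values and the use of SciPy's levy_stable are my choices for this sketch.

```python
import numpy as np
from scipy.stats import levy_stable

# Two independent symmetric alpha-stable variables with the same alpha.
alpha, c1, c2 = 1.5, 1.0, 2.0
rng = np.random.default_rng(0)
x1 = levy_stable.rvs(alpha, 0.0, scale=c1, size=100_000, random_state=rng)
x2 = levy_stable.rvs(alpha, 0.0, scale=c2, size=100_000, random_state=rng)
s = x1 + x2

# Closure under addition: the sum is alpha-stable with scale (c1^a + c2^a)^(1/a),
# so its characteristic function is exp(-(c_sum * |t|)**alpha).
c_sum = (c1**alpha + c2**alpha) ** (1.0 / alpha)
for t in (0.2, 0.5, 1.0):
    empirical = np.mean(np.exp(1j * t * s)).real  # empirical characteristic function
    analytic = np.exp(-((c_sum * abs(t)) ** alpha))
    print(f"t = {t}: empirical CF = {empirical:.3f}, analytic CF = {analytic:.3f}")
```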
We describe a large-scale functional brain model that includes detailed, conductance-based, compartmental models of individual neurons. We call the model BioSpaun, to indicate the increased biological plausibility of these neurons, and because it is a direct extension of the Spaun model \cite{Eliasmith2012b}. We demonstrate that including these detailed compartmental models does not adversely affect performance across a variety of tasks, including digit recognition, serial working memory, and counting. We then explore the effects of applying TTX, a sodium channel blocking drug, to the model. We characterize the behavioral changes that result from this molecular-level intervention. We believe this is the first demonstration of a large-scale brain model that clearly links low-level molecular interventions and high-level behavior.
