
Entropic Causal Inference for Neurological Applications

Posted by Jeremie Fish
Publication date: 2020
Research field: Biology
Paper language: English

The ultimate goal of cognitive neuroscience is to understand the mechanistic neural processes underlying the functional organization of the brain. Key to this study is understanding the structural and functional connectivity between anatomical regions. In this paper we follow previous work in developing a simple dynamical model of the brain, simulating its various regions as Kuramoto oscillators whose coupling structure is described by a complex network. However, rather than generating synthetic networks, we couple our synthetic model through a real network of anatomical brain regions reconstructed from diffusion tensor imaging (DTI) data. Using an information-theoretic approach that defines direct information flow in terms of causation entropy (CSE), we show that we can recover the true structural network more accurately than either of the popular correlation or LASSO regression techniques. We demonstrate the effectiveness of our method on data simulated on the realistic DTI network, as well as on randomly generated small-world and Erdős-Rényi (ER) networks.
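To make the setup concrete, below is a minimal sketch (not the authors' code) of Kuramoto phase oscillators coupled through a fixed adjacency matrix, integrated with a simple Euler step. The network size, coupling strength, and the random placeholder adjacency standing in for the DTI network are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20                                         # number of brain regions (assumed)
A = (rng.random((n, n)) < 0.2).astype(float)   # placeholder random adjacency, not the DTI network
np.fill_diagonal(A, 0.0)

omega = rng.normal(0.0, 1.0, size=n)           # natural frequencies of the oscillators
K = 0.5                                        # coupling strength (assumed)
dt, steps = 0.01, 5000

theta = rng.uniform(0.0, 2 * np.pi, size=n)    # initial phases
trajectory = np.empty((steps, n))
for t in range(steps):
    # Kuramoto dynamics: dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + K * coupling)
    trajectory[t] = theta
```

The `trajectory` array is the kind of multivariate time series to which network-reconstruction methods such as causation entropy, correlation, or LASSO regression would then be applied.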


Read also

We consider the problem of identifying the causal direction between two discrete random variables using observational data. Unlike previous work, we keep the most general functional model but make an assumption on the unobserved exogenous variable: inspired by Occam's razor, we assume that the exogenous variable is simple in the true causal direction. We quantify simplicity using Rényi entropy. Our main result is that, under natural assumptions, if the exogenous variable has low $H_0$ entropy (cardinality) in the true direction, it must have high $H_0$ entropy in the wrong direction. We establish several algorithmic hardness results about estimating the minimum-entropy exogenous variable. We show that the problem of finding the exogenous variable with minimum entropy is equivalent to the problem of finding the minimum joint entropy given $n$ marginal distributions, also known as the minimum entropy coupling problem. We propose an efficient greedy algorithm for the minimum entropy coupling problem that, for $n=2$, provably finds a local optimum. This gives a greedy algorithm for finding the exogenous variable with minimum $H_1$ (Shannon entropy). Our greedy entropy-based causal inference algorithm has performance similar to state-of-the-art additive noise models on real datasets. One advantage of our approach is that we make no use of the values of the random variables, only their distributions. Our method can therefore be used for causal inference on both ordinal and categorical data, unlike additive noise models.
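Here is a minimal sketch of the greedy coupling step, under one natural reading of the idea described above and not the authors' reference implementation: for $n=2$ marginals, repeatedly match the largest remaining probability masses and assign their minimum to a joint cell. The example marginals at the end are made up for demonstration.

```python
import heapq
import numpy as np

def greedy_coupling(p, q, tol=1e-12):
    """Greedily build a joint distribution with marginals p and q,
    heuristically keeping the joint (Shannon) entropy small."""
    joint = {}
    # max-heaps via negated masses
    hp = [(-m, i) for i, m in enumerate(p)]
    hq = [(-m, j) for j, m in enumerate(q)]
    heapq.heapify(hp)
    heapq.heapify(hq)
    while hp and hq:
        mp, i = heapq.heappop(hp)          # largest remaining mass in p
        mq, j = heapq.heappop(hq)          # largest remaining mass in q
        m = min(-mp, -mq)
        if m <= tol:
            break
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        # push back whatever mass is left over on either side
        if -mp - m > tol:
            heapq.heappush(hp, (mp + m, i))
        if -mq - m > tol:
            heapq.heappush(hq, (mq + m, j))
    return joint

def entropy_bits(dist):
    masses = np.array([v for v in dist.values() if v > 0])
    return float(-(masses * np.log2(masses)).sum())

# Example: couple two simple marginals and report the joint entropy.
p = [0.5, 0.3, 0.2]
q = [0.6, 0.4]
joint = greedy_coupling(p, q)
print(joint, entropy_bits(joint))
```

Each iteration permanently exhausts at least one support element, so the loop terminates after at most $|p| + |q|$ steps while exactly preserving both marginals.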
As quantum computing and networking nodes scale up, important open questions arise on the causal influence of various sub-systems on the total system performance. These questions relate to the tomographic reconstruction of the macroscopic wavefunction, optimizing the connectivity of large engineered qubit systems, the reliable broadcasting of information across quantum networks, as well as the speed-up of classical causal inference algorithms on quantum computers. A direct generalization of existing causal inference techniques to the quantum domain is not possible due to superposition and entanglement. We put forth a new theoretical framework for merging quantum information science and causal inference by exploiting entropic principles. First, we build the fundamental connection between the celebrated quantum marginal problem and entropic causal inference. Second, inspired by the definition of geometric quantum discord, we fill the gap between classical conditional probabilities and quantum conditional density matrices. These fundamental theoretical advances are exploited to develop a scalable algorithmic approach for quantum entropic causal inference. We apply our proposed framework to an experimentally relevant scenario of identifying message senders on noisy quantum links. This successful inference on a synthetic quantum dataset can lay the foundations for identifying originators of malicious activity on future multi-node quantum networks. We unify classical and quantum causal inference in a principled way, paving the way for future applications in quantum computing and networking.
In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called weight transport problem for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST, SVHN, CIFAR-10 and VOC. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.
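The weight transport problem is easy to see in code. Below is a minimal sketch of feedback alignment on a toy regression task, my illustration rather than the paper's spiking algorithm: the backward pass uses a fixed random matrix `B` in place of the transpose of the forward weights `W2`, so no synaptic weights need to be transported backwards. All layer sizes, the learning rate, and the linear teacher task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 10, 32, 2

W1 = rng.normal(0, 0.1, (n_hid, n_in))     # forward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))    # forward weights, layer 2
B  = rng.normal(0, 0.1, (n_hid, n_out))    # fixed random feedback weights

X = rng.normal(size=(200, n_in))
T = X @ rng.normal(size=(n_in, n_out))      # random linear teacher targets

lr = 0.01
for step in range(500):
    h = np.maximum(X @ W1.T, 0.0)           # forward pass (ReLU hidden layer)
    y = h @ W2.T
    e = y - T                               # output error
    # Backprop would propagate e @ W2; feedback alignment uses the fixed
    # random matrix B instead of the transpose of the forward weights.
    dh = (e @ B.T) * (h > 0)
    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * dh.T @ X / len(X)
```

The paper's contribution is a rule under which the backward weights come to approximate `W2` over training instead of staying fixed and random, recovering backprop-like learning.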
COVID-19 infections have well-described systemic manifestations, especially respiratory problems. There are currently no specific treatments or vaccines against this strain. With higher case numbers, a range of neurological symptoms are becoming apparent. The mechanisms responsible for these are not well defined, other than those related to hypoxia and microthrombi. We speculate that the sustained systemic immune activation seen with SARS-CoV-2 may also cause secondary autoimmune activation in the CNS. Patients with chronic neurological diseases may be at higher risk because of chronic secondary respiratory disease and potentially poor nutritional status. Here, we review the impact of COVID-19 on people with chronic neurological diseases and the potential mechanisms. We believe special attention to protecting people with neurodegenerative disease is warranted. We are concerned about a possible delayed pandemic in the form of an increased burden of neurodegenerative disease after acceleration of pathology by systemic COVID-19 infections.
We study the problem of discovering the simplest latent variable that can make two observed discrete variables conditionally independent. The minimum entropy required for such a latent variable is known as common entropy in information theory. We extend this notion to Rényi common entropy by minimizing the Rényi entropy of the latent variable. To efficiently compute common entropy, we propose an iterative algorithm that can be used to discover the trade-off between the entropy of the latent variable and the conditional mutual information of the observed variables. We show two applications of common entropy in causal inference: first, under the assumption that there are no low-entropy mediators, it can be used to distinguish causation from spurious correlation among almost all joint distributions on simple causal graphs with two observed variables. Second, common entropy can be used to improve constraint-based methods such as the PC or FCI algorithms in the small-sample regime, where these methods are known to struggle. We propose a modification to these constraint-based methods to assess, using common entropy, whether a separating set found by these algorithms is valid. We finally evaluate our algorithms on synthetic and real data to establish their performance.
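For reference, the quantity traded off against the latent variable's entropy above is the conditional mutual information of the observed variables given the latent. Below is a minimal sketch, my illustration rather than the paper's algorithm, computing $I(X;Y\,|\,Z)$ from a discrete joint table; the example table is constructed so that $X$ and $Y$ are conditionally independent given $Z$, making the result (numerically) zero.

```python
import numpy as np

def conditional_mutual_information(P):
    """I(X;Y|Z) in bits for a joint table P of shape (|X|, |Y|, |Z|)."""
    P = P / P.sum()
    Pz  = P.sum(axis=(0, 1))     # P(z)
    Pxz = P.sum(axis=1)          # P(x, z)
    Pyz = P.sum(axis=0)          # P(y, z)
    cmi = 0.0
    for x, y, z in np.ndindex(*P.shape):
        pxyz = P[x, y, z]
        if pxyz > 0:
            # I(X;Y|Z) = sum p(x,y,z) * log[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]
            cmi += pxyz * np.log2(pxyz * Pz[z] / (Pxz[x, z] * Pyz[y, z]))
    return cmi

# Build a table where X and Y are independent given Z.
P = np.zeros((2, 2, 2))
for z, pz in enumerate([0.5, 0.5]):
    px = [0.8, 0.2] if z == 0 else [0.3, 0.7]
    py = [0.6, 0.4] if z == 0 else [0.1, 0.9]
    for x in range(2):
        for y in range(2):
            P[x, y, z] = pz * px[x] * py[y]
print(conditional_mutual_information(P))   # ~0.0
```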