
The principles of adaptation in organisms and machines II: Thermodynamics of the Bayesian brain

Posted by Hideaki Shimazaki
Publication date: 2020
Research language: English
Author: Hideaki Shimazaki





This article reviews how organisms learn and recognize the world through the dynamics of neural networks from the perspective of Bayesian inference, and introduces a view in which such dynamics are described by laws for the entropy of neural activity, a paradigm we call the thermodynamics of the Bayesian brain. The Bayesian brain hypothesis regards the stimulus-evoked activity of neurons as the construction of a Bayesian posterior distribution under a generative model of the external world that the organism possesses. A closer look at stimulus-evoked activity in early sensory cortices reveals that feedforward connections initially mediate the stimulus response, which is later modulated by input from recurrent connections. Importantly, it is not the initial response but the delayed modulation that expresses an animal's cognitive states, such as awareness of and attention to the stimulus. Using a simple generative model made of a spiking neural population, we reproduce the stimulus-evoked dynamics with delayed feedback modulation as a process of Bayesian inference that integrates the stimulus evidence with prior knowledge after a time delay. We then introduce a thermodynamic view of this process based on the laws for the entropy of neural activity. This view shows that the process of Bayesian inference works as the recently proposed information-theoretic engine (a neural engine, an analogue of a heat engine in thermodynamics), which allows us to quantify the perceptual capacity expressed in the delayed modulation in terms of entropy.
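As a rough illustration of the two-step inference described above (not the article's spiking-population model: the firing rates, the binary stimulus, and the delayed prior below are illustrative assumptions), the following sketch computes an early posterior from Poisson spike counts under a flat prior, then re-weights it with a prior that arrives late, and tracks how the posterior entropy shrinks with the delayed modulation.

```python
# Minimal sketch: delayed Bayesian integration of a prior with Poisson
# spike-count evidence, tracking posterior entropy. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

states = np.array([0, 1])              # hypothetical binary stimulus
rates = np.array([[2.0, 8.0],          # neuron 0: rate under x=0, x=1 (Hz)
                  [3.0, 6.0],          # neuron 1
                  [1.0, 9.0]])         # neuron 2
dt = 0.05                              # 50 ms observation window
prior_delayed = np.array([0.2, 0.8])   # prior knowledge arriving with a delay

def poisson_loglik(counts, x):
    """Log-likelihood of spike counts given stimulus x (constants dropped)."""
    lam = rates[:, x] * dt
    return np.sum(counts * np.log(lam) - lam)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

true_x = 1
counts = rng.poisson(rates[:, true_x] * dt)

# Early, feedforward-like step: evidence combined with a flat prior.
loglik = np.array([poisson_loglik(counts, x) for x in states])
post_early = np.exp(loglik - loglik.max())
post_early /= post_early.sum()

# Delayed, feedback-like step: the same evidence re-weighted by the prior.
post_late = post_early * prior_delayed
post_late /= post_late.sum()

print("early posterior:", post_early, " entropy:", entropy(post_early))
print("late  posterior:", post_late,  " entropy:", entropy(post_late))
print("entropy reduced by delayed prior:",
      entropy(post_early) - entropy(post_late), "bits")
```

The entropy reduction in the second step is the kind of quantity that a thermodynamic bookkeeping of the inference process can account for; the sketch only illustrates the two-stage structure, not the engine analysis itself.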




Read also

144 - Hideaki Shimazaki 2019
How do organisms recognize their environment by acquiring knowledge about the world, and what actions do they take based on this knowledge? This article examines hypotheses about organisms' adaptation to the environment from machine learning, information-theoretic, and thermodynamic perspectives. We start by constructing a hierarchical model of the world as an internal model in the brain, and review standard machine learning methods for inferring causes by approximately learning the model under the maximum likelihood principle. This in turn provides an overview of the free energy principle for an organism, a hypothesis that explains perception and action from the principle of least surprise. Treating this statistical learning as communication between the world and the brain, learning is interpreted as a process that maximizes information about the world. We investigate how classical theories of perception, such as the infomax principle, relate to learning the hierarchical model. We then present an approach to recognition and learning based on thermodynamics, showing that adaptation by causal learning results in the second law of thermodynamics, whereas inference dynamics that fuses observation with prior knowledge forms a thermodynamic process. These results provide a unified view of the adaptation of organisms to the environment.
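For reference, the variational free energy underlying the "least surprise" account can be written as follows (standard notation from the free-energy-principle literature; the symbols are ours, not necessarily those of the article): for observations $x$, latent causes $z$, a generative model $p(x,z)$ and an approximate posterior $q(z)$,

```latex
F[q] = \mathbb{E}_{q(z)}\!\left[\ln q(z) - \ln p(x, z)\right]
     = D_{\mathrm{KL}}\!\left(q(z)\,\|\,p(z \mid x)\right) - \ln p(x)
     \;\ge\; -\ln p(x).
```

Minimizing $F$ with respect to $q$ performs approximate Bayesian inference (perception), while minimizing it through action or through the model parameters reduces the surprise $-\ln p(x)$.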
The subject-verb-object (SVO) word order prevalent in English is shared by about 42% of world languages. Another 45% of all languages follow the SOV order, 9% the VSO order, and fewer languages use the three remaining permutations. None of the many extant explanations of this phenomenon takes into account the difficulty of implementing these permutations in the brain. We propose a plausible model of sentence generation inspired by the recently proposed Assembly Calculus framework of brain function. Our model yields a natural explanation of the uneven frequencies. Estimating the parameters of this model yields predictions of the relative difficulty of dis-inhibiting one brain area from another. Our model is based on the standard syntax tree, a simple binary tree with three leaves, each corresponding to one of the three parts of a basic sentence. The leaves can be activated through lock and unlock operations, and the sequence in which the leaves are activated implements a specific word order. More generally, we also formulate and algorithmically solve the problems of implementing a permutation of the leaves of any binary tree, and of selecting the permutation that is easiest to implement on a given binary tree.
The goal of the present study is to identify autism using machine learning techniques and resting-state brain imaging data, leveraging the temporal variability of the functional connections (FC) as the only information. We estimated and compared FC variability across brain regions between typical, healthy subjects and an autistic population by analyzing brain imaging data from a worldwide multi-site database known as ABIDE (Autism Brain Imaging Data Exchange). Our analysis revealed that patients diagnosed with autism spectrum disorder (ASD) show increased FC variability in several brain regions that are associated with low FC variability in the typical brain. We then used the enhanced FC variability of brain regions as features for training machine learning models for ASD classification and achieved 65% accuracy in identifying ASD versus control subjects within the dataset. We also used node strength, estimated from the number of functional connections per node averaged over the whole scan, as features for ASD classification. The results reveal that the dynamic FC measures outperform or are comparable with the static FC measures in predicting ASD.
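As an illustrative sketch of the feature-extraction idea only (not the ABIDE pipeline: the synthetic time series, window length, and classifier below are assumptions), one can compute the variability of windowed functional connectivity per subject and feed it to a standard classifier.

```python
# Sliding-window FC variability as features for classification (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_regions, n_timepoints, win = 60, 10, 200, 40

def fc_variability(ts):
    """Std of upper-triangular FC entries across sliding windows (dynamic FC)."""
    iu = np.triu_indices(ts.shape[1], k=1)
    fcs = [np.corrcoef(ts[s:s + win].T)[iu]
           for s in range(0, ts.shape[0] - win + 1, win // 2)]
    return np.std(np.array(fcs), axis=0)

X, y = [], []
for subj in range(n_subjects):
    label = subj % 2                                  # 0 = control, 1 = synthetic "patient"
    ts = rng.normal(size=(n_timepoints, n_regions))   # background activity
    if label == 1:
        # Time-varying shared input: correlations fluctuate more across windows.
        shared = rng.normal(size=(n_timepoints, 1))
        gate = (np.arange(n_timepoints) % 80 < 40)[:, None]
        ts = ts + 1.5 * gate * shared
    X.append(fc_variability(ts))
    y.append(label)

clf = LogisticRegression(max_iter=1000)
print("5-fold CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```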
151 - Pascal Grange 2018
The wiring diagram of the mouse brain has recently been mapped at a mesoscopic scale in the Allen Mouse Brain Connectivity Atlas. Axonal projections from brain regions were traced using green fluorescent proteins. The resulting data were registered to a common three-dimensional reference space and yielded a matrix of connection strengths between 213 brain regions. Global features, such as closed loops formed by connections of similar intensity, can be inferred using tools from persistent homology. We map the wiring diagram of the mouse brain to a simplicial complex (filtered by connection strengths) and work out generators of the first homology group. Some regions, including the nucleus accumbens, are connected to the entire brain by loops, whereas no region has non-zero connection strength to all brain regions. Thousands of loops go through the isocortex, the striatum and the thalamus. On the other hand, the medulla is the only major brain compartment that contains more than 100 loops.
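A minimal sketch of the filtration idea (assumptions: a made-up, symmetrized toy connection-strength matrix, strengths converted to distances so that stronger connections enter the filtration earlier, and the `ripser` package rather than whatever software the authors used):

```python
# Toy persistent-homology sketch: loops (H1) in a complex filtered by connection
# strength. The 6x6 matrix is random, not Allen Atlas data.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(2)
n = 6
strength = rng.uniform(0.1, 1.0, size=(n, n))
strength = (strength + strength.T) / 2      # symmetrize (real projections are directed)
np.fill_diagonal(strength, 0.0)

# Strong connections should appear early in the filtration, so use a decreasing
# transform of strength as the "distance".
dist = np.zeros_like(strength)
mask = strength > 0
dist[mask] = 1.0 / strength[mask]

result = ripser(dist, distance_matrix=True, maxdim=1)
h1 = result["dgms"][1]                      # birth/death pairs of 1-cycles (loops)
print(len(h1), "loops found; persistence of each:", h1[:, 1] - h1[:, 0])
```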
Given that many fundamental questions in neuroscience are still open, it seems pertinent to explore whether the brain might use other physical modalities than the ones that have been discovered so far. In particular, it is well established that neurons can emit photons, which prompts the question of whether these biophotons could serve as signals between neurons, in addition to the well-known electro-chemical signals. For such communication to be targeted, the photons would need to travel in waveguides. Here we show, based on detailed theoretical modeling, that myelinated axons could serve as photonic waveguides, taking into account realistic optical imperfections. We propose experiments, both in vivo and in vitro, to test our hypothesis. We discuss the implications of our results, including the question of whether photons could mediate long-range quantum entanglement in the brain.