
Neural and phenotypic representation under the free-energy principle

Posted by: Maxwell J. D. Ramstead
Published: 2020
Research field: Biology
Paper language: English





The aim of this paper is to leverage the free-energy principle and its corollary process theory, active inference, to develop a generic, generalizable model of the representational capacities of living creatures; that is, a theory of phenotypic representation. Given their ubiquity, we are concerned with distributed forms of representation (e.g., population codes), whereby patterns of ensemble activity in living tissue come to represent the causes of sensory input or data. The active inference framework rests on the Markov blanket formalism, which allows us to partition systems of interest, such as biological systems, into internal states, external states, and the blanket (active and sensory) states that render internal and external states conditionally independent of each other. In this framework, the representational capacity of living creatures emerges as a consequence of their Markovian structure and nonequilibrium dynamics, which together entail a dual-aspect information geometry. This entails a modest representational capacity: internal states have an intrinsic information geometry that describes their trajectory over time in state space, as well as an extrinsic information geometry that allows internal states to encode (the parameters of) probabilistic beliefs about (fictive) external states. Building on this, we describe here how, in an automatic and emergent manner, information about stimuli can come to be encoded by groups of neurons bound by a Markov blanket; what is known as the neuronal packet hypothesis. As a concrete demonstration of this type of emergent representation, we present numerical simulations showing that self-organizing ensembles of active inference agents sharing the right kind of probabilistic generative model are able to encode recoverable information about a stimulus array.
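
For readers unfamiliar with the formalism, the two constructs at the heart of this abstract can be stated compactly. The following is a sketch in standard free-energy-principle notation, not equations quoted from the paper: blanket states b = (s, a) render internal states μ and external states η conditionally independent, and internal states parameterize a belief q_μ(η) whose variational free energy upper-bounds surprise.

% Conditional independence induced by the Markov blanket b = (s, a):
p(\eta, \mu \mid b) = p(\eta \mid b)\, p(\mu \mid b)

% Variational free energy of the belief q_\mu(\eta) encoded by internal states;
% minimizing F drives q_\mu(\eta) toward p(\eta \mid b) while bounding surprise -\ln p(b):
F(\mu, b) = \mathbb{E}_{q_\mu(\eta)}\!\left[\ln q_\mu(\eta) - \ln p(\eta, b)\right]
          = D_{\mathrm{KL}}\!\left[q_\mu(\eta) \,\|\, p(\eta \mid b)\right] - \ln p(b) \;\geq\; -\ln p(b)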




Read also

Samuel J. Gershman, 2019
The free energy principle has been proposed as a unifying account of brain function. It is closely related to, and in some cases subsumes, earlier unifying ideas such as Bayesian inference, predictive coding, and active learning. This article clarifies these connections, teasing apart distinctive and shared predictions.
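
A one-variable worked example makes the connection concrete. In the sketch below (a hypothetical linear-Gaussian model chosen for illustration, not code from the article), gradient descent on variational free energy reduces to precision-weighted prediction-error updates, i.e., predictive coding, and its fixed point is the exact Bayesian posterior mean:

mu_prior, var_prior = 0.0, 1.0   # prior belief about a hidden cause
var_obs = 0.5                    # observation noise
o = 2.0                          # observed datum

phi = mu_prior                   # current estimate of the hidden cause
lr = 0.1
for _ in range(200):
    eps_prior = (phi - mu_prior) / var_prior  # precision-weighted prior error
    eps_obs = (o - phi) / var_obs             # precision-weighted sensory error
    phi += lr * (eps_obs - eps_prior)         # descend the free-energy gradient

# For this model the fixed point is the exact Bayes posterior mean:
post_mean = (mu_prior / var_prior + o / var_obs) / (1 / var_prior + 1 / var_obs)
print(phi, post_mean)  # both print ~1.333
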
Neural population activity is theorized to reflect an underlying dynamical structure. This structure can be accurately captured using state space models with explicit dynamics, such as those based on recurrent neural networks (RNNs). However, using recurrence to explicitly model dynamics necessitates sequential processing of data, slowing real-time applications such as brain-computer interfaces. Here we introduce the Neural Data Transformer (NDT), a non-recurrent alternative. We test the NDT's ability to capture autonomous dynamical systems by applying it to synthetic datasets with known dynamics and data from monkey motor cortex during a reaching task well-modeled by RNNs. The NDT models these datasets as well as state-of-the-art recurrent models. Further, its non-recurrence enables 3.9ms inference, well within the loop time of real-time applications and more than 6 times faster than recurrent baselines on the monkey reaching dataset. These results suggest that an explicit dynamics model is not necessary to model autonomous neural population dynamics. Code: https://github.com/snel-repo/neural-data-transformers
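
To give a sense of what "non-recurrent" means in practice, here is a minimal encoder over binned spike counts in the spirit of the NDT (a sketch with illustrative sizes; the reference implementation lives at the repository linked above):

import torch
import torch.nn as nn

n_neurons, n_bins, d_model = 98, 50, 128

class TinyNDT(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(n_neurons, d_model)             # per-bin embedding
        self.pos = nn.Parameter(torch.zeros(n_bins, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.readout = nn.Linear(d_model, n_neurons)           # log firing rates

    def forward(self, spikes):                 # spikes: (batch, bins, neurons)
        h = self.embed(spikes) + self.pos
        h = self.encoder(h)                    # bins attend in parallel: no recurrence
        return self.readout(h)                 # predicted log-rates per bin and neuron

model = TinyNDT()
spikes = torch.poisson(torch.full((8, n_bins, n_neurons), 2.0))
log_rates = model(spikes)
loss = nn.PoissonNLLLoss(log_input=True)(log_rates, spikes)
loss.backward()  # train to reconstruct spike counts from their latent structure

Because every time bin is processed in one parallel pass rather than unrolled sequentially, inference latency stays flat, which is the property behind the millisecond-scale timing reported above.
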
We investigate the synchronization features of a network of spiking neurons under a distance-dependent coupling following a power-law model. The interplay between topology and coupling strength leads to the existence of different spatiotemporal patterns, corresponding to either non-synchronized or phase-synchronized states. Particularly interesting is what we call synchronization malleability, in which the system exhibits significantly different phase synchronization degrees for the same parameters as a consequence of a different ordering of neural inputs. We analyze the functional connectivity of the network by calculating the mutual information between neuronal spike trains, allowing us to characterize the structures of synchronization in the network. We show that these structures depend on the ordering of the inputs in the parameter regions where the network presents synchronization malleability, and we suggest that this is due to a complex interplay between coupling, connection architecture, and individual neural inputs.
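
The "phase synchronization degree" referred to above is commonly quantified with the Kuramoto order parameter. Below is a generic sketch with synthetic phases (not the paper's simulation code; both regimes are fabricated for illustration):

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 100, 1000

# Two synthetic regimes: incoherent phases vs. phases locked to a common rhythm.
incoherent = rng.uniform(0, 2 * np.pi, (n_neurons, n_steps))
common = rng.uniform(0, 2 * np.pi, n_steps)
locked = common + 0.3 * rng.normal(size=(n_neurons, n_steps))

def order_parameter(phases):
    # R(t) = |mean_j exp(i * theta_j(t))|, averaged over time;
    # R ~ 0 for incoherent activity, R ~ 1 for full phase locking.
    return np.abs(np.exp(1j * phases).mean(axis=0)).mean()

print(order_parameter(incoherent))  # ~0.09
print(order_parameter(locked))      # ~0.95
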
Electrical stimulation of neural systems is a key tool for understanding neural dynamics and ultimately for developing clinical treatments. Many applications of electrical stimulation affect large populations of neurons. However, computational models of large networks of spiking neurons are inherently hard to simulate and analyze. We evaluate a reduced mean-field model of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons which can be used to efficiently study the effects of electrical stimulation on large neural populations. The rich dynamical properties of this basic cortical model are described in detail and validated using large network simulations. Bifurcation diagrams reflecting the network's state reveal asynchronous up- and down-states, bistable regimes, and oscillatory regions corresponding to fast excitation-inhibition and slow excitation-adaptation feedback loops. The biophysical parameters of the AdEx neuron can be coupled to an electric field with realistic field strengths, which can then be propagated up to the population description. We show how, on the edge of bifurcation, direct electrical inputs cause network state transitions, such as turning on and off oscillations of the population rate. Oscillatory input can frequency-entrain and phase-lock endogenous oscillations. Relatively weak electric field strengths on the order of 1 V/m are able to produce these effects, indicating that field effects are strongly amplified in the network. The effects of time-varying external stimulation are well predicted by the mean-field model, further underpinning the utility of low-dimensional neural mass models.
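
For reference, the single-neuron model underlying this mean-field reduction is the standard AdEx system (textbook form, not the paper's specific parameterization; an applied electric field would enter as an additional input current):

C \frac{dV}{dt} = -g_L (V - E_L) + g_L \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) - w + I(t)

\tau_w \frac{dw}{dt} = a (V - E_L) - w

\text{at a spike } (V > V_{\text{cut}}): \quad V \leftarrow V_r, \qquad w \leftarrow w + b
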
The recent success of brain-inspired deep neural networks (DNNs) in solving complex, high-level visual tasks has led to rising expectations for their potential to match the human visual system. However, DNNs exhibit idiosyncrasies that suggest their visual representation and processing might be substantially different from human vision. One limitation of DNNs is that they are vulnerable to adversarial examples: input images to which subtle, carefully designed noise has been added to fool a machine classifier. The robustness of the human visual system against adversarial examples is potentially of great importance, as it could uncover a key mechanistic feature that machine vision has yet to incorporate. In this study, we compare the visual representations of white- and black-box adversarial examples in DNNs and humans by leveraging functional magnetic resonance imaging (fMRI). We find a small but significant difference in representation patterns for the different (i.e., white- versus black-box) types of adversarial examples for both humans and DNNs. However, unlike for DNNs, human performance on categorical judgment is not degraded by the noise, regardless of its type. These results suggest that adversarial examples may be differentially represented in the human visual system, but without affecting perceptual experience.
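
For concreteness, a white-box adversarial example of the kind described above can be generated with a single gradient step (the fast gradient sign method; a generic sketch with a stand-in classifier, not the stimulus-generation pipeline used in the study):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
x = torch.rand(1, 1, 28, 28, requires_grad=True)             # stand-in image
y = torch.tensor([3])                                        # true label

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()  # white-box: the attacker can read the model's gradients

eps = 0.03  # perturbation budget: keeps the noise visually subtle
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
# x_adv looks near-identical to x but is crafted to raise the classifier's loss.
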