
Integrated Information in the Spiking-Bursting Stochastic Model

Published by: Oleg Kanakov
Publication date: 2019
Research field: Biology
Paper language: English





This study presents a comprehensive analytic description of the empirical whole-minus-sum version of Integrated Information, compared against the decoder-based version, for the spiking-bursting discrete-time, discrete-state stochastic model, which was recently introduced to describe a specific type of dynamics in a neuron-astrocyte network. The whole-minus-sum information may change sign, and an interpretation of this transition in terms of net synergy is available in the literature. This motivates our particular interest in the sign of the whole-minus-sum information in our analytical treatment. The behaviors of the whole-minus-sum and decoder-based information measures are found to be largely similar: the two measures converge asymptotically as time-uncorrelated activity is increased, and the sign transition of the whole-minus-sum information is associated with a rapid growth in the decoder-based information. The study aims to create a theoretical basis for using the spiking-bursting model as a well-understood reference point for applying Integrated Information concepts to systems exhibiting similar bursting behavior (in particular, neuron-astrocyte networks). The model can also serve as a new discrete-state test bench for different formulations of Integrated Information.
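As a concrete reference point, the whole-minus-sum quantity discussed above can be estimated on a toy two-unit spiking-bursting process. The sketch below is a minimal illustration with assumed parameters (p_burst, p_stay, p_spike) and plug-in probability estimates; it is not the paper's analytic treatment or its exact parametrization. It compares the time-lagged mutual information of the joint state with the sum over the individual units; their difference is the whole-minus-sum integrated information, and raising p_spike is the knob corresponding to the time-uncorrelated activity mentioned above.

```python
import numpy as np

def mutual_info(a, b):
    """Plug-in estimate of I(a; b) in bits for two non-negative integer sequences."""
    pa = np.bincount(a) / len(a)
    pb = np.bincount(b) / len(b)
    pab = np.zeros((pa.size, pb.size))
    np.add.at(pab, (a, b), 1)
    pab /= len(a)
    nz = pab > 0
    return float(np.sum(pab[nz] * np.log2(pab[nz] / np.outer(pa, pb)[nz])))

def simulate(T=200_000, p_burst=0.05, p_stay=0.5, p_spike=0.3, seed=0):
    """Two binary units: a hidden Markov burst mode makes both fire together;
    outside bursts each unit spikes independently (time-uncorrelated activity)."""
    rng = np.random.default_rng(seed)
    x = np.zeros((T, 2), dtype=int)
    burst = False
    for t in range(T):
        burst = rng.random() < (p_stay if burst else p_burst)
        x[t] = 1 if burst else (rng.random(2) < p_spike).astype(int)
    return x

x = simulate()
past, pres = x[:-1], x[1:]
i_whole = mutual_info(past[:, 0] * 2 + past[:, 1], pres[:, 0] * 2 + pres[:, 1])
i_parts = sum(mutual_info(past[:, k], pres[:, k]) for k in range(2))
phi_wms = i_whole - i_parts  # may be negative; its sign change is the net-synergy transition
print(f"I(whole) = {i_whole:.4f}  sum I(parts) = {i_parts:.4f}  Phi_WMS = {phi_wms:.4f}")
```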




Read also

The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information ($\Phi$) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which the information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost of exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has previously been shown that if a measure of $\Phi$ satisfies a mathematical property, submodularity, the MIP can be found in polynomial time by an optimization algorithm. However, although the first version of $\Phi$ is submodular, the latest versions are not.
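The exponential cost of the exhaustive MIP search can be seen directly in code. The sketch below only illustrates the enumeration: phi is a made-up placeholder standing in for any concrete integrated-information measure, and the number of candidate bipartitions, 2^(n-1) - 1, doubles with each added element; the submodularity result described above replaces this brute-force loop with a polynomial-time minimization.

```python
from itertools import combinations

def bipartitions(elements):
    """Yield every bipartition (A, B) of the elements; there are 2**(n-1) - 1 of them,
    which is the source of the exponential cost of exhaustive MIP search."""
    elements = list(elements)
    first, rest = elements[0], elements[1:]
    for r in range(len(rest) + 1):
        for subset in combinations(rest, r):
            b = tuple(e for e in rest if e not in subset)
            if b:
                yield (first, *subset), b

def find_mip(elements, phi):
    """Exhaustive Minimum Information Partition search: the bipartition that
    minimizes phi(A, B), where phi is any integrated-information measure."""
    return min(bipartitions(elements), key=lambda ab: phi(*ab))

# Toy usage with a made-up phi that merely scores partition imbalance.
toy_phi = lambda a, b: abs(len(a) - len(b))
print(find_mip(range(5), toy_phi))
```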
Most information dynamics and statistical causal analysis frameworks rely on the common intuition that causal interactions are intrinsically pairwise -- every cause variable has an associated effect variable, so that a causal arrow can be drawn between them. However, analyses that depict interdependencies as directed graphs fail to discriminate the rich variety of modes of information flow that can coexist within a system. This, in turn, creates problems for attempts to operationalise the concepts of dynamical complexity or 'integrated information'. To address this shortcoming, we combine concepts of partial information decomposition and integrated information, and obtain what we call Integrated Information Decomposition, or $\Phi$ID. We show how $\Phi$ID paves the way for more detailed analyses of interdependencies in multivariate time series, and sheds light on collective modes of information dynamics that have not been reported before. Additionally, $\Phi$ID reveals that what is typically referred to as integration is actually an aggregate of several heterogeneous phenomena. Furthermore, $\Phi$ID can be used to formulate new, tailored measures of integrated information, as well as to understand and alleviate the limitations of existing measures.
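For context on the partial-information-decomposition ingredient that $\Phi$ID builds on, the sketch below decomposes the information two source variables carry about a target into redundant, unique, and synergistic atoms using the original Williams-Beer redundancy (minimum specific information). The XOR example, variable names, and choice of redundancy function are illustrative assumptions, not the $\Phi$ID definitions, which extend this kind of decomposition to both temporal directions of a multivariate process.

```python
import numpy as np
from itertools import product

# Joint p(x1, x2, t) for the XOR example: independent fair bits, t = x1 XOR x2.
p = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def marginal(dist, keep):
    """Marginalize the joint dict onto the variable indices in `keep`."""
    out = {}
    for state, pr in dist.items():
        key = tuple(state[i] for i in keep)
        out[key] = out.get(key, 0.0) + pr
    return out

def mi(dist, xs, ts):
    """Mutual information I(X; T) in bits, with X and T given as index tuples."""
    pxt, px, pt = marginal(dist, xs + ts), marginal(dist, xs), marginal(dist, ts)
    return sum(pr * np.log2(pr / (px[k[:len(xs)]] * pt[k[len(xs):]]))
               for k, pr in pxt.items() if pr > 0)

def redundancy(dist, ts, sources):
    """Williams-Beer I_min: expected minimum, over sources, of the specific information."""
    pt, total = marginal(dist, ts), 0.0
    for t, p_t in pt.items():
        specifics = []
        for xs in sources:
            pxt, px = marginal(dist, xs + ts), marginal(dist, xs)
            specifics.append(sum((pr / p_t) * np.log2(pr / (px[k[:len(xs)]] * p_t))
                                 for k, pr in pxt.items() if k[len(xs):] == t and pr > 0))
        total += p_t * min(specifics)
    return total

red = redundancy(p, ts=(2,), sources=[(0,), (1,)])
u1 = mi(p, (0,), (2,)) - red
u2 = mi(p, (1,), (2,)) - red
syn = mi(p, (0, 1), (2,)) - red - u1 - u2
print(f"redundant={red:.3f}  unique={u1:.3f},{u2:.3f}  synergistic={syn:.3f}")  # XOR: 0, 0, 0, 1
```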
Integrated Information is a quantitative measure from information theory of how tightly all parts of a system are interconnected in terms of information exchange. In this study we show that astrocytes, which play an important role in the regulation of information transmission between neurons, may contribute to the generation of positive Integrated Information in neuronal ensembles. Analytically and numerically we show that the presence of an astrocyte may be essential for this information attribute in neuro-astrocytic ensembles. Moreover, the proposed spiking-bursting mechanism of generating positive Integrated Information is shown to be generic, not limited to neuroglial networks, and is given a complete analytic description.
Finite-sized populations of spiking elements are fundamental to brain function, and are also used in many areas of physics. Here we present a theory of the dynamics of finite-sized populations of spiking units, based on a quasi-renewal description of neurons with adaptation. We derive an integral equation with colored noise that governs the stochastic dynamics of the population activity in response to time-dependent stimulation, and calculate the spectral density in the asynchronous state. We show that systems of coupled populations with adaptation can generate a frequency band in which sensory information is preferentially encoded. The theory is applicable to fully as well as randomly connected networks, and to leaky integrate-and-fire as well as to generalized spiking neurons with adaptation on multiple time scales.
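As a brute-force counterpart to the theory summarized above (omitting adaptation and coupling, which are central to the cited results), the following sketch simulates a finite population of noisy leaky integrate-and-fire neurons in the asynchronous state and estimates the spectral density of the population activity; all parameter values are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, T = 500, 1e-4, 10.0              # neurons, time step [s], duration [s] (assumed values)
tau_m, v_th, v_reset = 0.02, 1.0, 0.0   # membrane time constant, threshold, reset
mu, sigma = 1.2, 0.3                    # suprathreshold drive and noise strength

steps = int(T / dt)
v = rng.random(N) * v_th                # random initial membrane potentials
activity = np.empty(steps)
for t in range(steps):
    v += dt / tau_m * (mu - v) + sigma * np.sqrt(dt / tau_m) * rng.standard_normal(N)
    spiked = v >= v_th
    v[spiked] = v_reset
    activity[t] = spiked.sum() / (N * dt)   # population activity A(t) in Hz

# Periodogram estimate of the spectral density of A(t) in the asynchronous state.
freqs = np.fft.rfftfreq(steps, dt)
power = np.abs(np.fft.rfft(activity - activity.mean())) ** 2 * dt / steps
peak = freqs[1 + power[1:].argmax()]
print(f"mean rate {activity.mean():.1f} Hz, spectral peak near {peak:.1f} Hz")
```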
Carlotta Langer, Nihat Ay, 2021
The Integrated Information Theory provides a quantitative approach to consciousness and can be applied to neural networks. An embodied agent controlled by such a network both influences and is influenced by its environment. This involves, on the one hand, morphological computation within goal-directed action and, on the other hand, integrated information within the controller, the agent's brain. In this article, we combine different methods in order to examine the information flows among and within the body, the brain and the environment of an agent. This allows us to relate various information flows to each other. We test this framework in a simple experimental setup. There, we calculate the optimal policy for goal-directed behavior based on the planning-as-inference method, in which the information-geometric em-algorithm is used to optimize the likelihood of the goal. Morphological computation and integrated information are then calculated with respect to the optimal policies. Comparing the dynamics of these measures under changing morphological circumstances highlights the antagonistic relationship between these two concepts: the more morphological computation is involved, the less information integration within the brain is required. In order to determine the influence of the brain on the behavior of the agent, it is necessary to additionally measure the information flow to and from the brain.
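The planning-as-inference step can be made concrete in a deliberately tiny setting. The sketch below is a generic EM treatment of a one-step decision problem, not the information-geometric em-algorithm or the embodied setup of the article: the goal is treated as an observed variable, the action as latent, and each EM sweep re-weights the policy so that the likelihood of reaching the goal does not decrease.

```python
import numpy as np

# Hypothetical one-step example: p(goal | action) is assumed known; the policy
# pi(action) is the parameter optimized by EM so that the goal becomes likely.
p_goal_given_a = np.array([0.1, 0.4, 0.8])  # assumed success probability per action
pi = np.full(3, 1 / 3)                      # start from the uniform policy

for sweep in range(30):
    q = pi * p_goal_given_a      # E-step: unnormalized posterior over the latent action,
    q /= q.sum()                 #         given that the goal is observed as reached
    pi = q                       # M-step: the maximizing policy equals the posterior

print("optimized policy:", np.round(pi, 3))
print("goal likelihood :", float(pi @ p_goal_given_a))
```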