
Can Transfer Entropy Infer Information Flow in Neuronal Circuits for Cognitive Processing?

Posted by Christoph Adami
Publication date: 2019
Paper language: English





To infer information flow in any network of agents, it is important first and foremost to establish causal temporal relations between the nodes. Practical and automated methods that can infer causality are difficult to find and remain the subject of ongoing research. While Shannon information only detects correlation, there are several information-theoretic notions of directed information that have successfully detected causality in some systems, in particular in the neuroscience community. However, recent work has shown that some directed information measures can misjudge the extent of causal relations, or even fail to identify existing cause-effect relations between components of a system, especially if neurons contribute in a cryptographic manner (for example, via XOR-like logic) to the activity of the effector neuron. Here, we test how often cryptographic logic emerges in an evolutionary process that generates artificial neural circuits for two fundamental cognitive tasks: motion detection and sound localization. We also test whether information flow can be inferred, via the transfer entropy concept, from activity time series recorded from behaving digital brains, when compared against a ground-truth model of causal influence constructed from connectivity and circuit logic. Our results suggest that transfer entropy will sometimes fail to infer causality when it exists, and will sometimes indicate a causal connection where none exists. However, the extent of incorrect inference strongly depends on the cognitive task considered. These results emphasize the importance of understanding the fundamental logic processes that contribute to information flow in cognitive processing, and of quantifying their relevance in any given nervous system.
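To make the failure mode concrete, here is a minimal sketch (an assumed toy setup, not the paper's evolved digital-brain circuits) of a plug-in transfer entropy estimator, TE(X -> Y) = I(Y_{t+1}; X_t | Y_t) with history length 1, applied to an effector neuron that computes the XOR of two input neurons, i.e., the "cryptographic" case referred to above. All names and parameters are illustrative.

```python
# Minimal sketch: plug-in (histogram) transfer entropy, history length 1,
# applied to a toy "cryptographic" circuit. Not the paper's code; names
# and parameters are illustrative assumptions.
import numpy as np

def transfer_entropy(x, y, base=2.0):
    """Estimate TE_{X->Y} = I(Y_{t+1}; X_t | Y_t) from discrete series."""
    x_t, y_t, y_next = x[:-1], y[:-1], y[1:]
    te = 0.0
    for a in np.unique(y_next):
        for b in np.unique(y_t):
            for c in np.unique(x_t):
                p_abc = np.mean((y_next == a) & (y_t == b) & (x_t == c))
                if p_abc == 0.0:
                    continue
                p_bc = np.mean((y_t == b) & (x_t == c))
                p_ab = np.mean((y_next == a) & (y_t == b))
                p_b = np.mean(y_t == b)
                te += p_abc * np.log(p_abc * p_b / (p_bc * p_ab))
    return te / np.log(base)

rng = np.random.default_rng(0)
n1 = rng.integers(0, 2, 100_000)     # input neuron 1
n2 = rng.integers(0, 2, 100_000)     # input neuron 2
effector = np.roll(n1 ^ n2, 1)       # effector fires the XOR one step later

print(transfer_entropy(n1, effector))  # ~0: causal influence missed
print(transfer_entropy(n2, effector))  # ~0: causal influence missed
```

Both inputs are genuine causes of the effector, yet the pairwise transfer entropy from either one is approximately zero, because the XOR output is independent of each input taken alone. A multivariate variant that conditions on the remaining input would recover the dependence here, which is why comparing estimates against a ground-truth causal model matters.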




Read also

Noise is an inherent part of neuronal dynamics, and thus of the brain. It can be observed in neuronal activity at different spatiotemporal scales, including in neuronal membrane potentials, local field potentials, electroencephalography, and magnetoencephalography. A central research topic in contemporary neuroscience is to elucidate the functional role of noise in neuronal information processing. Experimental studies have shown that a suitable level of noise may enhance the detection of weak neuronal signals by means of stochastic resonance. In response, theoretical research, based on the theory of stochastic processes, nonlinear dynamics, and statistical physics, has made great strides in elucidating the mechanism and the many benefits of stochastic resonance in neuronal systems. In this perspective, we review recent research dedicated to neuronal stochastic resonance in biophysical mathematical models. We also explore the regulation of neuronal stochastic resonance, and we outline important open questions and directions for future research. A deeper understanding of neuronal stochastic resonance may afford us new insights into the highly impressive information processing in the brain.
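As a toy illustration of the resonance effect described above (a minimal sketch with illustrative parameters, not one of the biophysical models reviewed), consider a threshold unit driven by a subthreshold sinusoid plus Gaussian noise:

```python
# Minimal sketch of stochastic resonance in a threshold unit: a
# subthreshold sinusoid never crosses threshold on its own, but an
# intermediate noise level makes threshold crossings track the signal.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 5000)
signal = 0.5 * np.sin(2.0 * np.pi * t)   # peak 0.5 vs. threshold 1.0: subthreshold
threshold = 1.0

for sigma in (0.2, 0.6, 3.0):            # weak, intermediate, strong noise
    crossings = (signal + sigma * rng.standard_normal(t.size)) > threshold
    if crossings.any():
        # Signal-output correlation as a crude signal-detection measure
        corr = np.corrcoef(crossings.astype(float), signal)[0, 1]
    else:
        corr = 0.0                       # too little noise: the unit stays silent
    print(f"sigma={sigma}: output/signal correlation = {corr:.3f}")
```

With too little noise the unit rarely fires, and with too much noise its output becomes signal-independent; the output-signal correlation peaks at an intermediate noise level, which is the signature of stochastic resonance.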
We develop a theoretical framework for defining and identifying flows of information in computational systems. Here, a computational system is assumed to be a directed graph, with clocked nodes that send transmissions to each other along the edges of the graph at discrete points in time. We are interested in a definition that captures the dynamic flow of information about a specific message, and which guarantees an unbroken information path between appropriately defined inputs and outputs in the directed graph. Prior measures, including those based on Granger Causality and Directed Information, fail to provide clear assumptions and guarantees about when they correctly reflect information flow about a message. We take a systematic approach, iterating through candidate definitions and counterexamples, to arrive at a definition for information flow that is based on conditional mutual information, and which satisfies desirable properties, including the existence of information paths. Finally, we describe how information flow might be detected in a noiseless setting, and provide an algorithm to identify information paths on the time-unrolled graph of a computational system.
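Since the definition rests on conditional mutual information, a small plug-in estimator makes the key point tangible. The sketch below is an assumed illustration, not the paper's algorithm; the XOR example shows a dependence that is invisible marginally but carries a full bit once the second input is conditioned on, which is why unconditioned measures can miss an information path.

```python
# Minimal sketch: plug-in estimator of conditional mutual information
# I(X; Y | Z) for discrete samples. Names are illustrative assumptions.
import numpy as np
from collections import Counter

def cond_mutual_info(x, y, z, base=2.0):
    """I(X;Y|Z) = sum p(x,y,z) log[ p(z) p(x,y,z) / (p(x,z) p(y,z)) ]."""
    n = len(x)
    pxyz = Counter(zip(x, y, z))
    pxz = Counter(zip(x, z))
    pyz = Counter(zip(y, z))
    pz = Counter(z)
    cmi = 0.0
    for (a, b, c), k in pxyz.items():
        cmi += (k / n) * np.log((k / n) * (pz[c] / n)
                                / ((pxz[(a, c)] / n) * (pyz[(b, c)] / n)))
    return cmi / np.log(base)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 100_000)
y = rng.integers(0, 2, 100_000)
z = x ^ y                      # a node that XORs two independent inputs

# Conditioning on a constant reduces CMI to plain mutual information:
print(cond_mutual_info(x, z, np.zeros_like(x)))  # ~0: marginally invisible
print(cond_mutual_info(x, z, y))                 # ~1.0 bit once Y is conditioned on
```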
A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
Biological and artificial neural systems are composed of many local processors, and their capabilities depend upon the transfer function that relates each local processor's outputs to its inputs. This paper uses a recent advance in the foundations of information theory to study the properties of local processors that use contextual input to amplify or attenuate transmission of information about their driving inputs. This advance enables the information transmitted by processors with two distinct inputs to be decomposed into the components unique to each input, the component shared between the two inputs, and the component that depends on both though it is in neither, i.e., synergy. The decompositions that we report here show that contextual modulation has information processing properties that contrast with those of all four simple arithmetic operators, that it can take various forms, and that the form used in our previous studies of artificial neural nets composed of local processors with both driving and contextual inputs is particularly well-suited to provide the distinctive capabilities of contextual modulation under a wide range of conditions. We argue that the decompositions reported here could be compared with those obtained from empirical neurobiological and psychophysical data under conditions thought to reflect contextual modulation. That would then shed new light on the underlying processes involved. Finally, we suggest that such decompositions could aid the design of context-sensitive machine learning algorithms.
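The following sketch computes such a unique/shared/synergy decomposition for a simple two-input gate, using the Williams-Beer I_min redundancy measure; this is one of several proposed measures, and the decomposition used in the paper may differ. The joint-distribution format and all names are illustrative.

```python
# Minimal sketch of a partial information decomposition for a two-input
# gate using the Williams-Beer I_min redundancy (one of several proposed
# measures). The joint distribution is a dict {(x1, x2, y): probability}.
import numpy as np
from collections import defaultdict

def marginal(p, idx):
    """Marginal distribution over the coordinates listed in idx."""
    m = defaultdict(float)
    for k, v in p.items():
        m[tuple(k[i] for i in idx)] += v
    return m

def mutual_info(p, src):
    """I(Y; X_src) in bits, with the output Y stored at coordinate 2."""
    p_sy = marginal(p, src + [2])
    p_s, p_y = marginal(p, src), marginal(p, [2])
    return sum(v * np.log2(v / (p_s[k[:-1]] * p_y[k[-1:]]))
               for k, v in p_sy.items() if v > 0)

def redundancy(p):
    """Williams-Beer I_min: expected minimum specific information."""
    red = 0.0
    for (y,), py in marginal(p, [2]).items():
        spec = []
        for i in (0, 1):            # specific information of each source about y
            p_xy, p_x = marginal(p, [i, 2]), marginal(p, [i])
            spec.append(sum((p_xy[(x, y)] / py)
                            * np.log2(p_xy[(x, y)] / (p_x[(x,)] * py))
                            for (x,) in p_x if p_xy[(x, y)] > 0))
        red += py * min(spec)
    return red

# XOR gate with uniform inputs: all transmitted information is synergy.
p_xor = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}
i1, i2 = mutual_info(p_xor, [0]), mutual_info(p_xor, [1])
i12, shared = mutual_info(p_xor, [0, 1]), redundancy(p_xor)
print(f"unique1={i1 - shared:.2f}  unique2={i2 - shared:.2f}  "
      f"shared={shared:.2f}  synergy={i12 - i1 - i2 + shared:.2f}")
```

For the XOR gate every transmitted bit is synergistic; substituting an AND gate instead yields, under this measure, a mixture of shared and synergistic information with no unique components.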
Swarm dynamics is the study of collections of agents that interact with one another without central control. In natural systems, insects, birds, fish, and large mammals function in larger units to increase the overall fitness of the individuals. Their behavior is coordinated through local interactions to enhance mate selection, predator detection, migratory route identification and so forth [Andersson and Wallander 2003; Buhl et al. 2006; Nagy et al. 2010; Partridge 1982; Sumpter et al. 2008]. In artificial systems, swarms of autonomous agents can augment human activities such as search and rescue, and environmental monitoring by covering large areas with multiple nodes [Alami et al. 2007; Caruso et al. 2008; Ogren et al. 2004; Paley et al. 2007; Sibley et al. 2002]. In this paper, we explore the interplay between swarm dynamics, covert leadership and theoretical information transfer. A leader is a member of the swarm that acts upon information in addition to what is provided by local interactions. Depending upon the leadership model, leaders can use their external information either all the time or in response to local conditions [Couzin et al. 2005; Sun et al. 2013]. A covert leader is a leader that is treated no differently than others in the swarm, so leaders and followers participate equally in whatever interaction model is used [Rossi et al. 2007]. In this study, we use theoretical information transfer as a means of analyzing swarm interactions to explore whether it is possible to distinguish between followers and leaders based on interactions within the swarm. We find that covert leaders can be distinguished from followers in a swarm because they receive less transfer entropy than followers.
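As a toy illustration of the detection principle (an illustrative construction, not the swarm models used in the cited studies), let a follower's next heading copy a neighbour's current heading most of the time, while a covert leader mostly follows private information; less transfer entropy then flows into the leader:

```python
# Toy illustration, not the cited swarm models: a follower copies a
# neighbour's previous (binarized) heading 90% of the time, a covert
# leader only 10% of the time (steering by private information, modeled
# here as random bits), so the leader receives less transfer entropy.
import numpy as np
from collections import Counter

def joint_entropy(*series):
    """Joint Shannon entropy (bits) of aligned discrete series."""
    counts = np.array(list(Counter(zip(*series)).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def transfer_entropy(x, y):
    """TE_{X->Y} = H(Y'|Y) - H(Y'|Y,X), with history length 1."""
    x_t, y_t, y_next = x[:-1], y[:-1], y[1:]
    return (joint_entropy(y_next, y_t) - joint_entropy(y_t)
            - joint_entropy(y_next, y_t, x_t) + joint_entropy(y_t, x_t))

rng = np.random.default_rng(2)
n = 50_000
neighbour = rng.integers(0, 2, n)            # binarized neighbour heading
prev = np.roll(neighbour, 1)                 # neighbour's previous heading
follower = np.where(rng.random(n) < 0.9, prev, rng.integers(0, 2, n))
leader = np.where(rng.random(n) < 0.1, prev, rng.integers(0, 2, n))
print("TE into follower:", transfer_entropy(neighbour, follower))  # high
print("TE into leader:  ", transfer_entropy(neighbour, leader))    # near zero
```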