We develop a theoretical framework for defining and identifying flows of information in computational systems. Here, a computational system is assumed to be a directed graph, with clocked nodes that send transmissions to each other along the edges of the graph at discrete points in time. We are interested in a definition that captures the dynamic flow of information about a specific message, and which guarantees an unbroken information path between appropriately defined inputs and outputs in the directed graph. Prior measures, including those based on Granger Causality and Directed Information, fail to provide clear assumptions and guarantees about when they correctly reflect information flow about a message. We take a systematic approach---iterating through candidate definitions and counterexamples---to arrive at a definition for information flow that is based on conditional mutual information, and which satisfies desirable properties, including the existence of information paths. Finally, we describe how information flow might be detected in a noiseless setting, and provide an algorithm to identify information paths on the time-unrolled graph of a computational system.
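Since the proposed definition of information flow rests on conditional mutual information, a minimal plug-in estimator for discrete variables may help fix ideas. This is an illustrative sketch only: the function name and the frequency-count approach are assumptions of this example, not the paper's algorithm.

```python
from collections import Counter
from math import log2

def conditional_mutual_information(m, x, y):
    """Plug-in estimate of I(M; X | Y) from parallel discrete samples.

    A toy frequency-count sketch; serious estimators need bias
    correction and far more samples than this illustrates.
    """
    n = len(m)
    p_mxy = Counter(zip(m, x, y))
    p_my = Counter(zip(m, y))
    p_xy = Counter(zip(x, y))
    p_y = Counter(y)
    cmi = 0.0
    for (mi, xi, yi), c in p_mxy.items():
        # I(M; X | Y) = sum p(m,x,y) log[ p(m,x,y) p(y) / (p(m,y) p(x,y)) ]
        cmi += (c / n) * log2((c * p_y[yi])
                              / (p_my[(mi, yi)] * p_xy[(xi, yi)]))
    return cmi

# Toy check: X = M XOR Y, so I(M; X | Y) should approach 1 bit.
import random
random.seed(0)
M = [random.randint(0, 1) for _ in range(10000)]
Y = [random.randint(0, 1) for _ in range(10000)]
X = [mi ^ yi for mi, yi in zip(M, Y)]
print(conditional_mutual_information(M, X, Y))  # ~1.0
```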
Biological and artificial neural systems are composed of many local processors, and their capabilities depend upon the transfer function that relates each local processor's outputs to its inputs. This paper uses a recent advance in the foundations of …
This article introduces a model-agnostic approach to studying statistical synergy, a form of emergence in which patterns at large scales cannot be traced back to lower scales. Our framework leverages various multivariate extensions of Shannon's mutual information.
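One widely used multivariate extension of mutual information in this setting is the interaction information (co-information); whether this particular framework adopts it is an assumption of the illustration below, which walks through the standard XOR example.

```latex
% Interaction information under one common sign convention:
\[
  I(X_1; X_2; Y) \;=\; I(X_1; X_2) \;-\; I(X_1; X_2 \mid Y),
\]
% with negative values indicating synergy. For the canonical XOR example,
% $Y = X_1 \oplus X_2$ with $X_1, X_2$ independent fair bits:
\[
  I(X_1; Y) = I(X_2; Y) = 0, \qquad I(X_1, X_2; Y) = 1\ \text{bit},
\]
% so $Y$ is predictable from the pair jointly but from neither part alone.
```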
A new computational private information retrieval (PIR) scheme based on random linear codes is presented. A matrix of messages from a McEliece scheme is used to query the server with carefully chosen errors. The server responds with the sum of the sc…
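The abstract breaks off mid-sentence, but the response step shared by linear-code-based PIR schemes is easy to picture: the server combines database rows as dictated by the client's query vector. The sketch below shows only that generic step over GF(2); the McEliece-based query construction with deliberate errors, which carries the scheme's privacy, is specific to the paper and not reproduced here.

```python
import numpy as np

def server_respond(database, query):
    """Generic linear-PIR response step over GF(2).

    database: (num_messages, message_len) bit matrix.
    query:    (num_messages,) bit vector from the client.
    Returns the GF(2) sum of the messages selected by the query,
    i.e. query @ database mod 2. The privacy of any real scheme
    lives entirely in how the query vector is constructed, which
    this sketch deliberately does not model.
    """
    return (query @ database) % 2

# Toy usage: 4 messages of 8 bits each; the query selects rows 0 and 2.
rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(4, 8))
q = np.array([1, 0, 1, 0])
print(server_respond(db, q))  # equals (db[0] + db[2]) mod 2
```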
This paper introduces several fundamental concepts in information theory from the perspective of their origins in engineering. Understanding such concepts is important in neuroscience for two reasons. Simply applying formulae from information theory …
Information theory and the framework of information dynamics have been used to provide tools to characterise complex systems. In particular, we are interested in quantifying information storage, information modification and information transfer as ch…
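Of the three quantities named, information transfer is most commonly operationalised as transfer entropy. The sketch below is a plug-in estimator for binary series with history length one, a standard construction from the information-dynamics literature rather than code from this particular work.

```python
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Plug-in transfer entropy T(source -> target) with history length 1.

    T = I(target_{t+1}; source_t | target_t), estimated by frequency
    counts. A toy sketch; serious use needs bias correction and
    longer histories.
    """
    triples = list(zip(target[1:], source[:-1], target[:-1]))
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((y, z) for _, y, z in triples)
    p_xz = Counter((x, z) for x, _, z in triples)
    p_z = Counter(z for _, _, z in triples)
    te = 0.0
    for (x, y, z), c in p_xyz.items():
        # sum p(x,y,z) log[ p(x,y,z) p(z) / (p(y,z) p(x,z)) ]
        te += (c / n) * log2((c * p_z[z]) / (p_yz[(y, z)] * p_xz[(x, z)]))
    return te

# Toy usage: target copies the source with one step of lag.
import random
random.seed(1)
src = [random.randint(0, 1) for _ in range(10000)]
tgt = [0] + src[:-1]
print(transfer_entropy(src, tgt))  # ~1 bit
```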