
Information Flow in Computational Systems

Added by Praveen Venkatesh
Publication date: 2019
Language: English





We develop a theoretical framework for defining and identifying flows of information in computational systems. Here, a computational system is assumed to be a directed graph, with clocked nodes that send transmissions to each other along the edges of the graph at discrete points in time. We are interested in a definition that captures the dynamic flow of information about a specific message, and which guarantees an unbroken information path between appropriately defined inputs and outputs in the directed graph. Prior measures, including those based on Granger Causality and Directed Information, fail to provide clear assumptions and guarantees about when they correctly reflect information flow about a message. We take a systematic approach---iterating through candidate definitions and counterexamples---to arrive at a definition for information flow that is based on conditional mutual information, and which satisfies desirable properties, including the existence of information paths. Finally, we describe how information flow might be detected in a noiseless setting, and provide an algorithm to identify information paths on the time-unrolled graph of a computational system.
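
For a concrete feel for the detection step, here is a minimal sketch in Python. It is an assumption-laden stand-in, not the paper's algorithm: it flags each edge of the time-unrolled graph whose transmission has positive empirical (unconditional) mutual information about the message, whereas the paper's definition rests on conditional mutual information, and it then breadth-first-searches for an unbroken input-to-output path. The data layout (one transmitted symbol per trial per edge and time) and the plug-in estimator are illustrative choices.

from collections import defaultdict, deque
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy, px, py = defaultdict(int), defaultdict(int), defaultdict(int)
    for x, y in zip(xs, ys):
        pxy[(x, y)] += 1
        px[x] += 1
        py[y] += 1
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def find_information_path(edges, transmissions, message, source, sink, T):
    """Search the time-unrolled graph for an unbroken path of
    information-carrying edges from (source, 0) to (sink, T).

    edges         -- iterable of (u, v) pairs in the base graph
    transmissions -- dict: (u, v, t) -> list of symbols, one per trial
    message       -- list of message values, one per trial
    """
    adj = defaultdict(list)
    for u, v in edges:
        for t in range(T):
            # Flag the edge at time t if its transmission carries any
            # empirical information about the message.
            if mutual_information(transmissions[(u, v, t)], message) > 1e-9:
                adj[(u, t)].append((v, t + 1))
    start, goal = (source, 0), (sink, T)
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:  # reconstruct the path back to the start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # no unbroken information path found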




Related Research

Biological and artificial neural systems are composed of many local processors, and their capabilities depend upon the transfer function that relates each local processor's outputs to its inputs. This paper uses a recent advance in the foundations of information theory to study the properties of local processors that use contextual input to amplify or attenuate transmission of information about their driving inputs. This advance enables the information transmitted by processors with two distinct inputs to be decomposed into the components unique to each input, the component shared between the two inputs, and the component that depends on both though it is present in neither, i.e., synergy. The decompositions that we report here show that contextual modulation has information-processing properties that contrast with those of all four simple arithmetic operators, that it can take various forms, and that the form used in our previous studies of artificial neural nets composed of local processors with both driving and contextual inputs is particularly well suited to providing the distinctive capabilities of contextual modulation under a wide range of conditions. We argue that the decompositions reported here could be compared with those obtained from empirical neurobiological and psychophysical data under conditions thought to reflect contextual modulation; that would then shed new light on the underlying processes involved. Finally, we suggest that such decompositions could aid the design of context-sensitive machine learning algorithms.
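
To make the decomposition concrete, the sketch below computes one classic two-input decomposition, the Williams-Beer I_min measure, on a toy joint distribution. This is a standard choice for illustration, not necessarily the decomposition used in the paper; XOR serves as the canonical purely synergistic example.

from collections import defaultdict
from math import log2

def marginal(p, keep):
    """Marginalise a joint dict {(x1, x2, y): prob} onto the given indices."""
    m = defaultdict(float)
    for k, v in p.items():
        m[tuple(k[i] for i in keep)] += v
    return m

def mi(p, xi):
    """I(X_xi ; Y), where Y sits at index 2 of each key."""
    pxy, px, py = marginal(p, (xi, 2)), marginal(p, (xi,)), marginal(p, (2,))
    return sum(v * log2(v / (px[(x,)] * py[(y,)]))
               for (x, y), v in pxy.items() if v > 0)

def joint_mi(p):
    """I(X1, X2 ; Y)."""
    px12, py = marginal(p, (0, 1)), marginal(p, (2,))
    return sum(v * log2(v / (px12[(x1, x2)] * py[(y,)]))
               for (x1, x2, y), v in p.items() if v > 0)

def redundancy(p):
    """Williams-Beer I_min: expected minimum specific information over inputs."""
    total = 0.0
    for (y,), pyv in marginal(p, (2,)).items():
        specs = []
        for xi in (0, 1):
            pxy, px = marginal(p, (xi, 2)), marginal(p, (xi,))
            specs.append(sum((v / pyv) * log2((v / px[(x,)]) / pyv)
                             for (x, yy), v in pxy.items()
                             if yy == y and v > 0))
        total += pyv * min(specs)
    return total

# Example: the XOR output is purely synergistic in its two inputs.
p_xor = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}
red = redundancy(p_xor)
u1, u2 = mi(p_xor, 0) - red, mi(p_xor, 1) - red
syn = joint_mi(p_xor) - u1 - u2 - red
print(red, u1, u2, syn)  # 0.0 0.0 0.0 1.0 -- all the information is synergy
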
This article introduces a model-agnostic approach to the study of statistical synergy, a form of emergence in which patterns at large scales are not traceable from lower scales. Our framework leverages various multivariate extensions of Shannon's mutual information, and introduces the O-information as a metric capable of characterising synergy- and redundancy-dominated systems. We develop key analytical properties of the O-information, and study how it relates to other metrics of high-order interactions from the statistical mechanics and neuroscience literature. Finally, as a proof of concept, we use the proposed framework to explore the relevance of statistical synergy in Baroque music scores.
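
The O-information has a closed form in terms of entropies: for n variables, O = (n - 2) H(X1,...,Xn) + sum over j of [H(Xj) - H(all variables except Xj)], with positive values indicating redundancy-dominated and negative values synergy-dominated systems. Below is a minimal plug-in sketch for discrete samples (illustrative, not the authors' code).

from collections import Counter
from math import log2

def entropy(rows):
    """Plug-in Shannon entropy (bits) of joint samples given as tuples."""
    n = len(rows)
    return -sum((c / n) * log2(c / n) for c in Counter(rows).values())

def o_information(samples):
    """samples: list of equal-length tuples, one variable per position."""
    n_vars = len(samples[0])
    omega = (n_vars - 2) * entropy(samples)
    for j in range(n_vars):
        h_j = entropy([(s[j],) for s in samples])
        h_rest = entropy([s[:j] + s[j + 1:] for s in samples])
        omega += h_j - h_rest
    return omega

# XOR triple (X1, X2, X1 ^ X2) is synergistic: O-information is negative.
xor = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
# A fully redundant triple (X, X, X): O-information is positive.
copy = [(a, a, a) for a in (0, 1)]
print(o_information(xor), o_information(copy))  # -1.0 and +1.0 bits
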
A new computational private information retrieval (PIR) scheme based on random linear codes is presented. A matrix of messages from a McEliece scheme is used to query the server with carefully chosen errors. The server responds with the sum of the scalar multiples of the rows of the query matrix and the files. The user recovers the desired file by erasure decoding the response. Contrary to code-based cryptographic systems, the scheme presented here enables the use of truly random codes, not only codes disguised as such. Further, we show the relation to the so-called error subspace search problem and the quotient error search problem, which we assume to be difficult, and show that the scheme is secure against attacks based on solving these problems.
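
The final recovery step, erasure decoding of a linear code, is easy to show in isolation. The sketch below is a toy illustration of that step only, not of the full PIR protocol with its McEliece-style query matrix and chosen errors: it encodes a message with a small binary generator matrix, erases two symbols, and recovers the message by Gaussian elimination over GF(2) on the surviving columns.

def gf2_solve(A, b):
    """Solve x . A = b over GF(2) for x, where A has k rows of length n."""
    k, n = len(A), len(A[0])
    # Transpose into n equations over the k unknowns, augmented with b.
    rows = [[A[i][j] for i in range(k)] + [b[j]] for j in range(n)]
    r = 0
    for col in range(k):
        piv = next((i for i in range(r, n) if rows[i][col]), None)
        if piv is None:
            return None  # too many erasures: surviving columns lack full rank
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(n):
            if i != r and rows[i][col]:
                rows[i] = [a ^ p for a, p in zip(rows[i], rows[r])]
        r += 1
    if any(row[k] for row in rows[k:]):
        return None  # inconsistent system
    return [rows[i][k] for i in range(k)]

# Generator matrix of a small [8, 4] binary code and a message to encode.
G = [[1, 0, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 0, 1, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 1, 1, 0]]
msg = [1, 0, 1, 1]
code = [sum(msg[i] & G[i][j] for i in range(4)) % 2 for j in range(8)]
erased = {2, 5}                              # positions lost in transit
keep = [j for j in range(8) if j not in erased]
G_s = [[row[j] for j in keep] for row in G]  # surviving columns of G
c_s = [code[j] for j in keep]                # surviving code symbols
print(gf2_solve(G_s, c_s))                   # -> [1, 0, 1, 1]
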
This paper introduces several fundamental concepts in information theory from the perspective of their origins in engineering. Understanding such concepts is important in neuroscience for two reasons. First, simply applying formulae from information theory without understanding the assumptions behind their definitions can lead to erroneous results and conclusions. Second, this century will see a convergence of information theory and neuroscience; information theory will expand its foundations to incorporate biological processes more comprehensively, thereby helping to reveal how neuronal networks achieve their remarkable information-processing abilities.
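
One concrete instance of the pitfall warned about here is finite-sample bias: the plug-in estimator of mutual information is biased upward, so two completely independent discrete signals can appear to share information when samples are scarce. A small self-contained demonstration (illustrative numbers only):

import random
from collections import defaultdict
from math import log2

def plugin_mi(xs, ys):
    """Plug-in (maximum-likelihood) estimate of I(X;Y) in bits."""
    n = len(xs)
    pxy, px, py = defaultdict(int), defaultdict(int), defaultdict(int)
    for x, y in zip(xs, ys):
        pxy[(x, y)] += 1
        px[x] += 1
        py[y] += 1
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

random.seed(0)
for n in (20, 200, 20000):
    xs = [random.randint(0, 7) for _ in range(n)]  # 8 stimulus bins
    ys = [random.randint(0, 7) for _ in range(n)]  # independent responses
    print(n, round(plugin_mi(xs, ys), 3))
# True MI is 0 bits, yet small samples report a clearly positive value;
# the bias scales roughly like (|X|-1)(|Y|-1) / (2 n ln 2).
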
Information theory and the framework of information dynamics have been used to provide tools to characterise complex systems. In particular, we are interested in quantifying information storage, information modification and information transfer as characteristic elements of computation. Although these quantities are defined for autonomous dynamical systems, information dynamics can also help to reach a holistic understanding of input-driven systems such as neural networks. In this case, we do not distinguish between the system itself and the effects the input has on the system. This may be desired in some cases, but it changes the questions we are able to answer, and is consequently an important consideration, for example, for biological systems which perform non-trivial computations and also retain a short-term memory of past inputs. Many other real-world systems, like cortical networks, are also heavily input-driven, and applying tools designed for autonomous dynamical systems may not lead to intuitively interpretable results. The aim of our work is to extend the measures used in the information dynamics framework to input-driven systems. Using the proposed input-corrected information storage, we hope to better quantify system behaviour, which will be important for heavily input-driven systems like artificial neural networks, in order to abstract from specific benchmarks, and for brain networks, where intervention is difficult and individual components cannot be tested in isolation or with arbitrary input data.
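
As a baseline for the quantities discussed above, the sketch below estimates active information storage, I(past k states; next state), by plug-in counting, and shows one plausible way to account for the input by conditioning on it. The conditioning used here is an illustrative assumption, not necessarily the paper's proposed input-corrected information storage.

import random
from collections import defaultdict
from math import log2

def plugin_cmi(xs, ys, zs):
    """Plug-in conditional mutual information I(X;Y|Z) in bits."""
    n = len(xs)
    cxyz, cxz, cyz, cz = (defaultdict(int) for _ in range(4))
    for x, y, z in zip(xs, ys, zs):
        cxyz[(x, y, z)] += 1
        cxz[(x, z)] += 1
        cyz[(y, z)] += 1
        cz[z] += 1
    return sum((c / n) * log2(c * cz[z] / (cxz[(x, z)] * cyz[(y, z)]))
               for (x, y, z), c in cxyz.items())

def active_info_storage(series, k=1, inputs=None):
    """I(past k states; next state), optionally conditioned on the input."""
    past = [tuple(series[t - k:t]) for t in range(k, len(series))]
    nxt = series[k:]
    cond = [0] * len(nxt) if inputs is None else inputs[k:]
    return plugin_cmi(past, nxt, cond)

# A unit that flips its state whenever the input is 1: the naive measure
# reports no storage, but conditioned on the input the past fully
# predicts the next state.
random.seed(0)
u = [random.randint(0, 1) for _ in range(10000)]
x = [0]
for t in range(1, 10000):
    x.append(x[t - 1] ^ u[t])
print(active_info_storage(x))            # ~0.0 bits: input masks the storage
print(active_info_storage(x, inputs=u))  # ~1.0 bit with the input held fixed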
