The technology of data collection and information transmission is based on various mathematical models of encoding. The term "geometry of information" refers to such models, whereas "Moufang patterns" refers to the various sophisticated symmetries that appear naturally in them. In this paper we show that the symmetries of spaces of probability distributions, endowed with their canonical Riemannian metric of information geometry, have the structure of a commutative Moufang loop. We also show that the F-manifold structure on the space of probability distributions can be described in terms of differential 3-webs and Malcev algebras. We then present a new construction of (noncommutative) Moufang loops associated to almost-symplectic structures over finite fields, and use them to construct a new class of code loops with associated quantum error-correcting codes and networks of perfect tensors.
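As a reminder of the terminology (a standard definition, not specific to this paper): a loop is a set with a binary operation, a two-sided identity, and unique left and right division; it is Moufang when any one of the equivalent Moufang identities holds. A minimal sketch in LaTeX:

```latex
% A loop $(L,\cdot)$ is Moufang if, for all $x, y, z \in L$,
\[
  (x \cdot y)\,(z \cdot x) \;=\; x \cdot \bigl((y \cdot z) \cdot x\bigr),
\]
% and it is a commutative Moufang loop if moreover $x \cdot y = y \cdot x$,
% in which case the law above is equivalent to the single identity
\[
  x^{2} \cdot (y \cdot z) \;=\; (x \cdot y) \cdot (x \cdot z).
\]
```

Associativity is not assumed, and commutative Moufang loops need not be associative, which is what makes their appearance among the symmetries of probability spaces nontrivial.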
In this article, we describe various aspects of the categorification of structures appearing in information theory. These aspects include probabilistic models of both classical and quantum physics, the emergence of F-manifolds, and motivic enrichments.
In some communication networks, such as passive RFID systems, the energy used to transfer information between a sender and a recipient can be reused for successive communication tasks. In fact, from known results in physics, any system that exchanges information via the transfer of physical resources, such as radio waves, particles, and qubits, can conceivably reuse at least part of the received resources. This paper aims to illustrate some of the new challenges that arise in the design of communication networks in which the signals exchanged by the nodes carry both information and energy. To this end, a baseline two-way communication system is considered in which two nodes communicate in an interactive fashion. In this system, a node can either send an "on" symbol (a 1), which costs one unit of energy, or an "off" symbol (a 0), which requires no energy expenditure. Upon reception of a 1, the recipient node harvests, with some probability, the energy contained in the signal and stores it for future communication tasks. Inner and outer bounds on the achievable rates are derived. Numerical results demonstrate the effectiveness of the proposed strategies and illustrate some key design insights.
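The on/off protocol is concrete enough to simulate. The following toy Monte-Carlo sketch (in Python) tracks the two batteries under the rules stated above; the alternating half-duplex schedule, the memoryless send policy, and all parameter names are assumptions made purely for illustration, since the paper derives its inner and outer bounds analytically.

```python
import random

def simulate(T=100_000, eta=0.8, p_send=0.5, battery_init=10.0, seed=0):
    """Toy Monte-Carlo of the baseline two-way system: sending a 1 costs
    one unit of energy, a 0 is free, and a received 1 is harvested with
    probability eta. Nodes alternate as sender (an assumption made here
    purely for illustration). Returns each node's empirical rate of 1s."""
    rng = random.Random(seed)
    battery = [battery_init, battery_init]
    ones = [0, 0]
    for t in range(T):
        tx, rx = t % 2, 1 - t % 2
        if rng.random() < p_send and battery[tx] >= 1:
            battery[tx] -= 1              # a 1 costs one unit of energy
            ones[tx] += 1
            if rng.random() < eta:        # harvested and stored by receiver
                battery[rx] += 1
        # a 0 costs nothing but still conveys information
    return [n / (T // 2) for n in ones]

print(simulate())   # rates collapse once the initial energy leaks away
```

Note that every transmitted 1 leaks, on average, (1 - eta) units of energy from the system, so the total number of 1s the two nodes can ever exchange is roughly bounded by the initial energy divided by (1 - eta); the coding problem is interesting precisely because information rates are coupled to this energy budget.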
Biological and artificial neural systems are composed of many local processors, and their capabilities depend upon the transfer function that relates each local processor's outputs to its inputs. This paper uses a recent advance in the foundations of information theory to study the properties of local processors that use contextual input to amplify or attenuate the transmission of information about their driving inputs. This advance enables the information transmitted by a processor with two distinct inputs to be decomposed into the components unique to each input, the component shared between the two inputs, and the component that depends on both though it is in neither, i.e., synergy. The decompositions that we report here show that contextual modulation has information-processing properties that contrast with those of all four simple arithmetic operators, that it can take various forms, and that the form used in our previous studies of artificial neural nets composed of local processors with both driving and contextual inputs is particularly well suited to providing the distinctive capabilities of contextual modulation under a wide range of conditions. We argue that the decompositions reported here could be compared with those obtained from empirical neurobiological and psychophysical data under conditions thought to reflect contextual modulation; that would shed new light on the underlying processes involved. Finally, we suggest that such decompositions could aid the design of context-sensitive machine learning algorithms.
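To make the decomposition concrete, the following sketch (in Python) computes one standard instantiation, the Williams-Beer measure based on minimum specific information, for the canonical XOR example in which the target depends on both inputs although it is in neither. The paper's own measure may differ, and the function names here are illustrative.

```python
from collections import defaultdict
from math import log2

# Joint distribution p(x1, x2, s) for the XOR example: two independent
# fair binary drivers, target s = x1 XOR x2.
p = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}

def marginal(dist, keep):
    """Marginalize a joint distribution onto the coordinates in `keep`."""
    out = defaultdict(float)
    for k, v in dist.items():
        out[tuple(k[i] for i in keep)] += v
    return out

def mutual_info(dist, a, b):
    """I(A; B) in bits, where a and b are tuples of coordinate indices."""
    pa, pb, pab = marginal(dist, a), marginal(dist, b), marginal(dist, a + b)
    return sum(v * log2(v / (pa[k[:len(a)]] * pb[k[len(a):]]))
               for k, v in pab.items() if v > 0)

def i_min(dist, s_idx, sources):
    """Williams-Beer redundancy: expected minimum specific information."""
    ps = marginal(dist, (s_idx,))
    total = 0.0
    for (s,), p_s in ps.items():
        specifics = []
        for src in sources:
            px = marginal(dist, src)
            pxs = marginal(dist, src + (s_idx,))
            # specific information I(S = s; X_src)
            spec = sum((v / p_s) * log2((v / px[k[:-1]]) / p_s)
                       for k, v in pxs.items() if k[-1] == s and v > 0)
            specifics.append(spec)
        total += p_s * min(specifics)
    return total

# coordinates: 0 -> X1, 1 -> X2, 2 -> S
I_joint = mutual_info(p, (0, 1), (2,))   # I(S; X1, X2) = 1 bit
R = i_min(p, 2, [(0,), (1,)])            # shared (redundant) = 0
U1 = mutual_info(p, (0,), (2,)) - R      # unique to X1 = 0
U2 = mutual_info(p, (1,), (2,)) - R      # unique to X2 = 0
synergy = I_joint - R - U1 - U2          # 1 bit: XOR is pure synergy
print(f"R={R:.3f}  U1={U1:.3f}  U2={U2:.3f}  synergy={synergy:.3f}")
```

Running this prints R = U1 = U2 = 0 and synergy = 1 bit: each driving input alone carries no information about the target, yet together they determine it completely.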
We consider the problem of decomposing the total mutual information conveyed by a pair of predictor random variables about a target random variable into redundant, unique, and synergistic contributions. We focus on the relationship between redundant information and the more familiar information-theoretic notions of common information. Our main contribution is an impossibility result: we show that, for independent predictor random variables, no common-information-based measure of redundancy can induce a nonnegative decomposition of the total mutual information. Interestingly, this entails that any reasonable measure of redundant information cannot be derived by optimization over a single random variable.
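To see what a nonnegative decomposition demands of a redundancy measure, recall the standard partial information bookkeeping (the usual consistency equations, stated here for context): writing R for redundant, U_1 and U_2 for unique, and C for synergistic information about the target S,

```latex
\[
  I(S; X_1, X_2) = R + U_1 + U_2 + C,
  \qquad
  I(S; X_i) = R + U_i \quad (i = 1, 2),
\]
% so requiring U_1, U_2, C >= 0 forces the two-sided bound
\[
  I(S; X_1) + I(S; X_2) - I(S; X_1, X_2)
  \;\le\; R \;\le\;
  \min\bigl( I(S; X_1),\, I(S; X_2) \bigr).
\]
```

The impossibility result says that no choice of R derived from a notion of common information can respect these bounds on all distributions with independent predictors.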
Secret sharing is a cryptographic discipline whose goal is to distribute information about a secret over a set of participants in such a way that only specific authorized combinations of participants can together reconstruct the secret. Secret sharing schemes are thus systems of variables in which it is precisely specified which subsets have information about the secret. As such, they provide perfect model systems for information decompositions. However, following this intuition too far leads to an information decomposition with negative partial information terms, which are difficult to interpret. One possible explanation is that the partial information lattice proposed by Williams and Beer is incomplete and has to be extended to incorporate terms corresponding to higher-order redundancy. Our results put bounds on information decompositions that follow the partial information framework, and they hint at where the partial information lattice needs to be improved.
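The simplest scheme matching this description is 2-out-of-2 XOR sharing, sketched below in Python (function names are illustrative, not from the paper):

```python
import secrets

def split(secret: int, nbits: int = 8) -> tuple[int, int]:
    """2-out-of-2 XOR scheme: each share alone is uniformly random,
    so a single participant learns nothing about the secret."""
    share1 = secrets.randbits(nbits)
    share2 = secret ^ share1
    return share1, share2

def reconstruct(share1: int, share2: int) -> int:
    """Only the authorized set {1, 2} together recovers the secret."""
    return share1 ^ share2

s = 0b1011_0010
a, b = split(s)
assert reconstruct(a, b) == s
```

In decomposition terms the secret is purely synergistic in the two shares: I(S; A) = I(S; B) = 0 while I(S; A, B) = H(S). Richer access structures, in which many overlapping subsets are authorized, are what produce the negative partial information terms discussed above.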