
Characterising information-theoretic storage and transfer in continuous time processes

Posted by Richard Spinney
Publication date: 2018
Research field: Information engineering
Paper language: English





The characterisation of information processing is an important task in complex systems science. Information dynamics is a quantitative methodology for modelling the intrinsic information processing conducted by a process represented as a time series, but to date has only been formulated in discrete time. Building on previous work which demonstrated how to formulate transfer entropy in continuous time, we give a total account of information processing in this setting, incorporating information storage. We find that a convergent rate of predictive capacity, comprising the transfer entropy and active information storage, does not exist, owing to divergent rates of active information storage. We identify that active information storage can be decomposed into two separate quantities that characterise predictive capacity stored in a process: active memory utilisation and instantaneous predictive capacity. The latter involves prediction related to path regularity and so solely inherits the divergent properties of the active information storage, whilst the former permits definitions of pathwise and rate quantities. We formulate measures of memory utilisation for jump and neural spiking processes and illustrate measures of information processing in synthetic neural spiking models and coupled Ornstein-Uhlenbeck models. The application to synthetic neural spiking models demonstrates that active memory utilisation for point processes consists of discontinuous jump contributions (at spikes) interrupting a continuously varying contribution (relating to waiting times between spikes), complementing the behaviour previously demonstrated for transfer entropy in these processes.
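The divergence described above can already be seen in a naive discrete-time estimate. Below is a minimal sketch, assuming an Euler-Maruyama discretisation of a single Ornstein-Uhlenbeck process and a Gaussian estimator of active information storage with history length one; all parameter names (theta, sigma, T) are illustrative and not taken from the paper. As the sampling interval shrinks, the estimate grows without bound, consistent with the claim that no convergent rate of predictive capacity exists.

```python
# Minimal sketch (assumption): Euler-Maruyama simulation of a single
# Ornstein-Uhlenbeck process and a Gaussian, history-length-one estimate of
# active information storage (AIS).  Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, T = 1.0, 1.0, 500.0          # relaxation rate, noise amplitude, duration

def simulate_ou(dt):
    """Euler-Maruyama discretisation of dX = -theta * X dt + sigma dW."""
    n = int(T / dt)
    x = np.empty(n)
    x[0] = 0.0
    for i in range(1, n):
        x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

for dt in (0.5, 0.1, 0.02, 0.004):
    x = simulate_ou(dt)
    rho = np.corrcoef(x[:-1], x[1:])[0, 1]     # lag-one autocorrelation
    ais = -0.5 * np.log(1.0 - rho ** 2)        # Gaussian AIS with one past sample (nats)
    print(f"dt = {dt:6.3f}   rho = {rho:.4f}   AIS ~ {ais:.3f} nats")
```

The growth of the printed AIS values as dt decreases is the discrete-time shadow of the divergence that the abstract attributes to the instantaneous predictive capacity.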




Read also

Transfer entropy has been used to quantify the directed flow of information between source and target variables in many complex systems. While transfer entropy was originally formulated in discrete time, in this paper we provide a framework for considering transfer entropy in continuous time systems, based on Radon-Nikodym derivatives between measures of complete path realizations. To describe the information dynamics of individual path realizations, we introduce the pathwise transfer entropy, the expectation of which is the transfer entropy accumulated over a finite time interval. We demonstrate that this formalism permits an instantaneous transfer entropy rate. These properties are analogous to the behavior of physical quantities defined along paths such as work and heat. We use this approach to produce an explicit form for the transfer entropy for pure jump processes, and highlight the simplified form in the specific case of point processes (frequently used in neuroscience to model neural spike trains). Finally, we present two synthetic spiking neuron model examples to exhibit the pertinent features of our formalism, namely, that the information flow for point processes consists of discontinuous jump contributions (at spikes in the target) interrupting a continuously varying contribution (relating to waiting times between target spikes). Numerical schemes based on our formalism promise significant benefits over existing strategies based on discrete time formalisms.
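For point processes, a Radon-Nikodym derivative between path measures reduces to a log-likelihood ratio of conditional intensities, giving discontinuous contributions at target spikes plus a continuously varying integral between them. The sketch below only illustrates that structure on a simulated source-driven target: the target-history-only intensity is crudely replaced by the empirical mean target rate, so the printed total is not the paper's transfer entropy, and all rates and window lengths are invented for illustration.

```python
# Sketch (assumption): structure of the pathwise point-process quantity
#   sum over target spikes of ln(lam_full / lam_self)  -  integral of (lam_full - lam_self) dt,
# where lam_full conditions on source and target histories.  Here lam_self is
# crudely replaced by the empirical mean target rate, so the printed total is an
# illustration of the jump-plus-continuous structure, not the paper's transfer entropy.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 200.0
n = int(T / dt)
mu_src, lam0, lam1, window = 2.0, 1.0, 8.0, 0.05   # illustrative rates and coupling window

src = rng.random(n) < mu_src * dt                  # Poisson source spikes
lam_full = np.full(n, lam0)                        # target intensity given source history
last_src = -np.inf
for i in range(n):
    t = i * dt
    if src[i]:
        last_src = t
    if t - last_src < window:
        lam_full[i] = lam1                         # elevated rate shortly after a source spike
tgt = rng.random(n) < lam_full * dt                # target spikes

lam_self = np.full(n, tgt.sum() / T)               # crude surrogate for the target-only intensity
jump_term = np.sum(np.log(lam_full[tgt] / lam_self[tgt]))   # discontinuous contributions at spikes
cont_term = -np.sum((lam_full - lam_self) * dt)             # continuously varying contribution
print("jump contribution      :", round(jump_term, 3))
print("continuous contribution:", round(cont_term, 3))
print("pathwise total         :", round(jump_term + cont_term, 3))
```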
Motivated by applications in distributed storage, the storage capacity of a graph was recently defined to be the maximum amount of information that can be stored across the vertices of a graph such that the information at any vertex can be recovered from the information stored at the neighboring vertices. Computing the storage capacity is a fundamental problem in network coding and is related, or equivalent, to some well-studied problems such as index coding with side information and generalized guessing games. In this paper, we consider storage capacity as a natural information-theoretic analogue of the minimum vertex cover of a graph. Indeed, while it was known that storage capacity is upper bounded by minimum vertex cover, we show that by treating it as such we can get a 3/2 approximation for planar graphs, and a 4/3 approximation for triangle-free planar graphs. Since the storage capacity is intimately related to the index coding rate, we get a 2 approximation of index coding rate for planar graphs and 3/2 approximation for triangle-free planar graphs. We also show a polynomial time approximation scheme for the index coding rate when the alphabet size is constant. We then develop a general method of gadget covering to upper bound the storage capacity in terms of the average of a set of vertex covers. This method is intuitive and leads to the exact characterization of storage capacity for various families of graphs. As an illustrative example, we use this approach to derive the exact storage capacity of cycles-with-chords, a family of graphs related to outerplanar graphs. Finally, we generalize the storage capacity notion to include recovery from partial node failures in distributed storage. We show tight upper and lower bounds on this partial recovery capacity that scales nicely with the fraction of failures in a vertex.
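Since the abstract's starting point is that storage capacity is upper bounded by the minimum vertex cover, a brute-force computation of that bound on a toy graph makes the relationship concrete. The sketch below uses an exponential search, so it is only suitable for very small graphs; the 5-cycle is just a convenient small example.

```python
# Sketch (assumption): brute-force minimum vertex cover of a small example graph
# (a 5-cycle).  By the bound stated in the abstract, this is an upper bound on the
# graph's storage capacity.  Exponential search -- small graphs only.
from itertools import combinations

def min_vertex_cover(vertices, edges):
    """Return a smallest set of vertices touching every edge."""
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            chosen = set(cand)
            if all(u in chosen or v in chosen for u, v in edges):
                return chosen
    return set(vertices)

cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cover = min_vertex_cover(range(5), cycle5)
print("minimum vertex cover:", sorted(cover), "of size", len(cover))
print("=> storage capacity of the 5-cycle is at most", len(cover))
```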
Yuanpeng Liu, Elza Erkip (2015)
In a two-user channel, completion time refers to the number of channel uses spent by each user to transmit a bit pool with some given size. In this paper, the information-theoretic formulation of completion time is based on the concept of constrained rates, where users are allowed to employ different numbers of channel uses for transmission as opposed to the equal channel use of the standard information-theoretic formulation. Analogous to the capacity region, the completion time region characterizes all possible trade-offs among users' completion times. For a multi-access channel, it is shown that the completion time region is achieved by operating the channel in two independent phases: a multi-access phase when both users are transmitting, and a point-to-point phase when one user has finished and the other is still transmitting. Using a similar two-phase approach, the completion time region (or inner and outer bounds) is established for a Gaussian broadcast channel and a Gaussian interference channel. It is observed that although consisting of two convex subregions, the completion time region may not be convex in general. Finally, an optimization problem of minimizing the weighted sum completion time for a Gaussian multi-access channel and a Gaussian broadcast channel is solved, demonstrating the utility of the completion time approach.
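A minimal numerical reading of the two-phase idea: during the multi-access phase both users transmit at a rate pair on the boundary of the Gaussian MAC region, and once the first user's bit pool is empty the remaining user switches to its point-to-point capacity. The operating point chosen below (a successive-decoding corner) and all powers and bit-pool sizes are illustrative assumptions, not the optimal trade-off characterised in the paper.

```python
# Sketch (assumption): completion times for a two-user Gaussian multi-access channel
# under a two-phase scheme.  The phase-1 operating point (a successive-decoding
# corner of the MAC region), the powers and the bit-pool sizes are illustrative
# choices, not the optimal trade-off characterised in the paper.
import math

P1, P2 = 10.0, 10.0            # received SNRs (unit-variance noise)
m1, m2 = 1000.0, 3000.0        # bit pools (bits); user 1 has less to send

def C(snr):
    """Gaussian capacity in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

# Phase 1 (both active): decode user 2 first treating user 1 as noise, then user 1.
R1_mac = C(P1)                 # user 1 sees no interference after user 2 is decoded
R2_mac = C(P2 / (1.0 + P1))    # user 2 decoded against user 1's interference

t1 = m1 / R1_mac                           # user 1 empties its bit pool first here
bits2_left = max(0.0, m2 - R2_mac * t1)

# Phase 2 (user 2 alone): point-to-point capacity.
t2 = t1 + bits2_left / C(P2)

print(f"completion time, user 1: {t1:8.1f} channel uses")
print(f"completion time, user 2: {t2:8.1f} channel uses")
```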
We introduce an axiomatic approach to entropies and relative entropies that relies only on minimal information-theoretic axioms, namely monotonicity under mixing and data-processing as well as additivity for product distributions. We find that these axioms induce sufficient structure to establish continuity in the interior of the probability simplex and meaningful upper and lower bounds, e.g., we find that every relative entropy must lie between the Rényi divergences of order $0$ and $\infty$. We further show simple conditions for positive definiteness of such relative entropies and a characterisation in terms of a variant of relative trumping. Our main result is a one-to-one correspondence between entropies and relative entropies.
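The bound mentioned above, that every admissible relative entropy lies between the Rényi divergences of order $0$ and $\infty$, can be sanity-checked numerically for the standard Kullback-Leibler divergence. The sketch below draws random distributions (with the support of $P$ restricted so that the order-$0$ divergence is non-trivial) and verifies the ordering; it is a numerical illustration only, not part of the paper's argument.

```python
# Sketch (assumption): numerical check that the Kullback-Leibler divergence lies
# between the Renyi divergences of order 0 and infinity, as every admissible
# relative entropy must according to the abstract.  Random distributions, logs in nats.
import numpy as np

rng = np.random.default_rng(2)

def renyi_0(p, q):
    return -np.log(q[p > 0].sum())

def kl(p, q):
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def renyi_inf(p, q):
    m = p > 0
    return float(np.log(np.max(p[m] / q[m])))

for _ in range(5):
    p = rng.dirichlet(np.ones(6))
    p[rng.integers(6)] = 0.0               # shrink the support of p so D_0 is non-trivial
    p /= p.sum()
    q = rng.dirichlet(np.ones(6))
    d0, d1, dinf = renyi_0(p, q), kl(p, q), renyi_inf(p, q)
    assert d0 <= d1 + 1e-9 and d1 <= dinf + 1e-9
    print(f"D_0 = {d0:.4f}   D_KL = {d1:.4f}   D_inf = {dinf:.4f}")
```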
Biological and artificial neural systems are composed of many local processors, and their capabilities depend upon the transfer function that relates each local processor's outputs to its inputs. This paper uses a recent advance in the foundations of information theory to study the properties of local processors that use contextual input to amplify or attenuate transmission of information about their driving inputs. This advance enables the information transmitted by processors with two distinct inputs to be decomposed into those components unique to each input, that shared between the two inputs, and that which depends on both though it is in neither, i.e. synergy. The decompositions that we report here show that contextual modulation has information processing properties that contrast with those of all four simple arithmetic operators, that it can take various forms, and that the form used in our previous studies of artificial neural nets composed of local processors with both driving and contextual inputs is particularly well-suited to provide the distinctive capabilities of contextual modulation under a wide range of conditions. We argue that the decompositions reported here could be compared with those obtained from empirical neurobiological and psychophysical data under conditions thought to reflect contextual modulation. That would then shed new light on the underlying processes involved. Finally, we suggest that such decompositions could aid the design of context-sensitive machine learning algorithms.
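To make the split into unique, shared and synergistic components concrete, the sketch below computes a toy partial information decomposition for two binary inputs driving one output. It uses the simple "minimum mutual information" redundancy R = min(I(T;X), I(T;Y)) as a stand-in measure, which is not necessarily the decomposition used by the authors; the AND and XOR gates are generic examples (XOR comes out as pure synergy, as expected).

```python
# Sketch (assumption): toy partial information decomposition for two binary inputs
# driving one output, using the simple "minimum mutual information" redundancy
# R = min(I(T;X), I(T;Y)) as a stand-in measure -- not necessarily the
# decomposition used by the authors.
import numpy as np
from itertools import product

def mi(joint):
    """Mutual information (bits) from a joint probability table p[t, s]."""
    pt = joint.sum(axis=1, keepdims=True)
    ps = joint.sum(axis=0, keepdims=True)
    m = joint > 0
    return float(np.sum(joint[m] * np.log2(joint[m] / (pt @ ps)[m])))

def pid(gate):
    """Unique / shared / synergistic split for T = gate(X, Y) with uniform binary X, Y."""
    p_txy = np.zeros((2, 2, 2))
    for x, y in product((0, 1), repeat=2):
        p_txy[gate(x, y), x, y] += 0.25
    i_x = mi(p_txy.sum(axis=2))            # I(T;X)
    i_y = mi(p_txy.sum(axis=1))            # I(T;Y)
    i_xy = mi(p_txy.reshape(2, 4))         # I(T;X,Y)
    shared = min(i_x, i_y)
    return dict(shared=shared, unique_x=i_x - shared, unique_y=i_y - shared,
                synergy=i_xy - i_x - i_y + shared)

print("AND:", pid(lambda x, y: x & y))     # shared and synergistic parts, no unique information
print("XOR:", pid(lambda x, y: x ^ y))     # pure synergy
```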