
Pointwise Partial Information Decomposition using the Specificity and Ambiguity Lattices

Posted by Joseph Lizier
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much-criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information, which we refer to as the specificity and the ambiguity. This yields a separate redundancy lattice for each component. Then, based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity, enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
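
To make the two unsigned entropic components concrete, here is a minimal sketch that evaluates them for a single realisation of a small, arbitrary joint distribution, using the standard identity $i(s;t) = h(s) - h(s|t)$ with surprisal $h(x) = -\log_2 p(x)$; following the paper's terminology, $h(s)$ is taken as the specificity and $h(s|t)$ as the ambiguity, and the example distribution is an illustrative assumption rather than one drawn from the paper.

```python
import numpy as np

# Illustrative joint distribution p(s, t) over a source S and target T
# (assumed values, not from the paper): rows index source events, columns target events.
p_st = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

p_s = p_st.sum(axis=1)   # marginal p(s)
p_t = p_st.sum(axis=0)   # marginal p(t)

def pointwise_components(s, t):
    """Return (specificity, ambiguity, pointwise MI) in bits for one realisation (s, t)."""
    p_s_given_t = p_st[s, t] / p_t[t]
    specificity = -np.log2(p_s[s])        # h(s)   = -log2 p(s)
    ambiguity = -np.log2(p_s_given_t)     # h(s|t) = -log2 p(s|t)
    return specificity, ambiguity, specificity - ambiguity  # i(s;t) = h(s) - h(s|t)

for s in range(2):
    for t in range(2):
        spec, amb, i_st = pointwise_components(s, t)
        print(f"s={s}, t={t}: h(s)={spec:.3f}, h(s|t)={amb:.3f}, i(s;t)={i_st:+.3f}")
```

Each component is non-negative, while their difference, the pointwise mutual information, can take either sign; that sign problem is what motivates decomposing the two components on separate lattices.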




Read also

This paper examines how an event from one random variable provides pointwise mutual information about an event from another variable via probability mass exclusions. We start by introducing probability mass diagrams, which provide a visual representation of how a prior distribution is transformed into a posterior distribution through exclusions. With the aid of these diagrams, we identify two distinct types of probability mass exclusions---namely, informative and misinformative exclusions. Then, motivated by Fano's derivation of the pointwise mutual information, we propose four postulates which aim to decompose the pointwise mutual information into two separate informational components: a non-negative term associated with the informative exclusions and a non-positive term associated with the misinformative exclusions. This yields a novel derivation of a familiar decomposition of the pointwise mutual information into entropic components. We conclude by discussing the relevance of considering information in terms of probability mass exclusions to the ongoing effort to decompose multivariate information.
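
As a pointer to the identity being re-derived, a sketch of the familiar entropic decomposition in surprisal notation, $h(x) = -\log_2 p(x)$ (the grouping into an informative and a misinformative part follows the description above; the notation is an assumption):

$$ i(s;t) = \log_2\frac{p(s \mid t)}{p(s)} = \underbrace{h(s)}_{\text{non-negative (informative)}} + \underbrace{\left(-h(s \mid t)\right)}_{\text{non-positive (misinformative)}} $$
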
The information that two random variables $Y$, $Z$ contain about a third random variable $X$ can have aspects of shared information (contained in both $Y$ and $Z$), of complementary information (only available from $(Y,Z)$ together) and of unique information (contained exclusively in either $Y$ or $Z$). Here, we study the measures of shared information $\widetilde{SI}$, unique information $\widetilde{UI}$ and complementary information $\widetilde{CI}$ introduced by Bertschinger et al., which are motivated from a decision-theoretic perspective. We find that in most cases the intuitive rule that more variables contain more information applies, with the exception that $\widetilde{SI}$ and $\widetilde{CI}$ are not monotone in the target variable $X$. Additionally, we show that it is not possible to extend the bivariate information decomposition into $\widetilde{SI}$, $\widetilde{UI}$ and $\widetilde{CI}$ to a non-negative decomposition on the partial information lattice of Williams and Beer. Nevertheless, the quantities $\widetilde{UI}$, $\widetilde{SI}$ and $\widetilde{CI}$ have a well-defined interpretation, even in the multivariate setting.
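
For reference, any bivariate decomposition of this kind is tied to the mutual information by the Williams and Beer consistency equations; a sketch with target $X$ and sources $Y$, $Z$ (the subscripted notation for the two unique-information terms is ours):

$$ I(X;(Y,Z)) = \widetilde{SI} + \widetilde{UI}_Y + \widetilde{UI}_Z + \widetilde{CI}, \qquad I(X;Y) = \widetilde{SI} + \widetilde{UI}_Y, \qquad I(X;Z) = \widetilde{SI} + \widetilde{UI}_Z $$
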
The problem of determining the best achievable performance of arbitrary lossless compression algorithms is examined, when correlated side information is available at both the encoder and decoder. For arbitrary source-side information pairs, the conditional information density is shown to provide a sharp asymptotic lower bound for the description lengths achieved by an arbitrary sequence of compressors. This implies that, for ergodic source-side information pairs, the conditional entropy rate is the best achievable asymptotic lower bound to the rate, not just in expectation but with probability one. Under appropriate mixing conditions, a central limit theorem and a law of the iterated logarithm are proved, describing the inevitable fluctuations of the second-order asymptotically best possible rate. An idealised version of Lempel-Ziv coding with side information is shown to be universally first- and second-order asymptotically optimal, under the same conditions. These results are in part based on a new almost-sure invariance principle for the conditional information density, which may be of independent interest.
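
As a minimal numerical illustration of the central quantity, the sketch below estimates the per-symbol conditional information density $-\tfrac{1}{n}\log_2 P(X^n \mid Y^n)$ for an i.i.d. source-side-information pair and compares it with the conditional entropy $H(X \mid Y)$, the asymptotic rate identified above; the joint distribution and sample size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative i.i.d. joint distribution p(x, y) for source X with side information Y.
p_xy = np.array([[0.45, 0.05],
                 [0.10, 0.40]])
p_y = p_xy.sum(axis=0)
p_x_given_y = p_xy / p_y                    # entry [x, y] holds p(x | y)

# Exact conditional entropy H(X|Y) in bits per symbol.
H_X_given_Y = -np.sum(p_xy * np.log2(p_x_given_y))

# Sample n i.i.d. pairs and compute the per-symbol conditional information density
# -(1/n) log2 P(x^n | y^n), which for a memoryless pair is an average of surprisals.
n = 100_000
flat = rng.choice(4, size=n, p=p_xy.flatten())
xs, ys = np.unravel_index(flat, p_xy.shape)
cond_info_density = -np.mean(np.log2(p_x_given_y[xs, ys]))

print(f"H(X|Y)                       = {H_X_given_Y:.4f} bits/symbol")
print(f"conditional information rate = {cond_info_density:.4f} bits/symbol (n = {n})")
```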
The capacity of the semideterministic discrete memoryless broadcast channel (SD-BC) with partial message side-information (P-MSI) at the receivers is established. In the setting without a common message, it is shown that P-MSI to the stochastic receiver alone can increase capacity, whereas P-MSI to the deterministic receiver can only increase capacity if also the stochastic receiver has P-MSI. The latter holds only for the setting without a common message: if the encoder also conveys a common message, then P-MSI to the deterministic receiver alone can increase capacity. These capacity results are used to show that feedback from the stochastic receiver can increase the capacity of the SD-BC without P-MSI and the sum-rate capacity of the SD-BC with P-MSI at the deterministic receiver. The link between P-MSI and feedback is a feedback code, which---roughly speaking---turns feedback into P-MSI at the stochastic receiver and hence helps the stochastic receiver mitigate experienced interference. For the case where the stochastic receiver has full MSI (F-MSI) and can thus fully mitigate experienced interference also in the absence of feedback, it is shown that feedback cannot increase capacity.
We offer a new approach to the information decomposition problem in information theory: given a target random variable co-distributed with multiple source variables, how can we decompose the mutual information into a sum of non-negative terms that quantify the contributions of each random variable, not only individually but also in combination? We derive our decomposition from cooperative game theory. It can be seen as assigning a fair share of the mutual information to each combination of the source variables. Our decomposition is based on a different lattice from the usual partial information decomposition (PID) approach, and as a consequence our decomposition has a smaller number of terms: it has analogs of the synergy and unique information terms, but lacks terms corresponding to redundancy. Because of this, it is able to obey equivalents of the axioms known as local positivity and identity, which cannot be simultaneously satisfied by a PID measure.
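
The "fair share" framing above suggests a game-theoretic attribution over coalitions of sources. Purely as a sketch of that general idea (this is not the paper's lattice or its actual terms), the code below treats each subset of sources as a coalition whose value is the mutual information it carries about the target, and computes Shapley-style shares; the XOR distribution and the choice of value function are assumptions made for illustration.

```python
import numpy as np
from itertools import combinations
from math import factorial

# Illustrative joint distribution p(target, y1, y2): XOR example (assumed, for illustration).
p = np.zeros((2, 2, 2))
for y1 in range(2):
    for y2 in range(2):
        p[(y1 ^ y2), y1, y2] = 0.25

def mutual_information(p, source_axes):
    """I(target ; sources on source_axes) in bits, with the target on axis 0."""
    keep = (0,) + tuple(source_axes)
    drop = tuple(ax for ax in range(p.ndim) if ax not in keep)
    joint = p.sum(axis=drop) if drop else p
    pt = joint.sum(axis=tuple(range(1, joint.ndim)), keepdims=True)   # p(target)
    ps = joint.sum(axis=0, keepdims=True)                             # p(sources)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (pt * ps))
    return np.nansum(terms)

sources = (1, 2)
def v(coalition):
    # Value of a coalition of source axes = the mutual information it provides.
    return mutual_information(p, coalition) if coalition else 0.0

# Shapley value of each source: its average marginal contribution over orderings.
n = len(sources)
for i in sources:
    others = [s for s in sources if s != i]
    share = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            share += w * (v(tuple(sorted(subset + (i,)))) - v(subset))
    print(f"source on axis {i}: Shapley share = {share:.3f} bits")
```

For this XOR example each source receives half a bit even though neither is individually informative, illustrating how a coalition-based attribution spreads purely synergistic information across the sources rather than reporting a separate redundancy term.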