
Computing the Unique Information

Published by: Pradeep Kr. Banerjee
Publication date: 2017
Research field: Information engineering
Language: English

Given a pair of predictor variables and a response variable, how much information do the predictors have about the response, and how is this information distributed between unique, redundant, and synergistic components? Recent work has proposed to quantify the unique component of the decomposition as the minimum value of the conditional mutual information over a constrained set of information channels. We present an efficient iterative divergence minimization algorithm to solve this optimization problem with convergence guarantees and evaluate its performance against other techniques.
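To make the optimization concrete: the quantity computed is $UI(T:X\setminus Y) = \min_Q I_Q(T;X\mid Y)$, minimized over joint distributions $Q$ of $(T,X,Y)$ that preserve the pairwise marginals $P_{TX}$ and $P_{TY}$ (Bertschinger et al., 2014). The following Python sketch sets this up with a generic SciPy solver in place of the paper's tailored divergence-minimization iterations; the function names and the AND-gate test case are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.optimize import LinearConstraint, minimize

def conditional_mutual_information(Q):
    # I_Q(T; X | Y) in bits for a joint distribution array Q[t, x, y].
    Q = np.clip(Q, 1e-12, None)
    Q_y = Q.sum(axis=(0, 1))   # Q(y)
    Q_ty = Q.sum(axis=1)       # Q(t, y)
    Q_xy = Q.sum(axis=0)       # Q(x, y)
    ratio = Q * Q_y[None, None, :] / (Q_ty[:, None, :] * Q_xy[None, :, :])
    return float(np.sum(Q * np.log2(ratio)))

def unique_information(P):
    # min_Q I_Q(T; X | Y) subject to Q(t,x) = P(t,x) and Q(t,y) = P(t,y).
    nT, nX, nY = P.shape
    rows, rhs = [], []
    for t in range(nT):                # pin the (T, X) marginal
        for x in range(nX):
            row = np.zeros(P.shape)
            row[t, x, :] = 1.0
            rows.append(row.ravel())
            rhs.append(P[t, x, :].sum())
    for t in range(nT):                # pin the (T, Y) marginal
        for y in range(nY):
            row = np.zeros(P.shape)
            row[t, :, y] = 1.0
            rows.append(row.ravel())
            rhs.append(P[t, :, y].sum())
    A, b = np.array(rows), np.array(rhs)
    result = minimize(
        lambda q: conditional_mutual_information(q.reshape(P.shape)),
        P.ravel(),                     # P itself lies in Delta_P, so start there
        method="trust-constr",
        bounds=[(0.0, 1.0)] * P.size,
        constraints=LinearConstraint(A, b, b),
    )
    return result.fun

# AND gate: T = X AND Y with X, Y independent uniform bits.
P = np.zeros((2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        P[x & y, x, y] = 0.25
print(unique_information(P))           # known result for AND: UI is 0 bits

For the AND gate the known decomposition assigns zero unique information to each input, so the solver should return a value near 0; a generic solver like this scales poorly with alphabet size, which is the motivation for the dedicated algorithm in the paper.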

Read also

We study the measure of unique information $UI(T:X\setminus Y)$ defined by Bertschinger et al. (2014) within the framework of information decompositions. We investigate the uniqueness and support of the solutions to the optimization problem underlying the definition of $UI$. We identify sufficient conditions for non-uniqueness of solutions with full support, in terms of conditional independence constraints and of the cardinalities of $T$, $X$ and $Y$. Our results are based on a reformulation of the first-order conditions on the objective function as rank constraints on a matrix of conditional probabilities. These results help to speed up the computation of $UI(T:X\setminus Y)$, most notably when $T$ is binary. In the case that all variables are binary, we obtain a complete picture of where the optimizing probability distributions lie.
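For reference, the optimization problem in question can be written, following Bertschinger et al. (2014), as $UI(T:X\setminus Y) = \min_{Q\in\Delta_P} I_Q(T;X\mid Y)$, where $\Delta_P = \{Q : Q_{TX} = P_{TX},\ Q_{TY} = P_{TY}\}$. Since $Q_{TY}$ is fixed on $\Delta_P$, the term $H_Q(T\mid Y)$ is constant there, and the objective equals that constant minus $H_Q(T\mid X,Y)$; conditional entropy is concave in the joint distribution, so this is a convex program over a polytope.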
The information that two random variables $Y$, $Z$ contain about a third random variable $X$ can have aspects of shared information (contained in both $Y$ and $Z$), of complementary information (only available from $(Y,Z)$ together) and of unique information (contained exclusively in either $Y$ or $Z$). Here, we study the measures $\widetilde{SI}$ of shared, $\widetilde{UI}$ of unique and $\widetilde{CI}$ of complementary information introduced by Bertschinger et al., which are motivated by a decision-theoretic perspective. We find that in most cases the intuitive rule that more variables contain more information applies, with the exception that $\widetilde{SI}$ and $\widetilde{CI}$ are not monotone in the target variable $X$. Additionally, we show that it is not possible to extend the bivariate information decomposition into $\widetilde{SI}$, $\widetilde{UI}$ and $\widetilde{CI}$ to a non-negative decomposition on the partial information lattice of Williams and Beer. Nevertheless, the quantities $\widetilde{UI}$, $\widetilde{SI}$ and $\widetilde{CI}$ have a well-defined interpretation, even in the multivariate setting.
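For orientation: with $X$ as the target and $Y$, $Z$ as the sources, a bivariate information decomposition of this kind is required to satisfy the consistency equations $I(X;Y) = \widetilde{SI} + \widetilde{UI}(X:Y\setminus Z)$, $I(X;Z) = \widetilde{SI} + \widetilde{UI}(X:Z\setminus Y)$, and $I(X;(Y,Z)) = \widetilde{SI} + \widetilde{UI}(X:Y\setminus Z) + \widetilde{UI}(X:Z\setminus Y) + \widetilde{CI}$, so fixing any one of the four quantities determines the remaining three from standard mutual informations.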
Under the paradigm of caching, partial data is delivered before the actual requests of users are known. In this paper, this problem is modeled as a canonical distributed source coding problem with side information, where the side information represents the users' requests. For the single-user case, a single-letter characterization of the optimal rate region is established, and for several important special cases, closed-form solutions are given, including the scenario of uniformly distributed user requests. In this case, it is shown that the optimal caching strategy is closely related to total correlation and Wyner's common information. Using the insight gained from the single-user case, three two-user scenarios admitting single-letter characterizations are considered, which draw connections to existing source coding problems in the literature: the Gray--Wyner system and distributed successive refinement. Finally, the model studied by Maddah-Ali and Niesen is rephrased to allow a comparison with the considered information-theoretic model. Although the two caching models behave similarly in the single-user case, a two-user example shows that they behave differently in general.
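For reference, the two quantities mentioned here are standard: the total correlation of $X_1,\dots,X_n$ is $C(X_1,\dots,X_n) = \sum_i H(X_i) - H(X_1,\dots,X_n)$, and Wyner's common information of a pair $(X,Y)$ is $C_W(X;Y) = \min I(X,Y;W)$, where the minimum is over auxiliary variables $W$ for which $X - W - Y$ is a Markov chain.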
The unique information ($UI$) is an information measure that quantifies a deviation from the Blackwell order. We have recently shown that this quantity is an upper bound on the one-way secret key rate. In this paper, we prove a triangle inequality for the $UI$, which implies that the $UI$ is never greater than one of the best known upper bounds on the two-way secret key rate. We conjecture that the $UI$ lower-bounds the two-way rate and discuss implications of the conjecture.
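For context, the bound presumably referred to here is the intrinsic information of Maurer and Wolf, $I(X;Y\downarrow Z) = \min_{P_{\bar{Z}\mid Z}} I(X;Y\mid\bar{Z})$, where the minimization is over all channels $P_{\bar{Z}\mid Z}$ applied to the adversary's observation $Z$; it is a classical upper bound on the two-way secret key rate.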
The partial information decomposition (PID) is a promising framework for decomposing a joint random variable into the amount of influence each source variable $X_i$ has on a target variable $Y$, relative to the other sources. For two sources, influence breaks down into the information that both $X_0$ and $X_1$ redundantly share with $Y$, what $X_0$ uniquely shares with $Y$, what $X_1$ uniquely shares with $Y$, and finally what $X_0$ and $X_1$ synergistically share with $Y$. Unfortunately, considerable disagreement has arisen as to how these four components should be quantified. Drawing from cryptography, we consider the secret key agreement rate as an operational method of quantifying unique informations. The secret key agreement rate comes in several forms, depending upon which parties are permitted to communicate. We demonstrate that three of these four forms are inconsistent with the PID. The remaining form implies certain interpretations as to the PID's meaning: interpretations not present in the PID's definition but that, we argue, need to be made explicit. These reveal an inconsistency between third-order connected information, the two-way secret key agreement rate, and synergy. Similar difficulties arise with a popular PID measure, both in light of the results here and from a maximum entropy viewpoint. We close by reviewing the challenges facing the PID.
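A standard worked example that makes the synergistic component concrete: if $Y = X_0 \oplus X_1$ with $X_0$, $X_1$ independent uniform bits, then $I(X_0;Y) = I(X_1;Y) = 0$ while $I(X_0,X_1;Y) = 1$ bit, so the redundant and unique components must vanish and the full bit is synergistic.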