
Strong Asymptotic Composition Theorems for Sibson Mutual Information

Posted by: Benjamin Wu
Publication date: 2020
Research field: Information engineering
Paper language: English





We characterize the growth of the Sibson mutual information, of any order that is at least unity, between a random variable and an increasing set of noisy, conditionally independent observations of the random variable. The Sibson mutual information increases to an order-dependent limit exponentially fast, with an exponent that is order-independent. The result is contrasted with composition theorems in differential privacy.
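
For context, the quantity being tracked has a standard closed form on discrete alphabets; the expression below is the usual definition of Sibson mutual information of order $\alpha$ and is not reproduced from the paper itself. For $\alpha > 1$,

$$ I_\alpha(X;Y) \;=\; \min_{Q_Y} D_\alpha\!\left(P_{XY} \,\big\|\, P_X \times Q_Y\right) \;=\; \frac{\alpha}{\alpha-1} \log \sum_{y} \Bigl( \sum_{x} P_X(x)\, P_{Y|X}(y \mid x)^{\alpha} \Bigr)^{1/\alpha}, $$

where $D_\alpha$ denotes the Rényi divergence of order $\alpha$; letting $\alpha \to 1$ recovers the Shannon mutual information, consistent with the phrase "any order that is at least unity".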




Read also

175 - Chongjun Ouyang, Sheng Wu, 2019
To provide an efficient approach to characterizing the input-output mutual information (MI) over the additive white Gaussian noise (AWGN) channel, this short report fits the curves of the exact MI under multilevel quadrature amplitude modulation (M-QAM) signal inputs via multi-exponential decay curve fitting (M-EDCF). Even though the exact expression for the instantaneous MI versus signal-to-noise ratio (SNR) is complex and the integral it contains is intractable, the newly developed fitting formula has a neat, compact form that offers high precision as well as low complexity. This MI approximation formula can facilitate performance analysis of practical communication systems with discrete inputs.
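
As a rough illustration of this kind of fit (the two-term exponential form, initial guesses, and synthetic placeholder data below are assumptions of the sketch, not the report's actual M-EDCF formula or coefficients):

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical two-term multi-exponential decay model for MI versus linear SNR;
# the actual M-EDCF form and order may differ.
def mi_model(snr, a1, b1, a2, b2, M=16):
    return np.log2(M) - a1 * np.exp(-b1 * snr) - a2 * np.exp(-b2 * snr)

# Placeholder values standing in for exact 16-QAM MI points, which would normally be
# obtained by numerical integration or Monte Carlo; synthetic data keeps the sketch
# self-contained.
rng = np.random.default_rng(0)
snr_grid = np.linspace(0.1, 100.0, 50)
mi_exact = mi_model(snr_grid, 2.2, 0.9, 1.8, 0.08) + 0.01 * rng.normal(size=snr_grid.size)

params, _ = curve_fit(mi_model, snr_grid, mi_exact, p0=[1.0, 1.0, 1.0, 0.1])
print("fitted coefficients (a1, b1, a2, b2):", params)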
We develop an information-theoretic view of the stochastic block model, a popular statistical model for the large-scale structure of complex networks. A graph $G$ from such a model is generated by first assigning vertex labels at random from a finite alphabet, and then connecting vertices with edge probabilities depending on the labels of the endpoints. In the case of the symmetric two-group model, we establish an explicit single-letter characterization of the per-vertex mutual information between the vertex labels and the graph. The explicit expression for the mutual information is intimately related to estimation-theoretic quantities and, in particular, reveals a phase transition at the critical point for community detection. Below the critical point the per-vertex mutual information is asymptotically the same as if edges were independent. Correspondingly, no algorithm can estimate the partition better than random guessing. Conversely, above the threshold, the per-vertex mutual information is strictly smaller than the independent-edges upper bound. In this regime there exists a procedure that estimates the vertex labels better than random guessing.
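
The generative description above is straightforward to simulate; the numpy sketch below draws a symmetric two-group graph (the parameter names and values are illustrative, not taken from the paper):

import numpy as np

def sample_sbm(n, p_in, p_out, seed=None):
    """Sample a symmetric two-group stochastic block model on n vertices."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=n)               # random vertex labels in {0, 1}
    same = labels[:, None] == labels[None, :]         # True where endpoints share a label
    probs = np.where(same, p_in, p_out)               # per-pair edge probability
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # sample each unordered pair once
    adj = (upper | upper.T).astype(int)               # symmetric adjacency, no self-loops
    return labels, adj

labels, A = sample_sbm(n=200, p_in=0.08, p_out=0.02, seed=0)
print(A.sum() // 2, "edges among", len(labels), "vertices")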
The mutual information between two jointly distributed random variables $X$ and $Y$ is a functional of the joint distribution $P_{XY}$, which is sometimes difficult to handle or estimate. A coarser description of the statistical behavior of $(X,Y)$ is given by the marginal distributions $P_X, P_Y$ and the adjacency relation induced by the joint distribution, where $x$ and $y$ are adjacent if $P(x,y)>0$. We derive a lower bound on the mutual information in terms of these entities. The bound is obtained by viewing the channel from $X$ to $Y$ as a probability distribution on a set of possible actions, where an action determines the output for any possible input and is drawn independently. We also provide an alternative proof based on convex optimization that yields a generally tighter bound. Finally, we derive an upper bound on the mutual information in terms of adjacency events between the action and the pair $(X,Y)$, where in this case an action $a$ and a pair $(x,y)$ are adjacent if $y=a(x)$. As an example, we apply our bounds to the binary deletion channel and show that, for the special case of an i.i.d. input distribution and a range of deletion probabilities, our lower and upper bounds both outperform the best known bounds on the mutual information.
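
To make the action viewpoint concrete (the notation here is ours, not necessarily the paper's), a discrete channel $P_{Y|X}$ can be realized as

$$ Y = A(X), \qquad A = (A(x))_{x \in \mathcal{X}}, \quad A(x) \sim P_{Y|X}(\cdot \mid x) \text{ independently across } x \text{ and independently of } X, $$

so that each realization $a$ of the action determines the output for every possible input, and $a$ is adjacent to a pair $(x,y)$ precisely when $y = a(x)$.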
105 - Reginald D. Smith, 2015
A new method to measure nonlinear dependence between two variables is described, using mutual information to analyze the separate linear and nonlinear components of dependence. This technique, which gives an exact value for the proportion of linear dependence, is then compared with another common test for linearity, the Brock, Dechert and Scheinkman (BDS) test.
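
One plausible way to carry out such a decomposition numerically is sketched below; it is an illustration only, not necessarily the construction used in the paper, and the Gaussian formula for the "linear" part is an assumption of the sketch.

import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.6 * x + 0.4 * np.sin(3.0 * x) + 0.3 * rng.normal(size=5000)  # mixed dependence

# Total dependence: nonparametric k-NN MI estimate (in nats).
mi_total = mutual_info_regression(x.reshape(-1, 1), y, random_state=0)[0]

# "Linear" component under a jointly Gaussian assumption: -0.5 * ln(1 - rho^2).
rho = np.corrcoef(x, y)[0, 1]
mi_linear = -0.5 * np.log(1.0 - rho ** 2)

print("estimated proportion of linear dependence:", mi_linear / mi_total)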
Mutual information (MI) is an often-used measure of dependency between two random variables in information theory, statistics, and machine learning. Recently, several MI estimators have been proposed that can achieve parametric MSE convergence rates. However, most of the previously proposed estimators have a high computational complexity of at least $O(N^2)$. We propose a unified method for empirical non-parametric estimation of a general MI function between random vectors in $\mathbb{R}^d$ based on $N$ i.i.d. samples. The reduced-complexity MI estimator, called the ensemble dependency graph estimator (EDGE), combines randomized locality-sensitive hashing (LSH), dependency graphs, and ensemble bias-reduction methods. We prove that EDGE achieves optimal computational complexity $O(N)$ and can achieve the optimal parametric MSE rate of $O(1/N)$ if the density is $d$ times differentiable. To the best of our knowledge, EDGE is the first non-parametric MI estimator that can achieve parametric MSE rates with linear time complexity. We illustrate the utility of EDGE for the analysis of the information plane (IP) in deep learning. Using EDGE, we shed light on the controversy over whether the compression property of the information bottleneck (IB) in fact holds for ReLU and other rectification functions in deep neural networks (DNNs).
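
As a toy illustration of the hashing-and-counting idea only (a plain plug-in estimate over grid buckets; it omits EDGE's dependency-graph weighting and ensemble bias correction):

import numpy as np
from collections import Counter

def bucketed_mi(X, Y, eps=0.3):
    """Toy plug-in MI estimate (in nats) over grid hash buckets."""
    n = X.shape[0]
    bx = [tuple(v) for v in np.floor(X / eps).astype(int)]  # bucket id of each X sample
    by = [tuple(v) for v in np.floor(Y / eps).astype(int)]  # bucket id of each Y sample
    cx, cy, cxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    return sum((nij / n) * np.log(n * nij / (cx[i] * cy[j]))
               for (i, j), nij in cxy.items())

rng = np.random.default_rng(1)
X = rng.normal(size=(20000, 1))
Y = X + 0.8 * rng.normal(size=(20000, 1))
print(bucketed_mi(X, Y))  # compare with the Gaussian value 0.5 * ln(1 + 1/0.64) ~ 0.47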