
Characterization of Conditional Independence and Weak Realizations of Multivariate Gaussian Random Variables: Applications to Networks

Posted by Charalambos Charalambous D.
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





The Gray and Wyner lossy source coding problem for a simple network, for sources that generate a tuple of jointly Gaussian random variables (RVs) $X_1 : \Omega \rightarrow \mathbb{R}^{p_1}$ and $X_2 : \Omega \rightarrow \mathbb{R}^{p_2}$, with respect to square-error distortion at the two decoders, is re-examined using (1) Hotelling's geometric approach to Gaussian RVs (the canonical variable form), and (2) van Putten's and van Schuppen's parametrization of joint distributions $\mathbf{P}_{X_1, X_2, W}$ by Gaussian RVs $W : \Omega \rightarrow \mathbb{R}^n$ which make $(X_1, X_2)$ conditionally independent, and the weak stochastic realization of $(X_1, X_2)$. Item (2) is used to parametrize the lossy rate region of the Gray and Wyner source coding problem for joint decoding with mean-square error distortions $\mathbf{E}\big\{ \|X_i - \hat{X}_i\|_{\mathbb{R}^{p_i}}^2 \big\} \leq \Delta_i \in [0,\infty]$, $i = 1, 2$, by the covariance matrix of the RV $W$. From this it then follows that Wyner's common information $C_W(X_1, X_2)$ (information definition) is achieved by a $W$ with identity covariance matrix, while a formula for Wyner's lossy common information (operational definition) is derived, given by $C_{WL}(X_1, X_2) = C_W(X_1, X_2) = \frac{1}{2} \sum_{j=1}^n \ln\left( \frac{1+d_j}{1-d_j} \right)$ for the distortion region $0 \leq \Delta_1 \leq \sum_{j=1}^n (1-d_j)$, $0 \leq \Delta_2 \leq \sum_{j=1}^n (1-d_j)$, where $1 > d_1 \geq d_2 \geq \ldots \geq d_n > 0$ are \emph{the canonical correlation coefficients} computed from the canonical variable form of the tuple $(X_1, X_2)$. The methods are of fundamental importance to other problems of multi-user communication, where conditional independence is imposed as a constraint.
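
The closed-form expression for $C_W(X_1, X_2)$ above depends on the tuple $(X_1, X_2)$ only through its canonical correlation coefficients. As a rough numerical illustration (with assumed covariance blocks, not data from the paper), the sketch below computes the coefficients $d_j$ as the singular values of $Q_{11}^{-1/2} Q_{12} Q_{22}^{-1/2}$, which is the standard way of obtaining them from the canonical variable form, and then evaluates $C_W = \frac{1}{2} \sum_j \ln\frac{1+d_j}{1-d_j}$.

```python
import numpy as np

def inv_sqrt(Q):
    """Inverse symmetric square root of a positive-definite matrix."""
    w, V = np.linalg.eigh(Q)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def canonical_correlations(Q11, Q12, Q22):
    """Canonical correlation coefficients: singular values of Q11^{-1/2} Q12 Q22^{-1/2}."""
    return np.linalg.svd(inv_sqrt(Q11) @ Q12 @ inv_sqrt(Q22), compute_uv=False)

def wyner_common_information(d):
    """C_W = (1/2) * sum_j ln((1 + d_j)/(1 - d_j)) over coefficients strictly inside (0, 1)."""
    d = d[(d > 0) & (d < 1)]
    return 0.5 * np.sum(np.log((1.0 + d) / (1.0 - d)))

# Illustrative covariance blocks of (X_1, X_2); assumed numbers, not taken from the paper.
Q11 = np.array([[1.0, 0.2], [0.2, 1.0]])
Q22 = np.array([[1.0, 0.1], [0.1, 1.0]])
Q12 = np.array([[0.6, 0.1], [0.0, 0.4]])

d = canonical_correlations(Q11, Q12, Q22)
print("canonical correlations d_j:", d)
print("C_W(X1, X2) in nats:", wyner_common_information(d))
```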




Read also

Consider a channel $\mathbf{Y} = \mathbf{X} + \mathbf{N}$ where $\mathbf{X}$ is an $n$-dimensional random vector, and $\mathbf{N}$ is a Gaussian vector with a covariance matrix $\mathbf{K}_{\mathbf{N}}$. The object under consideration in this paper is the conditional mean of $\mathbf{X}$ given $\mathbf{Y} = \mathbf{y}$, that is, $\mathbf{y} \to E[\mathbf{X} \mid \mathbf{Y} = \mathbf{y}]$. Several identities in the literature connect $E[\mathbf{X} \mid \mathbf{Y} = \mathbf{y}]$ to other quantities such as the conditional variance, score functions, and higher-order conditional moments. The objective of this paper is to provide a unifying view of these identities. In the first part of the paper, a general derivative identity for the conditional mean is derived. Specifically, for the Markov chain $\mathbf{U} \leftrightarrow \mathbf{X} \leftrightarrow \mathbf{Y}$, it is shown that the Jacobian of $E[\mathbf{U} \mid \mathbf{Y} = \mathbf{y}]$ is given by $\mathbf{K}_{\mathbf{N}}^{-1} \mathbf{Cov}(\mathbf{X}, \mathbf{U} \mid \mathbf{Y} = \mathbf{y})$. In the second part of the paper, via various choices of $\mathbf{U}$, the new identity is used to generalize many of the known identities and derive some new ones. First, a simple proof of the Hatsell and Nolte identity for the conditional variance is shown. Second, a simple proof of the recursive identity due to Jaffer is provided. Third, a new connection between the conditional cumulants and the conditional expectation is shown. In particular, it is shown that the $k$-th derivative of $E[X \mid Y = y]$ is the $(k+1)$-th conditional cumulant. The third part of the paper considers some applications. In a first application, the power series and the compositional inverse of $E[X \mid Y = y]$ are derived. In a second application, the distribution of the estimator error $(X - E[X \mid Y])$ is derived. In a third application, we construct consistent estimators (empirical Bayes estimators) of the conditional cumulants from an i.i.d. sequence $Y_1, \ldots, Y_n$.
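
For the scalar case with $U = X$, the Jacobian identity above reduces to $\frac{d}{dy} E[X \mid Y = y] = \sigma_N^{-2}\,\mathrm{Var}(X \mid Y = y)$ (the conditional-variance identity). A minimal numerical check of that reduction, using an assumed two-point prior so the posterior is available in closed form, is sketched below; it is my own illustration, not code from the paper.

```python
import numpy as np

sigma = 0.7                      # noise standard deviation (assumed)
xs = np.array([-1.0, 2.0])       # support of the two-point prior on X (assumed)
px = np.array([0.3, 0.7])        # prior probabilities (assumed)

def posterior(y):
    """Posterior weights P(X = x_k | Y = y) for the additive Gaussian channel Y = X + N."""
    w = px * np.exp(-(y - xs) ** 2 / (2 * sigma ** 2))
    return w / w.sum()

def cond_mean(y):
    return float(posterior(y) @ xs)

def cond_var(y):
    p = posterior(y)
    m = p @ xs
    return float(p @ (xs - m) ** 2)

y0, h = 0.4, 1e-5
numeric = (cond_mean(y0 + h) - cond_mean(y0 - h)) / (2 * h)   # finite-difference derivative
identity = cond_var(y0) / sigma ** 2                          # sigma^{-2} * Var(X | Y = y0)
print(numeric, identity)   # the two values should agree to several decimal places
```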
The authors have recently defined the Rényi information dimension rate $d(\{X_t\})$ of a stationary stochastic process $\{X_t,\, t \in \mathbb{Z}\}$ as the entropy rate of the uniformly-quantized process divided by minus the logarithm of the quantizer step size $1/m$ in the limit as $m \to \infty$ (B. Geiger and T. Koch, "On the information dimension rate of stochastic processes," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Aachen, Germany, June 2017). For Gaussian processes with a given spectral distribution function $F_X$, they showed that the information dimension rate equals the Lebesgue measure of the set of harmonics where the derivative of $F_X$ is positive. This paper extends this result to multivariate Gaussian processes with a given matrix-valued spectral distribution function $F_{\mathbf{X}}$. It is demonstrated that the information dimension rate equals the average rank of the derivative of $F_{\mathbf{X}}$. As a side result, it is shown that the scale and translation invariance of information dimension carries over from random variables to stochastic processes.
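
As a toy illustration of the statement that the information dimension rate equals the average rank of the derivative of $F_{\mathbf{X}}$, the sketch below (an assumed two-band spectral density, not an example from the paper) averages the rank of the spectral density matrix over a frequency grid: a rank-1 density on half of the band and a full-rank density on the other half give an average rank of 1.5.

```python
import numpy as np

def spectral_density(theta):
    """Assumed 2x2 spectral density matrix at normalized frequency theta in [-1/2, 1/2)."""
    if abs(theta) < 0.25:                     # low band: rank 1 (perfectly correlated components)
        v = np.array([[1.0], [1.0]])
        return v @ v.T
    return np.eye(2)                          # high band: full rank

thetas = np.linspace(-0.5, 0.5, 2000, endpoint=False)
ranks = [np.linalg.matrix_rank(spectral_density(t)) for t in thetas]
print("information dimension rate = average rank:", np.mean(ranks))   # approximately 1.5
```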
We study a class of determinantal ideals that are related to conditional independence (CI) statements with hidden variables. Such CI statements correspond to determinantal conditions on a matrix whose entries are probabilities of events involving the observed random variables. We focus on an example that generalizes the CI ideals of the intersection axiom. In this example, the minimal primes are again determinantal ideals, which is not true in general.
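
To make the determinantal flavour of such CI statements concrete, the following sketch (a generic latent-class example, not the intersection-axiom ideals studied in the paper) builds the joint probability matrix of two observed variables that are conditionally independent given a hidden variable with two states; the matrix then has rank at most 2, so every $3 \times 3$ minor vanishes, which is exactly a determinantal condition on observable probabilities.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
r = 2                                         # number of hidden states (assumed)
pH = rng.dirichlet(np.ones(r))                # Prob(H = h)
pX1_H = rng.dirichlet(np.ones(4), size=r)     # Prob(X1 = i | H = h), 4 observed levels
pX2_H = rng.dirichlet(np.ones(4), size=r)     # Prob(X2 = j | H = h), 4 observed levels

# Joint probability matrix P[i, j] = Prob(X1 = i, X2 = j); rank <= r by construction.
P = sum(pH[h] * np.outer(pX1_H[h], pX2_H[h]) for h in range(r))

minors = [np.linalg.det(P[np.ix_(rows, cols)])
          for rows in itertools.combinations(range(4), 3)
          for cols in itertools.combinations(range(4), 3)]
print("max |3x3 minor|:", max(abs(m) for m in minors))   # numerically ~0
```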
Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information, but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them has consistent performance and high accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas were highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may potentially bring convenience to the applications of information theory to many practical and theoretical problems.
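
The Fisher-information approximation referred to in the opening sentence can be compared against the exact mutual information in the simplest continuous setting. The sketch below (scalar Gaussian channel with assumed variances) uses the classical asymptotic $I(X;Y) \approx h(X) + \frac{1}{2} E[\ln(J(x)/(2\pi e))]$, where $J(x)$ is the Fisher information of the likelihood; this is the standard approximation alluded to above, not one of the new metrics proposed in the paper, and it becomes tight at high SNR.

```python
import numpy as np

s_x2, s_n2 = 4.0, 0.01           # signal and noise variances (assumed example)

exact = 0.5 * np.log(1.0 + s_x2 / s_n2)                    # exact MI of Y = X + N in nats
h_X = 0.5 * np.log(2 * np.pi * np.e * s_x2)                # differential entropy of X ~ N(0, s_x2)
fisher_J = 1.0 / s_n2                                       # J(x) = 1/s_n2, constant in x
approx = h_X + 0.5 * np.log(fisher_J / (2 * np.pi * np.e))  # Fisher-information approximation

print(f"exact MI: {exact:.4f} nats, Fisher approximation: {approx:.4f} nats")
```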
We consider a secure communication scenario through the two-user Gaussian interference channel: each transmitter (user) has a confidential message to send reliably to its intended receiver while keeping it secret from the other receiver. Prior work investigated the performance of two different approaches for this scenario: i.i.d. Gaussian random codes and real alignment of structured codes. While the latter achieves the optimal sum secure degrees of freedom (s.d.o.f.), its extension to finite SNR regimes is challenging. In this paper, we propose a new achievability scheme for the weak and the moderately weak interference regimes, in which the reliability as well as the confidentiality of the transmitted messages are maintained at any finite SNR value. Our scheme uses lattice structure, structured jamming codewords, and lattice alignment in the encoding, and the asymmetric compute-and-forward strategy in the decoding. We show that our lower bound on the sum secure rates scales linearly with $\log(\mathrm{SNR})$ and hence outperforms i.i.d. Gaussian random codes. Furthermore, we show that our achievable result is asymptotically optimal. Finally, we provide a discussion on an extension of our scheme to $K > 2$ users.