
A General Derivative Identity for the Conditional Mean Estimator in Gaussian Noise and Some Applications

 Added by Alex Dytso
 Publication date 2021
Language: English





Consider a channel ${\bf Y}={\bf X}+{\bf N}$, where ${\bf X}$ is an $n$-dimensional random vector and ${\bf N}$ is a Gaussian noise vector with covariance matrix ${\bf \mathsf{K}}_{\bf N}$. The object under consideration in this paper is the conditional mean of ${\bf X}$ given ${\bf Y}={\bf y}$, that is, the map ${\bf y} \to E[{\bf X}|{\bf Y}={\bf y}]$. Several identities in the literature connect $E[{\bf X}|{\bf Y}={\bf y}]$ to other quantities such as the conditional variance, score functions, and higher-order conditional moments. The objective of this paper is to provide a unifying view of these identities. In the first part of the paper, a general derivative identity for the conditional mean is derived. Specifically, for the Markov chain ${\bf U} \leftrightarrow {\bf X} \leftrightarrow {\bf Y}$, it is shown that the Jacobian of $E[{\bf U}|{\bf Y}={\bf y}]$ is given by ${\bf \mathsf{K}}_{{\bf N}}^{-1} {\bf Cov}({\bf X},{\bf U}|{\bf Y}={\bf y})$. In the second part of the paper, via various choices of ${\bf U}$, the new identity is used to generalize many of the known identities and derive some new ones. First, a simple proof of the Hatsell and Nolte identity for the conditional variance is shown. Second, a simple proof of the recursive identity due to Jaffer is provided. Third, a new connection between the conditional cumulants and the conditional expectation is shown. In particular, it is shown that the $k$-th derivative of $E[X|Y=y]$ is the $(k+1)$-th conditional cumulant. The third part of the paper considers some applications. In a first application, the power series and the compositional inverse of $E[X|Y=y]$ are derived. In a second application, the distribution of the estimator error $(X-E[X|Y])$ is derived. In a third application, consistent estimators (empirical Bayes estimators) of the conditional cumulants are constructed from an i.i.d. sequence $Y_1, \ldots, Y_n$.
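The scalar first-order case of the derivative identity (the Hatsell-Nolte relation, $\frac{d}{dy}E[X|Y=y] = \mathrm{Var}(X|Y=y)/\sigma^2$) is easy to check numerically. The sketch below is our own toy construction, not the paper's proof: it assumes a binary equiprobable input $X \in \{-1,+1\}$, for which the conditional mean has the closed form $E[X|Y=y]=\tanh(y/\sigma^2)$, and compares a finite-difference derivative against the conditional variance.

```python
import numpy as np

# Toy check of the scalar Hatsell-Nolte identity (our own sketch, not the
# paper's general vector proof). For Y = X + N with N ~ N(0, s2) and binary
# X in {-1, +1} equiprobable:
#   E[X | Y = y]      = tanh(y / s2)
#   Var(X | Y = y)    = 1 - tanh(y / s2)^2   (since E[X^2 | Y] = 1)
# and the identity states  d/dy E[X | Y = y] = Var(X | Y = y) / s2.

s2 = 0.7          # noise variance (arbitrary choice for the demo)
y = 0.4           # evaluation point (arbitrary choice)

cond_mean = lambda t: np.tanh(t / s2)

# Left-hand side: central finite-difference derivative of the conditional mean.
h = 1e-6
lhs = (cond_mean(y + h) - cond_mean(y - h)) / (2 * h)

# Right-hand side: conditional variance scaled by the noise variance.
rhs = (1 - cond_mean(y) ** 2) / s2

print(lhs, rhs)   # the two sides agree to finite-difference accuracy
```

The same recipe extends to the higher-order claim (the $k$-th derivative equalling the $(k+1)$-th conditional cumulant) by differentiating repeatedly, though the finite-difference noise grows quickly with $k$.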




The problem of estimating an arbitrary random vector from its observation corrupted by additive white Gaussian noise, where the cost function is taken to be the Minimum Mean $p$-th Error (MMPE), is considered. The classical Minimum Mean Square Error (MMSE) is a special case of the MMPE. Several bounds, properties and applications of the MMPE are derived and discussed. The optimal MMPE estimator is found for Gaussian and binary input distributions. Properties of the MMPE as a function of the input distribution, SNR and order $p$ are derived. In particular, it is shown that the MMPE is a continuous function of $p$ and SNR. These results are possible in view of interpolation and change of measure bounds on the MMPE. The Single-Crossing-Point Property (SCPP), which bounds the MMSE for all SNR values {\it above} a certain value at which the MMSE is known, together with the I-MMSE relationship, is a powerful tool in deriving converse proofs in information theory. By studying the notion of conditional MMPE, a unifying proof (i.e., for any $p$) of the SCPP is shown. A complementary bound to the SCPP is then shown, which bounds the MMPE for all SNR values {\it below} a certain value at which the MMPE is known. As a first application of the MMPE, a bound on the conditional differential entropy in terms of the MMPE is provided, which then yields a generalization of the Ozarow-Wyner lower bound on the mutual information achieved by a discrete input on a Gaussian noise channel. As a second application, the MMPE is shown to improve on previous characterizations of the phase transition phenomenon that manifests, in the limit as the length of the capacity-achieving code goes to infinity, as a discontinuity of the MMSE as a function of SNR. As a final application, the MMPE is used to show bounds on the second derivative of mutual information that tighten previously known bounds.
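The MMSE special case ($p=2$) of the MMPE cost is straightforward to check by simulation. The sketch below is a toy illustration, not the paper's analysis: it assumes a Gaussian input, for which the conditional-mean estimator is linear and the MMSE has the closed form $\sigma_X^2\sigma_N^2/(\sigma_X^2+\sigma_N^2)$.

```python
import numpy as np

# Toy Monte Carlo estimate of the p-th-power error E[|X - E[X|Y]|^p] of the
# conditional-mean estimator for a Gaussian input (our own sketch, with
# arbitrary variances). For p = 2 this is the classical MMSE, with closed
# form sx2 * sn2 / (sx2 + sn2).

rng = np.random.default_rng(0)
sx2, sn2, n = 2.0, 1.0, 200_000   # input variance, noise variance, samples

x = rng.normal(0.0, np.sqrt(sx2), n)
y = x + rng.normal(0.0, np.sqrt(sn2), n)
xhat = (sx2 / (sx2 + sn2)) * y    # E[X|Y] is linear in the Gaussian case

def mmpe_mc(p):
    """Monte Carlo p-th-power error of the conditional-mean estimator."""
    return np.mean(np.abs(x - xhat) ** p)

mmse_closed = sx2 * sn2 / (sx2 + sn2)   # = 2/3 for these variances
print(mmpe_mc(2), mmse_closed)
```

Note that the conditional-mean estimator is only guaranteed optimal for $p=2$; for other orders $p$ the optimal MMPE estimator generally differs, which is part of what the paper characterizes.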
The Gray and Wyner lossy source coding problem for a simple network, for sources that generate a tuple of jointly Gaussian random variables (RVs) $X_1 : \Omega \rightarrow {\mathbb R}^{p_1}$ and $X_2 : \Omega \rightarrow {\mathbb R}^{p_2}$, with respect to square-error distortion at the two decoders, is re-examined using (1) Hotelling's geometric approach to Gaussian RVs (the canonical variable form), and (2) van Putten's and van Schuppen's parametrization of joint distributions ${\bf P}_{X_1, X_2, W}$ by Gaussian RVs $W : \Omega \rightarrow {\mathbb R}^n$ which make $(X_1,X_2)$ conditionally independent, together with the weak stochastic realization of $(X_1, X_2)$. Item (2) is used to parametrize the lossy rate region of the Gray and Wyner source coding problem for joint decoding with mean-square error distortions ${\bf E}\big\{\|X_i-\hat{X}_i\|_{{\mathbb R}^{p_i}}^2\big\} \leq \Delta_i \in [0,\infty]$, $i=1,2$, by the covariance matrix of the RV $W$. From this it follows that Wyner's common information $C_W(X_1,X_2)$ (information definition) is achieved by a $W$ with identity covariance matrix, while a formula for Wyner's lossy common information (operational definition) is derived, given by $C_{WL}(X_1,X_2)=C_W(X_1,X_2) = \frac{1}{2} \sum_{j=1}^n \ln \left( \frac{1+d_j}{1-d_j} \right)$, for the distortion region $0 \leq \Delta_1 \leq \sum_{j=1}^n(1-d_j)$, $0 \leq \Delta_2 \leq \sum_{j=1}^n(1-d_j)$, where $1 > d_1 \geq d_2 \geq \ldots \geq d_n > 0$ are {\em the canonical correlation coefficients} computed from the canonical variable form of the tuple $(X_1, X_2)$. The methods are of fundamental importance to other problems of multi-user communication where conditional independence is imposed as a constraint.
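The quoted formula is easy to evaluate once the canonical correlation coefficients are in hand. The sketch below is our own toy illustration (with an assumed example covariance, not taken from the paper): it computes the $d_j$ as the singular values of $\Sigma_{11}^{-1/2}\Sigma_{12}\Sigma_{22}^{-1/2}$ and plugs them into the common-information expression.

```python
import numpy as np

# Toy evaluation of C_W = (1/2) * sum_j ln((1 + d_j)/(1 - d_j)) from the
# canonical correlation coefficients of a jointly Gaussian pair (X1, X2).
# The covariance blocks below are an arbitrary assumed example chosen so
# that the joint covariance is valid and all d_j lie in (0, 1).
S11 = np.array([[1.0, 0.3], [0.3, 1.0]])   # Cov(X1)
S22 = np.array([[1.0, 0.3], [0.3, 1.0]])   # Cov(X2)
S12 = np.array([[0.5, 0.0], [0.0, 0.3]])   # Cov(X1, X2)

def canonical_correlations(S11, S12, S22):
    # The d_j are the singular values of S11^{-1/2} S12 S22^{-1/2}.
    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T
    return np.linalg.svd(inv_sqrt(S11) @ S12 @ inv_sqrt(S22),
                         compute_uv=False)

d = canonical_correlations(S11, S12, S22)
C_W = 0.5 * np.sum(np.log((1 + d) / (1 - d)))
print(d, C_W)   # all d_j in (0, 1), so C_W is finite and positive
```

The formula diverges as any $d_j \to 1$, matching the intuition that nearly deterministic dependence between the sources requires unbounded common information.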
In this paper we study the problem of signal detection in Gaussian noise in a distributed setting. We derive a lower bound on the size the signal needs to have in order to be detectable. Moreover, we exhibit optimal distributed testing strategies that attain the lower bound.
In this paper, we present a new closed-form bit-error rate (BER) expression for $M$-ary pulse-amplitude modulation ($M$-PAM) over additive white Gaussian noise (AWGN) channels by analytically characterizing the bit decision regions and positions. The obtained expression is then used to derive the conditional BER of a rectangular quadrature amplitude modulation (QAM) for a given value of phase noise. Numerical results show that the impact of phase noise on the conditional BER performance is proportional to the constellation size. Moreover, it is observed that, for a given constellation size, square QAM achieves the lowest phase-noise-induced performance loss compared to other rectangular constellations.
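The paper's exact BER expression is not reproduced here; as a simpler illustration of the same AWGN setup, the textbook symbol-error-rate formula for uniform $M$-PAM, $\mathrm{SER} = \frac{2(M-1)}{M} Q\!\left(\frac{\Delta}{2\sigma}\right)$ with level spacing $\Delta$, can be verified by Monte Carlo. This is our own toy sketch with an assumed 4-PAM constellation, not the paper's derivation.

```python
import numpy as np
from math import erfc, sqrt

# Toy Monte Carlo check of the standard M-PAM symbol-error-rate formula
#   SER = 2(M-1)/M * Q(Delta / (2*sigma))
# for 4-PAM over AWGN (our own sketch; the paper derives an exact BER
# expression instead, which is more involved).

rng = np.random.default_rng(1)
levels = np.array([-3.0, -1.0, 1.0, 3.0])   # 4-PAM, level spacing Delta = 2
sigma, n = 0.5, 200_000                      # noise std dev, trials

tx = rng.choice(levels, n)
rx = tx + rng.normal(0.0, sigma, n)
# Minimum-distance detection: pick the nearest constellation point.
det = levels[np.argmin(np.abs(rx[:, None] - levels[None, :]), axis=1)]
ser_mc = np.mean(det != tx)

Q = lambda x: 0.5 * erfc(x / sqrt(2))
M, Delta = 4, 2.0
ser_formula = 2 * (M - 1) / M * Q(Delta / (2 * sigma))
print(ser_mc, ser_formula)   # simulation matches the formula closely
```

With Gray labeling, dividing the SER by $\log_2 M$ gives the usual high-SNR BER approximation, which is exactly the kind of approximation the paper's exact expression is meant to replace.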
Tal Gariby, Uri Erez (2019)
The problem of constructing lattices such that their quantization noise approaches a desired distribution is studied. It is shown that, asymptotically in the dimension, lattice quantization noise can approach a broad family of distribution functions with independent and identically distributed components.
