
On the Minimum Mean $p$-th Error in Gaussian Noise Channels and its Applications

Published by Alex Dytso
Publication date: 2016
Research field: Information engineering
Language: English





The problem of estimating an arbitrary random vector from its observation corrupted by additive white Gaussian noise, where the cost function is taken to be the Minimum Mean $p$-th Error (MMPE), is considered. The classical Minimum Mean Square Error (MMSE) is a special case of the MMPE. Several bounds, properties and applications of the MMPE are derived and discussed. The optimal MMPE estimator is found for Gaussian and binary input distributions. Properties of the MMPE as a function of the input distribution, SNR and order $p$ are derived. In particular, it is shown that the MMPE is a continuous function of $p$ and SNR. These results are possible in view of interpolation and change of measure bounds on the MMPE. The Single-Crossing-Point Property (SCPP), which bounds the MMSE for all SNR values \emph{above} a certain value at which the MMSE is known, together with the I-MMSE relationship is a powerful tool in deriving converse proofs in information theory. By studying the notion of conditional MMPE, a unifying proof (i.e., for any $p$) of the SCPP is shown. A complementary bound to the SCPP is then shown, which bounds the MMPE for all SNR values \emph{below} a certain value at which the MMPE is known. As a first application of the MMPE, a bound on the conditional differential entropy in terms of the MMPE is provided, which then yields a generalization of the Ozarow-Wyner lower bound on the mutual information achieved by a discrete input on a Gaussian noise channel. As a second application, the MMPE is shown to improve on previous characterizations of the phase transition phenomenon that manifests, in the limit as the length of the capacity-achieving code goes to infinity, as a discontinuity of the MMSE as a function of SNR. As a final application, the MMPE is used to show bounds on the second derivative of mutual information that tighten previously known bounds.
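To make the MMPE concrete, here is a minimal Monte Carlo sketch (not from the paper) for a scalar channel $Y=\sqrt{\mathrm{snr}}\,X+N$ with standard Gaussian input and noise, a setup assumed here for illustration. For a Gaussian input the conditional mean $E[X|Y]$ is linear and the estimation error is Gaussian with variance $1/(1+\mathrm{snr})$, so the sample $p$-th errors can be checked against closed-form Gaussian moments:

```python
import numpy as np

# Monte Carlo sketch of the mean p-th error for a scalar Gaussian channel
# Y = sqrt(snr) * X + N with X, N ~ N(0, 1) independent (assumed setup).
# For Gaussian input the conditional mean E[X|Y] is linear, and the error
# X - E[X|Y] is Gaussian with variance 1/(1 + snr), so closed-form p-th
# moments are available to check the estimates against.
rng = np.random.default_rng(0)
snr, n = 3.0, 1_000_000

x = rng.standard_normal(n)
noise = rng.standard_normal(n)
y = np.sqrt(snr) * x + noise

x_hat = (np.sqrt(snr) / (1.0 + snr)) * y   # E[X|Y]: linear for Gaussian input

def mmpe_estimate(p):
    """Sample mean of |X - Xhat|^p, an estimate of the mean p-th error."""
    return np.mean(np.abs(x - x_hat) ** p)

mmse = mmpe_estimate(2)   # approx 1/(1+snr) = 0.25
mmp4 = mmpe_estimate(4)   # approx 3/(1+snr)^2 = 0.1875 (Gaussian 4th moment)
print(mmse, mmp4)
```

The conditional-mean estimator is used for both orders $p$ here, consistent with the abstract's statement that the optimal MMPE estimator is found for Gaussian inputs.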




Read also

Consider a channel $\mathbf{Y}=\mathbf{X}+\mathbf{N}$ where $\mathbf{X}$ is an $n$-dimensional random vector, and $\mathbf{N}$ is a Gaussian vector with a covariance matrix $\mathsf{K}_{\mathbf{N}}$. The object under consideration in this paper is the conditional mean of $\mathbf{X}$ given $\mathbf{Y}=\mathbf{y}$, that is $\mathbf{y} \to E[\mathbf{X}|\mathbf{Y}=\mathbf{y}]$. Several identities in the literature connect $E[\mathbf{X}|\mathbf{Y}=\mathbf{y}]$ to other quantities such as the conditional variance, score functions, and higher-order conditional moments. The objective of this paper is to provide a unifying view of these identities. In the first part of the paper, a general derivative identity for the conditional mean is derived. Specifically, for the Markov chain $\mathbf{U} \leftrightarrow \mathbf{X} \leftrightarrow \mathbf{Y}$, it is shown that the Jacobian of $E[\mathbf{U}|\mathbf{Y}=\mathbf{y}]$ is given by $\mathsf{K}_{\mathbf{N}}^{-1} \mathrm{Cov}(\mathbf{X}, \mathbf{U} | \mathbf{Y}=\mathbf{y})$. In the second part of the paper, via various choices of $\mathbf{U}$, the new identity is used to generalize many of the known identities and derive some new ones. First, a simple proof of the Hatsell and Nolte identity for the conditional variance is shown. Second, a simple proof of the recursive identity due to Jaffer is provided. Third, a new connection between the conditional cumulants and the conditional expectation is shown. In particular, it is shown that the $k$-th derivative of $E[X|Y=y]$ is the $(k+1)$-th conditional cumulant. The third part of the paper considers some applications. In a first application, the power series and the compositional inverse of $E[X|Y=y]$ are derived. In a second application, the distribution of the estimation error $(X-E[X|Y])$ is derived. In a third application, we construct consistent estimators (empirical Bayes estimators) of the conditional cumulants from an i.i.d. sequence $Y_1,\dots,Y_n$.
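A quick scalar sanity check of the derivative identity (a sketch with an assumed setup, not from the paper): for equiprobable binary input $X\in\{\pm 1\}$ and $Y=X+N$ with $N\sim\mathcal{N}(0,1)$, the conditional mean is $E[X|Y=y]=\tanh(y)$ and, since $X^2=1$, the conditional variance is $1-\tanh^2(y)$. With $\mathsf{K}_{\mathbf{N}}=1$ and $\mathbf{U}=\mathbf{X}$, the Jacobian identity reduces to $\frac{d}{dy}E[X|Y=y]=\mathrm{Var}(X|Y=y)$, which can be verified numerically:

```python
import numpy as np

# Scalar check of d/dy E[X|Y=y] = Var(X|Y=y) for binary input X in {+1,-1},
# noise N ~ N(0,1), Y = X + N (assumed illustrative setup).
# Here E[X|Y=y] = tanh(y) and Var(X|Y=y) = 1 - tanh(y)^2.
y = np.linspace(-3.0, 3.0, 601)
cond_mean = np.tanh(y)
cond_var = 1.0 - np.tanh(y) ** 2

d_cond_mean = np.gradient(cond_mean, y)   # numerical derivative of E[X|Y=y]

max_gap = np.max(np.abs(d_cond_mean - cond_var))
print(max_gap)   # small: the two sides agree up to finite-difference error
```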
The minimum mean-square error (MMSE) achievable by optimal estimation of a random variable $Y\in\mathbb{R}$ given another random variable $X\in\mathbb{R}^{d}$ is of much interest in a variety of statistical contexts. In this paper we propose two estimators for the MMSE, one based on a two-layer neural network and the other on a special three-layer neural network. We derive lower bounds for the MMSE based on the proposed estimators and the Barron constant of an appropriate function of the conditional expectation of $Y$ given $X$. Furthermore, we derive a general upper bound for the Barron constant that, when $X\in\mathbb{R}$ is post-processed by the additive Gaussian mechanism, produces order-optimal estimates in the large noise regime.
Xin Zhang, S. H. Song (2021)
The mutual information (MI) of Gaussian multi-input multi-output (MIMO) channels has been evaluated by utilizing random matrix theory (RMT) and shown to asymptotically follow a Gaussian distribution, where the ergodic mutual information (EMI) converges to a deterministic quantity. However, with non-Gaussian channels, there is a bias between the EMI and its deterministic equivalent (DE), whose evaluation is not available in the literature. This bias of the EMI is related to the bias for the trace of the resolvent in large RMT. In this paper, we first derive the bias for the trace of the resolvent, which is further extended to compute the bias for the linear spectral statistics (LSS). Then, we apply the above results to non-Gaussian MIMO channels to determine the bias for the EMI. It is also proved that the bias for the EMI is $-0.5$ times that for the variance of the MI. Finally, the derived bias is utilized to modify the central limit theorem (CLT) and approximate the outage probability. Numerical results show that the modified CLT significantly outperforms the previous results in approximating the distribution of the MI and can accurately determine the outage probability.
In this paper, we focus on the two-user Gaussian interference channel (GIC), and study the Han-Kobayashi (HK) coding/decoding strategy with the objective of designing low-density parity-check (LDPC) codes. A code optimization algorithm is proposed which adopts a random perturbation technique via tracking the average mutual information. The degree distribution optimization and convergence threshold computation are carried out for strong and weak interference channels, employing binary phase-shift keying (BPSK). Under strong interference, it is observed that optimized codes operate close to the capacity boundary. For the case of weak interference, it is shown that via the newly designed codes, a nontrivial rate pair is achievable, which is not attainable by single-user codes with time-sharing. Performance of the designed LDPC codes is also studied for finite block lengths through simulations of specific codes picked from the optimized degree distributions.
The capacity-achieving input distribution of the discrete-time, additive white Gaussian noise (AWGN) channel with an amplitude constraint is discrete and seems difficult to characterize explicitly. A dual capacity expression is used to derive analytic capacity upper bounds for scalar and vector AWGN channels. The scalar bound improves on McKellips' bound and is within 0.1 bits of capacity for all signal-to-noise ratios (SNRs). The two-dimensional bound is within 0.15 bits of capacity provably up to 4.5 dB, and numerical evidence suggests a similar gap for all SNRs.