
A Vector Generalization of Costa's Entropy-Power Inequality with Applications

Posted by Ruoheng Liu
Publication date: 2009
Research field: Information Engineering
Paper language: English





This paper considers an entropy-power inequality (EPI) of Costa and presents a natural vector generalization with a real positive semidefinite matrix parameter. This new inequality is proved using a perturbation approach, via a fundamental relationship between the derivative of mutual information and the minimum mean-square error (MMSE) in linear vector Gaussian channels. As an application, a new extremal entropy inequality is derived from the generalized Costa EPI and then used to establish the secrecy capacity regions of the degraded vector Gaussian broadcast channel with layered confidential messages.
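For orientation, the scalar statements behind this abstract are the following (classical background results, not restated from the paper). Costa's EPI says that the entropy power of $X + \sqrt{t}\,Z$, for $Z$ standard Gaussian and independent of $X$, is concave in $t \ge 0$; the I-MMSE relation of Guo, Shamai, and Verdú ties the derivative of mutual information in a Gaussian channel to the MMSE:

$$ N(X) = \frac{1}{2\pi e}\, e^{2h(X)/n}, \qquad \frac{d^2}{dt^2}\, N\big(X + \sqrt{t}\,Z\big) \le 0, $$

$$ \frac{d}{d\gamma}\, I\big(X;\, \sqrt{\gamma}\,X + Z\big) = \frac{1}{2}\,\mathrm{mmse}(\gamma). $$

The paper's generalization replaces the scalar perturbation parameter $t$ by a real positive semidefinite matrix.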




Read also

An extension of the entropy power inequality to the form $N_r^\alpha(X+Y) \geq N_r^\alpha(X) + N_r^\alpha(Y)$ with arbitrary independent summands $X$ and $Y$ in $\mathbb{R}^n$ is obtained for the Rényi entropy and powers $\alpha \geq (r+1)/2$.
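For context, the quantities in this statement are standardly defined as follows (textbook definitions, not quoted from the abstract): for a random vector $X$ in $\mathbb{R}^n$ with density $f$ and order $r > 0$, $r \neq 1$,

$$ h_r(X) = \frac{1}{1-r}\, \log \int_{\mathbb{R}^n} f(x)^r\, dx, \qquad N_r(X) = e^{2 h_r(X)/n}, $$

with $r \to 1$ recovering the Shannon differential entropy and, with $\alpha = 1$, the classical Shannon EPI. (Some authors include a normalizing constant in $N_r$; it cancels on both sides of the inequality above.)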
Using a sharp version of the reverse Young inequality, and a Rényi entropy comparison result due to Fradelizi, Madiman, and Wang, the authors are able to derive Rényi entropy power inequalities for log-concave random vectors when the Rényi parameters belong to $(0,1)$. Furthermore, the estimates are shown to be sharp up to absolute constants.
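For reference, the Young inequality machinery alluded to here is the following (standard statements, recalled from memory rather than quoted from the paper): for densities $f, g$ on $\mathbb{R}^n$ and exponents satisfying $1/p + 1/q = 1 + 1/r$,

$$ \|f * g\|_r \le C_{p,q,r}^{\,n}\, \|f\|_p\, \|g\|_q \quad \text{for } p, q, r \ge 1, $$

with a sharp constant due to Beckner and Brascamp-Lieb, and the inequality reverses, again with a sharp constant, when $0 < p, q, r \le 1$. It is this reversed regime that lines up with Rényi parameters in $(0,1)$.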
The paper establishes the equality condition in the I-MMSE proof of the entropy power inequality (EPI). This is done by establishing an exact expression for the deficit between the two sides of the EPI. Interestingly, a necessary condition for the equality is established by making a connection to the famous Cauchy functional equation.
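For readers unfamiliar with it, the Cauchy functional equation referenced here is (a standard definition, not quoted from the paper):

$$ f(x+y) = f(x) + f(y), $$

whose only measurable solutions are the linear maps $f(x) = cx$. A plausible reading, though the precise mechanism is the paper's contribution, is that this linearity rigidity is what pins down the structure of the equality case.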
We derive a lower bound on the smallest output entropy that can be achieved via vector quantization of a $d$-dimensional source with given expected $r$th-power distortion. Specialized to the one-dimensional case, and in the limit of vanishing distortion, this lower bound converges to the output entropy achieved by a uniform quantizer, thereby recovering the result by Gish and Pierce that uniform quantizers are asymptotically optimal as the allowed distortion tends to zero. Our lower bound holds for all $d$-dimensional memoryless sources having finite differential entropy and whose integer part has finite entropy. In contrast to Gish and Pierce, we do not require any additional constraints on the continuity or decay of the source probability density function. For one-dimensional sources, the derivation of the lower bound reveals a necessary condition for a sequence of quantizers to be asymptotically optimal as the allowed distortion tends to zero. This condition implies that any sequence of asymptotically-optimal almost-regular quantizers must converge to a uniform quantizer as the allowed distortion tends to zero.
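To make the one-dimensional, small-distortion limit concrete (a classical asymptotic due to Rényi, not restated from the abstract): if $X$ has finite differential entropy $h(X)$ and its integer part has finite entropy, the uniform quantizer $Q_\Delta$ with step size $\Delta$ satisfies

$$ H\big(Q_\Delta(X)\big) = h(X) - \log \Delta + o(1) \quad \text{as } \Delta \to 0, $$

while, under the usual high-resolution assumptions, its expected squared distortion behaves like $\Delta^2/12$. Together these give the small-distortion trade-off that the lower bound above is shown to match.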
Olivier Rioul, Ram Zamir (2019)
The matrix version of the entropy-power inequality for real or complex coefficients and variables is proved using a transportation argument that easily settles the equality case. An application to blind source extraction is given.
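For reference, the classical EPI that the matrix version generalizes, together with its well-known equality condition (standard facts, not specific to this paper): for independent random vectors $X$ and $Y$ in $\mathbb{R}^n$,

$$ N(X+Y) \ge N(X) + N(Y), \qquad N(X) = \frac{1}{2\pi e}\, e^{2h(X)/n}, $$

with equality if and only if $X$ and $Y$ are Gaussian with proportional covariance matrices. Transportation (change-of-variables) arguments are well suited to such equality analyses, which may be why they settle the equality case cleanly here.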