
On MMSE Properties and I-MMSE Implications in Parallel MIMO Gaussian Channels

Published by: Ronit Bustin
Publication date: 2010
Research field: Information engineering
Paper language: English

This paper extends the single crossing point property of the scalar MMSE function, derived by Guo, Shamai and Verdu (first presented in ISIT 2008), to the parallel degraded MIMO scenario. It is shown that the matrix Q(t), which is the difference between the MMSE assuming a Gaussian input and the MMSE assuming an arbitrary input, has, at most, a single crossing point for each of its eigenvalues. Together with the I-MMSE relationship, a fundamental connection between Information Theory and Estimation Theory, this new property is employed to derive results in Information Theory. As a simple application of this property we provide an alternative converse proof for the broadcast channel (BC) capacity region under covariance constraint in this specific setting.
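For reference, the I-MMSE relationship invoked above is the scalar identity of Guo, Shamai and Verdú; in LaTeX (the notation here is the editor's, not necessarily the paper's), for the channel $Y = \sqrt{\mathsf{snr}}\,X + N$ with $N \sim \mathcal{N}(0,1)$:

\frac{\mathrm{d}}{\mathrm{d}\,\mathsf{snr}}\, I\bigl(X;\sqrt{\mathsf{snr}}\,X+N\bigr) = \frac{1}{2}\,\mathrm{mmse}(X,\mathsf{snr})

The single crossing point statement then concerns $\mathbf{Q}(t) = \mathbf{E}_G(t) - \mathbf{E}(t)$, the Gaussian-input MMSE matrix minus the arbitrary-input MMSE matrix, each of whose eigenvalues $\lambda_i(\mathbf{Q}(t))$ changes sign at most once as $t$ increases.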

Read also

The scalar additive Gaussian noise channel has the single crossing point property between the minimum mean-square error (MMSE) in the estimation of the input given the channel output, assuming a Gaussian input to the channel, and the MMSE assuming an arbitrary input. This paper extends the result to the parallel MIMO additive Gaussian channel in three phases: i) the channel matrix is the identity matrix, and the Gaussian input is limited to a vector of i.i.d. Gaussian elements; the single crossing point property is with respect to the snr (as in the scalar case). ii) The channel matrix is arbitrary, and the Gaussian input is limited to a vector of independent Gaussian elements; a single crossing point property is derived for each diagonal element of the MMSE matrix. iii) The Gaussian input is allowed to be an arbitrary Gaussian random vector; a single crossing point property is derived for each eigenvalue of the MMSE matrix. These three extensions are then translated to new information theoretic properties of the mutual information, using the fundamental relationship between estimation theory and information theory. The results of the last phase are also translated to a new property of Fisher's information. Finally, the applicability of all three extensions to information theoretic problems is demonstrated through: a proof of a special case of Shannon's vector EPI, a converse proof of the capacity region of the parallel degraded MIMO broadcast channel (BC) under per-antenna power constraints and under covariance constraints, and a converse proof of the capacity region of the compound parallel degraded MIMO BC under a covariance constraint.
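Phase (i) reduces, per coordinate, to the scalar property, which is easy to check numerically. The following is a minimal sketch (an editor's illustration, not code from the paper; the BPSK input and the Gaussian reference variance of 0.5 are arbitrary choices) comparing the closed-form Gaussian-input MMSE with the BPSK-input MMSE on $Y=\sqrt{\mathsf{snr}}\,X+N$ and counting sign changes of their difference:

import numpy as np

# Gauss-Hermite (probabilists') quadrature: approximates E[f(N)], N ~ N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(80)
weights = weights / np.sqrt(2.0 * np.pi)

def mmse_bpsk(snr):
    # X = +/-1 equiprobable; E[X|Y] = tanh(sqrt(snr) * Y). By symmetry,
    # condition on X = +1, so Y = sqrt(snr) + N.
    return 1.0 - np.sum(weights * np.tanh(snr + np.sqrt(snr) * nodes) ** 2)

def mmse_gauss(snr, var=0.5):
    # Closed form for a Gaussian input of variance `var`.
    return var / (1.0 + var * snr)

snrs = np.linspace(0.01, 10.0, 500)
q = np.array([mmse_gauss(s) - mmse_bpsk(s) for s in snrs])
sign_changes = np.where(np.diff(np.sign(q)) != 0)[0]
print("number of sign changes of q(snr):", sign_changes.size)  # expect 1
if sign_changes.size:
    print("crossing near snr =", round(float(snrs[sign_changes[0]]), 2))

At snr = 0 the Gaussian curve starts below (variance 0.5 versus BPSK's unit power), while for large snr the Gaussian MMSE decays only as 1/snr, so the difference changes sign exactly once.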
This work concerns the behavior of good (capacity-achieving) codes in several multi-user settings in the Gaussian regime, in terms of their minimum mean-square error (MMSE) behavior. The settings investigated in this context include the Gaussian wiretap channel, the Gaussian broadcast channel (BC), and the Gaussian BC with confidential messages (BCC). In particular, this work addresses the effects of transmitting such codes on unintended receivers, that is, receivers that neither require reliable decoding of the transmitted messages nor are eavesdroppers that must be kept ignorant, to some extent, of the transmitted message. This work also examines the effect on the capacity region of limiting the allowed disturbance, in terms of MMSE, at some unintended receiver. This trade-off between the capacity region and the disturbance constraint is given explicitly for the Gaussian BC and for the secrecy capacity region of the Gaussian BCC.
This paper considers a Gaussian channel with one transmitter and two receivers. The goal is to maximize the communication rate at the intended/primary receiver subject to a disturbance constraint at the unintended/secondary receiver. The disturbance is measured in terms of the minimum mean square error (MMSE) of the interference that the transmission to the primary receiver inflicts on the secondary receiver. The paper presents a new upper bound for the problem of maximizing the mutual information subject to an MMSE constraint. The new bound holds for vector inputs of any length and recovers a previously known limiting expression (as the length of the vector input tends to infinity) from the work of Bustin et al. The key technical novelty is a new upper bound on the MMSE. This bound allows one to bound the MMSE for all signal-to-noise ratio (SNR) values below a certain SNR at which the MMSE is known (which corresponds to the disturbance constraint). This bound complements the "single-crossing point" property of the MMSE, which upper-bounds the MMSE for all SNR values above a certain value at which the MMSE is known. The MMSE upper bound provides a refined characterization of the phase-transition phenomenon which manifests, in the limit as the length of the vector input goes to infinity, as a discontinuity of the MMSE for the problem at hand. For vector inputs of size $n=1$, a matching lower bound, to within an additive gap of order $O\left(\log\log\frac{1}{\mathsf{MMSE}}\right)$ (where $\mathsf{MMSE}$ is the disturbance constraint), is shown by means of the mixed inputs technique recently introduced by Dytso et al.
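To make the complementarity concrete (an editor's note with assumed notation, not text from the paper): the single-crossing-point property yields an upper bound above the anchor SNR, while the bound of this paper works below it. If the disturbance constraint fixes $\mathrm{mmse}(\mathsf{snr}_0) = \beta/(1+\beta\,\mathsf{snr}_0)$ for some $\beta \ge 0$, then

\mathrm{mmse}(\mathsf{snr}) \le \frac{\beta}{1+\beta\,\mathsf{snr}} \qquad \text{for all } \mathsf{snr} \ge \mathsf{snr}_0,

i.e., once the arbitrary-input MMSE crosses below the matched Gaussian-input MMSE curve, it stays below it.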
We examine codes over the additive Gaussian noise channel designed for reliable communication at some specific signal-to-noise ratio (SNR) and constrained by the permitted minimum mean-square error (MMSE) at lower SNRs. The maximum possible rate is below the point-to-point capacity, and hence these are non-optimal codes (alternatively referred to as bad codes). We show that the maximum possible rate is the one attained by superposition codebooks. Moreover, the MMSE and mutual information behavior as a function of SNR, for any code attaining the maximum rate under the MMSE constraint, is known for all SNR. We also provide a lower bound on the MMSE of finite-length codes, as a function of the error probability of the code.
The paper establishes the equality condition in the I-MMSE proof of the entropy power inequality (EPI). This is done by deriving an exact expression for the deficit between the two sides of the EPI. Interestingly, a necessary condition for equality is established by making a connection to the famous Cauchy functional equation.
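For reference (standard statements, not quoted from the paper), Shannon's EPI for independent random vectors $X, Y$ in $\mathbb{R}^n$ and the Cauchy functional equation read, in LaTeX,

e^{\frac{2}{n} h(X+Y)} \;\ge\; e^{\frac{2}{n} h(X)} + e^{\frac{2}{n} h(Y)}, \qquad f(x+y) = f(x) + f(y),

where the equality analysis for the EPI is what the abstract ties to the solutions of the Cauchy equation (whose measurable solutions are linear).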