
Achieving AWGN Channel Capacity With Lattice Gaussian Coding

Posted by Cong Ling
Publication date: 2013
Research field: Informatics engineering
Paper language: English





We propose a new coding scheme using only one lattice that achieves the $\frac{1}{2}\log(1+\mathsf{SNR})$ capacity of the additive white Gaussian noise (AWGN) channel with lattice decoding, when the signal-to-noise ratio $\mathsf{SNR}>e-1$. The scheme applies a discrete Gaussian distribution over an AWGN-good lattice, but otherwise does not require a shaping lattice or dither. Thus, it significantly simplifies the default lattice coding scheme of Erez and Zamir which involves a quantization-good lattice as well as an AWGN-good lattice. Using the flatness factor, we show that the error probability of the proposed scheme under minimum mean-square error (MMSE) lattice decoding is almost the same as that of Erez and Zamir, for any rate up to the AWGN channel capacity. We introduce the notion of good constellations, which carry almost the same mutual information as that of continuous Gaussian inputs. We also address the implementation of Gaussian shaping for the proposed lattice Gaussian coding scheme.
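As a rough illustration of the encode / scale / decode pipeline described above, the sketch below draws a codeword from a discrete Gaussian over $\mathbb{Z}^n$ (used only as a stand-in for an AWGN-good lattice, so no coding gain should be expected), transmits it over an AWGN channel, and applies MMSE scaling followed by lattice decoding. All names and parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_discrete_gaussian_z(sigma, size, tail=12):
    """Sample i.i.d. from the discrete Gaussian D_{Z,sigma} by normalising
    the Gaussian weights over a truncated integer support."""
    radius = int(np.ceil(tail * sigma))
    support = np.arange(-radius, radius + 1)
    p = np.exp(-support.astype(float) ** 2 / (2.0 * sigma ** 2))
    p /= p.sum()
    return rng.choice(support, size=size, p=p).astype(float)

n = 100_000                    # block length; Z^n stands in for an AWGN-good lattice
snr = 10.0                     # the scheme's guarantee needs SNR > e - 1
sigma_noise = 1.0
sigma_s = np.sqrt(snr) * sigma_noise   # per-dimension signal std dev, power ~ SNR

# Encoder: codeword drawn from a discrete Gaussian over the lattice,
# with no shaping lattice and no dither.
x = sample_discrete_gaussian_z(sigma_s, n)

# AWGN channel.
y = x + rng.normal(0.0, sigma_noise, n)

# MMSE scaling followed by lattice decoding (nearest point of Z^n = rounding).
alpha = snr / (1.0 + snr)
x_hat = np.round(alpha * y)

# Z^n has no coding gain, so per-symbol errors are expected here; the point is
# only to show the shaping, scaling, and lattice-decoding steps.
print("per-symbol decoding error rate:", np.mean(x_hat != x))
print("target rate 0.5*log2(1+SNR)  :", 0.5 * np.log2(1.0 + snr))
```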


Read also

Jialing Liu, Nicola Elia, 2010
In this paper, we propose capacity-achieving communication schemes for Gaussian finite-state Markov channels (FSMCs) subject to an average channel input power constraint, under the assumption that the transmitters can have access to delayed noiseless output feedback as well as instantaneous or delayed channel state information (CSI). We show that the proposed schemes reveal connections between feedback communication and feedback control.
Behzad Asadi, Lawrence Ong, 2014
This paper investigates the capacity region of the three-receiver AWGN broadcast channel where the receivers (i) have private-message requests and (ii) may know some of the messages requested by other receivers as side information. We first classify all 64 possible side information configurations into eight groups, each consisting of eight members. We next construct transmission schemes, and derive new inner and outer bounds for the groups. This establishes the capacity region for 52 out of 64 possible side information configurations. For six groups (i.e., groups 1, 2, 3, 5, 6, and 8 in our terminology), we establish the capacity region for all their members, and show that it tightens both the best known inner and outer bounds. For group 4, our inner and outer bounds tighten the best known inner bound and/or outer bound for all the group members. Moreover, our bounds coincide at certain regions, which can be characterized by two thresholds. For group 7, our inner and outer bounds coincide for four members, thereby establishing the capacity region. For the remaining four members, our bounds tighten both the best known inner and outer bounds.
This paper investigates the capacity and capacity per unit cost of the Gaussian multiple-access channel (GMAC) with peak power constraints. We first devise an approach based on the Blahut-Arimoto algorithm to numerically optimize the sum rate and quantify the corresponding input distributions. The results reveal that, in the case of identical peak power constraints, the user with the higher SNR has a symmetric antipodal input distribution for all values of the noise variance. Next, we analytically derive and characterize an achievable rate region for the capacity in cases with small peak power constraints, which coincides with the capacity in a certain scenario. The capacity per unit cost is of interest in low power regimes and is a target performance measure in energy-efficient communications. In this work, we derive the capacity per unit cost of the additive white Gaussian noise channel and the GMAC with peak power constraints. The results for the GMAC demonstrate that the capacity per unit cost is obtained using antipodal signaling for both users and is independent of the users' rate ratio. We characterize the optimized transmission strategies obtained for capacity and capacity per unit cost with peak power constraints in detail, specifically in contrast to settings with average power constraints.
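The Blahut-Arimoto step mentioned above can be pictured on the simpler single-user problem. The sketch below is the textbook Blahut-Arimoto iteration applied to a discretized, peak-power-constrained scalar Gaussian channel (not the authors' GMAC formulation); the grid sizes, the unit noise variance, and the peak amplitude `A = 1.0` are illustrative assumptions. For small peak power the optimizing input mass tends to concentrate at $\pm A$, consistent with the antipodal signaling discussed in the abstract.

```python
import numpy as np

def blahut_arimoto(P_y_given_x, iters=2000):
    """Textbook Blahut-Arimoto iteration for a discrete memoryless channel.
    P_y_given_x: (nx, ny) row-stochastic matrix. Returns (capacity_bits, p_x)."""
    nx = P_y_given_x.shape[0]
    p_x = np.full(nx, 1.0 / nx)
    for _ in range(iters):
        q_y = p_x @ P_y_given_x                                       # induced output law
        d = np.sum(P_y_given_x * np.log(P_y_given_x / q_y), axis=1)   # D(P(.|x) || q)
        w = p_x * np.exp(d)
        p_x = w / w.sum()
    q_y = p_x @ P_y_given_x
    d = np.sum(P_y_given_x * np.log(P_y_given_x / q_y), axis=1)
    return float(p_x @ d) / np.log(2.0), p_x

# Discretized scalar Gaussian channel with unit noise variance and |X| <= A.
A = 1.0
x_grid = np.linspace(-A, A, 41)
y_grid = np.linspace(-A - 6.0, A + 6.0, 481)
dy = y_grid[1] - y_grid[0]
P = np.exp(-0.5 * (y_grid[None, :] - x_grid[:, None]) ** 2) * dy / np.sqrt(2.0 * np.pi)
P /= P.sum(axis=1, keepdims=True)      # renormalise the truncated rows

C, p_x = blahut_arimoto(P)
print("capacity estimate (bits/use):", round(C, 4))
print("inputs carrying mass > 1e-3 :", x_grid[p_x > 1e-3])
```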
This paper studies an $n$-dimensional additive Gaussian noise channel with a peak-power-constrained input. It is well known that, in this case, when $n=1$ the capacity-achieving input distribution is discrete with finitely many mass points, and when $n>1$ the capacity-achieving input distribution is supported on finitely many concentric shells. However, due to the previous proof technique, neither the exact number of mass points/shells of the optimal input distribution nor a bound on it was available. This paper provides an alternative proof of the finiteness of the number of mass points/shells of the capacity-achieving input distribution and produces the first firm bounds on the number of mass points and shells, paving an alternative way for approaching many such problems. Roughly, the paper consists of three parts. The first part considers the case of $n=1$. The first result, in this part, shows that the number of mass points in the capacity-achieving input distribution is within a factor of two of the number of zeros of the downward shifted capacity-achieving output probability density function (pdf). The second result, by showing a bound on the number of zeros of the downward shifted capacity-achieving output pdf, provides a first firm upper bound on the number of mass points. Specifically, it is shown that the number of mass points is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. The second part generalizes the results of the first part to the case of $n>1$. In particular, for every dimension $n>1$, it is shown that the number of shells is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. Finally, the third part provides bounds on the number of points for the case of $n=1$ with an additional power constraint.
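For concreteness, the scalar ($n=1$) problem described above can be written as the following optimization over amplitude-constrained input laws (unit noise variance is assumed here for brevity); the quoted result then bounds the support size of the maximizer, with $N(\mathsf{A})$ used below as our own shorthand for the number of mass points:

```latex
% Amplitude-constrained scalar Gaussian channel; Z ~ N(0,1) is an assumption of this sketch.
\[
  C(\mathsf{A}) \;=\; \max_{F_X \,:\, \mathrm{supp}(F_X) \subseteq [-\mathsf{A},\,\mathsf{A}]} I(X;\, X+Z),
  \qquad Z \sim \mathcal{N}(0,1).
\]
% The maximizing F_X is discrete, and the paper bounds its number of mass points:
\[
  N(\mathsf{A}) \;=\; O\!\bigl(\mathsf{A}^{2}\bigr).
\]
```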
Tobias Koch, 2014
This paper studies the capacity of the peak-and-average-power-limited Gaussian channel when its output is quantized using a dithered, infinite-level, uniform quantizer of step size $\Delta$. It is shown that the capacity of this channel tends to that of the unquantized Gaussian channel when $\Delta$ tends to zero, and it tends to zero when $\Delta$ tends to infinity. In the low signal-to-noise ratio (SNR) regime, it is shown that, when the peak-power constraint is absent, the low-SNR asymptotic capacity is equal to that of the unquantized channel irrespective of $\Delta$. Furthermore, an expression for the low-SNR asymptotic capacity for finite peak-to-average-power ratios is given and evaluated in the low- and high-resolution limit. It is demonstrated that, in this case, the low-SNR asymptotic capacity converges to that of the unquantized channel when $\Delta$ tends to zero, and it tends to zero when $\Delta$ tends to infinity. Comparing these results with achievability results for (undithered) 1-bit quantization, it is observed that the dither reduces capacity in the low-precision limit, and it reduces the low-SNR asymptotic capacity unless the peak-to-average-power ratio is unbounded.
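As a small illustration of the channel model studied here, the sketch below passes the output of a Gaussian channel through an infinite-level uniform quantizer of step $\Delta$ with subtractive dither. The subtractive-dither form and all parameter values are assumptions of the sketch, chosen because they make a standard property easy to check numerically: the effective quantization error comes out uniform on $[-\Delta/2, \Delta/2]$ and uncorrelated with the channel output.

```python
import numpy as np

rng = np.random.default_rng(1)

def dithered_uniform_quantizer(y, delta, dither):
    """Infinite-level uniform quantizer of step delta with additive dither;
    the receiver subtracts the (known) dither again (subtractive dithering)."""
    return delta * np.round((y + dither) / delta) - dither

n = 200_000
snr, delta = 4.0, 0.5
x = rng.normal(0.0, np.sqrt(snr), n)          # Gaussian channel input
y = x + rng.normal(0.0, 1.0, n)               # unit-variance AWGN
u = rng.uniform(-delta / 2, delta / 2, n)     # dither, known to the receiver

r = dithered_uniform_quantizer(y, delta, u)
e = r - y                                     # effective quantization error

# With subtractive dither, the error is uniform on [-delta/2, delta/2]
# and statistically independent of the channel output.
print("error mean / var:", e.mean().round(4), e.var().round(4), "(delta^2/12 =", delta ** 2 / 12, ")")
print("corr(error, y):  ", np.corrcoef(e, y)[0, 1].round(4))
```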