
Properties of the Support of the Capacity-Achieving Distribution of the Amplitude-Constrained Poisson Noise Channel

Posted by Alex Dytso
Publication date: 2021
Research field: Information engineering
Paper language: English





This work considers a Poisson noise channel with an amplitude constraint. It is well-known that the capacity-achieving input distribution for this channel is discrete with finitely many points. We sharpen this result by introducing upper and lower bounds on the number of mass points. Concretely, an upper bound of order $\mathsf{A}\log^2(\mathsf{A})$ and a lower bound of order $\sqrt{\mathsf{A}}$ are established, where $\mathsf{A}$ is the constraint on the input amplitude. In addition, along the way, we show several other properties of the capacity and capacity-achieving distribution. For example, it is shown that the capacity is equal to $-\log P_{Y^\star}(0)$ where $P_{Y^\star}$ is the optimal output distribution. Moreover, an upper bound on the values of the probability masses of the capacity-achieving distribution and a lower bound on the probability of the largest mass point are established. Furthermore, on a per-symbol basis, a nonvanishing lower bound on the probability of error for detecting the capacity-achieving distribution is established under the maximum a posteriori rule.
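As a rough numerical illustration of the setting (not taken from the paper), the sketch below discretizes the input of an amplitude-constrained Poisson channel and runs the Blahut-Arimoto algorithm; the grid, the output truncation, the value $\mathsf{A}=5$, and the absence of dark current are all assumptions made here. At the (approximate) optimizer the capacity estimate should be close to $-\log P_{Y^\star}(0)$, in line with the result quoted above, and the mass should concentrate on a few grid points.

```python
# A minimal sketch, not the paper's method: Blahut-Arimoto on a discretized
# amplitude-constrained Poisson channel Y ~ Poisson(X), X in [0, A].
# The grid, output truncation, A = 5, and zero dark current are assumptions.
import numpy as np
from scipy.stats import poisson

A = 5.0                                    # assumed amplitude constraint
x = np.linspace(0.0, A, 201)               # candidate input mass-point grid
y = np.arange(0, 60)                       # truncated output alphabet
W = poisson.pmf(y[None, :], x[:, None])    # channel law W(y | x)
W /= W.sum(axis=1, keepdims=True)          # renormalize after truncating Y

def kl_rows(W, q):
    """Row-wise relative entropy D(W(.|x) || q) in nats, treating 0*log(0) as 0."""
    ratio = np.where(W > 0, W / q[None, :], 1.0)
    return np.sum(np.where(W > 0, W * np.log(ratio), 0.0), axis=1)

p = np.full(len(x), 1.0 / len(x))          # start from the uniform distribution
for _ in range(2000):                      # Blahut-Arimoto iterations
    q = p @ W                              # induced output distribution P_Y
    p = p * np.exp(kl_rows(W, q))          # multiplicative update
    p /= p.sum()

q = p @ W
print("capacity estimate (nats):", float(p @ kl_rows(W, q)))
print("-log P_Y*(0):            ", -float(np.log(q[0])))      # close at the optimum
print("grid points with mass > 1e-3:", int(np.sum(p > 1e-3)))
```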


Read also

This paper studies an $n$-dimensional additive Gaussian noise channel with a peak-power-constrained input. It is well known that, in this case, when $n=1$ the capacity-achieving input distribution is discrete with finitely many mass points, and when $n>1$ the capacity-achieving input distribution is supported on finitely many concentric shells. However, due to the previous proof technique, neither the exact number of mass points/shells of the optimal input distribution nor a bound on it was available. This paper provides an alternative proof of the finiteness of the number of mass points/shells of the capacity-achieving input distribution and produces the first firm bounds on the number of mass points and shells, paving an alternative way for approaching many such problems. Roughly, the paper consists of three parts. The first part considers the case of $n=1$. The first result, in this part, shows that the number of mass points in the capacity-achieving input distribution is within a factor of two of the number of zeros of the downward shifted capacity-achieving output probability density function (pdf). The second result, by showing a bound on the number of zeros of the downward shifted capacity-achieving output pdf, provides a first firm upper bound on the number of mass points. Specifically, it is shown that the number of mass points is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. The second part generalizes the results of the first part to the case of $n>1$. In particular, for every dimension $n>1$, it is shown that the number of shells is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. Finally, the third part provides bounds on the number of points for the case of $n=1$ with an additional power constraint.
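The discreteness phenomenon that this abstract bounds can also be seen numerically. The following hedged sketch (not the paper's proof technique) runs Blahut-Arimoto on a finely discretized amplitude-constrained scalar AWGN channel; the unit noise variance, the grids, and $\mathsf{A}=3$ are assumptions made here, and the point is only that the optimizer concentrates on a handful of grid points rather than the whole interval.

```python
# A hedged sketch: Blahut-Arimoto on a discretized amplitude-constrained scalar
# AWGN channel Y = X + N, N ~ N(0, 1), |X| <= A. Noise variance, grids, and
# A = 3 are assumptions; the optimizer typically puts mass on only a few points.
import numpy as np
from scipy.stats import norm

A = 3.0
x = np.linspace(-A, A, 241)                         # candidate input grid
y, dy = np.linspace(-A - 6.0, A + 6.0, 601, retstep=True)
W = norm.pdf(y[None, :] - x[:, None]) * dy          # discretized channel law
W /= W.sum(axis=1, keepdims=True)

p = np.full(len(x), 1.0 / len(x))
for _ in range(3000):                               # Blahut-Arimoto iterations
    q = p @ W
    p = p * np.exp(np.sum(W * np.log(W / q[None, :]), axis=1))
    p /= p.sum()

q = p @ W
print("capacity estimate (nats):", float(np.sum(p * np.sum(W * np.log(W / q[None, :]), axis=1))))
print("grid points with mass > 1e-3:", int(np.sum(p > 1e-3)))   # a handful, not the full grid
```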
In this work, novel upper and lower bounds for the capacity of channels with arbitrary constraints on the support of the channel input symbols are derived. As an immediate practical application, the case of multiple-input multiple-output channels with amplitude constraints is considered. The bounds are shown to be within a constant gap if the channel matrix is invertible and are tight in the high amplitude regime for arbitrary channel matrices. Moreover, in the high amplitude regime, it is shown that the capacity scales linearly with the minimum between the number of transmit and receive antennas, similarly to the case of average power-constrained inputs.
The capacity-achieving input distribution of the discrete-time, additive white Gaussian noise (AWGN) channel with an amplitude constraint is discrete and seems difficult to characterize explicitly. A dual capacity expression is used to derive analytic capacity upper bounds for scalar and vector AWGN channels. The scalar bound improves on McKellips' bound and is within 0.1 bits of capacity for all signal-to-noise ratios (SNRs). The two-dimensional bound is within 0.15 bits of capacity provably up to 4.5 dB, and numerical evidence suggests a similar gap for all SNRs.
We propose a new coding scheme using only one lattice that achieves the $\frac{1}{2}\log(1+\mathrm{SNR})$ capacity of the additive white Gaussian noise (AWGN) channel with lattice decoding, when the signal-to-noise ratio $\mathrm{SNR}>e-1$. The scheme applies a discrete Gaussian distribution over an AWGN-good lattice, but otherwise does not require a shaping lattice or dither. Thus, it significantly simplifies the default lattice coding scheme of Erez and Zamir which involves a quantization-good lattice as well as an AWGN-good lattice. Using the flatness factor, we show that the error probability of the proposed scheme under minimum mean-square error (MMSE) lattice decoding is almost the same as that of Erez and Zamir, for any rate up to the AWGN channel capacity. We introduce the notion of good constellations, which carry almost the same mutual information as that of continuous Gaussian inputs. We also address the implementation of Gaussian shaping for the proposed lattice Gaussian coding scheme.
The channel law for amplitude-modulated solitons transmitted through a nonlinear optical fibre with ideal distributed amplification and a receiver based on the nonlinear Fourier transform is a noncentral chi-distribution with $2n$ degrees of freedom, where $n=2$ and $n=3$ correspond to the single- and dual-polarisation cases, respectively. In this paper, we study capacity lower bounds of this channel under an average power constraint in bits per channel use. We develop an asymptotic semi-analytic approximation for a capacity lower bound for arbitrary $n$ and a Rayleigh input distribution. It is shown that this lower bound grows logarithmically with signal-to-noise ratio (SNR), independently of the value of $n$. Numerical results for other continuous input distributions are also provided. A half-Gaussian input distribution is shown to give larger rates than a Rayleigh input distribution for $n=1,2,3$. At an SNR of $25$ dB, the best lower bounds we developed are approximately $3.68$ bit per channel use. The practically relevant case of amplitude shift-keying (ASK) constellations is also numerically analysed. For the same SNR of $25$ dB, a $16$-ASK constellation yields a rate of approximately $3.45$ bit per channel use.
