
A Single-Letter Upper Bound to the Mismatch Capacity

Posted by Ehsan Asadi Kangarshahi
Publication date: 2020
Research field: Information engineering
Paper language: English





We derive a single-letter upper bound to the mismatched-decoding capacity for discrete memoryless channels. The bound is expressed as the mutual information of a transformation of the channel, such that a maximum-likelihood decoding error on the transformed channel implies a mismatched-decoding error in the original channel. In particular, a strong converse is shown to hold for this upper bound: if the rate exceeds it, the probability of error tends to 1 exponentially fast as the block length tends to infinity. We also show that the underlying optimization problem is a convex-concave problem and that an efficient iterative algorithm converges to the optimal solution. In addition, we show that, unlike achievable rates in the literature, the multi-letter version of the bound does not improve on the single-letter one. A number of examples are discussed throughout the paper.
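The bound and the paper's specific iterative algorithm are not reproduced on this page, but two ingredients named in the abstract, the mutual information of a discrete memoryless channel and alternating optimization of the underlying objective, are standard. The sketch below is an illustrative stand-in only, not the paper's method: it evaluates I(X;Y) for a given channel matrix W and runs the classical Blahut-Arimoto alternating maximization; the channel matrix and the binary-symmetric-channel example are assumptions for demonstration.

```python
import numpy as np

def mutual_information(p_x, W):
    """I(X;Y) in nats for input distribution p_x and channel matrix W[x, y] = P(y|x)."""
    p_xy = p_x[:, None] * W                      # joint distribution
    p_y = p_xy.sum(axis=0)                       # output marginal
    mask = p_xy > 0
    ratio = p_xy[mask] / (p_x[:, None] * p_y[None, :])[mask]
    return float(np.sum(p_xy[mask] * np.log(ratio)))

def blahut_arimoto(W, iters=200):
    """Classical alternating maximization of I(X;Y) over the input distribution."""
    n_x, _ = W.shape
    p_x = np.full(n_x, 1.0 / n_x)
    for _ in range(iters):
        p_y = p_x @ W
        # D(W(.|x) || p_y) for each input x, then a multiplicative update of p_x
        kl = np.sum(W * np.log(np.where(W > 0, W / np.maximum(p_y, 1e-300), 1.0)), axis=1)
        p_x = p_x * np.exp(kl)
        p_x /= p_x.sum()
    return p_x, mutual_information(p_x, W)

# Example: binary symmetric channel with crossover probability 0.1 (assumed for illustration)
W = np.array([[0.9, 0.1], [0.1, 0.9]])
p_star, C_nats = blahut_arimoto(W)
print(p_star, C_nats / np.log(2), "bits")        # ~[0.5, 0.5], ~0.531 bits
```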




Read also

The problem of network function computation over a directed acyclic network is investigated in this paper. In such a network, a sink node desires to compute with zero error a target function, of which the inputs are generated at multiple source nodes. The edges in the network are assumed to be error-free and to have limited capacity. The nodes in the network are assumed to have unbounded computing capability and to be able to perform network coding. The computing rate of a network code that can compute the target function over the network is the average number of times that the target function is computed with zero error per use of the network. In this paper, we obtain an improved upper bound on the computing capacity, which is applicable to arbitrary target functions and arbitrary network topologies. This improved upper bound not only enhances the previous upper bounds but also is the first tight upper bound on the computing capacity for computing an arithmetic sum over a certain non-tree network, which has been widely studied in the literature. We also introduce a multi-dimensional array approach that facilitates evaluation of the improved upper bound. Furthermore, we apply this bound to the problem of computing a vector-linear function over a network. With this bound, we are able not only to enhance a previous result on computing a vector-linear function over a network but also to simplify the proof significantly. Finally, we prove that for computing the binary maximum function over the reverse butterfly network, our improved upper bound is not achievable. This result establishes that in general our improved upper bound is not achievable, but whether it is asymptotically achievable remains open.
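A standard formalization of the computing rate and computing capacity described above is given below; the notation is assumed here for illustration and is not taken from the paper.

```latex
% A (k, n) network code computes the target function f over the network N
% with zero error k times in n uses of the network (standard definitions;
% notation assumed here, not taken from the paper).
\[
  R(k, n) \;=\; \frac{k}{n},
  \qquad
  \mathcal{C}(N, f) \;=\; \sup\left\{ \frac{k}{n} \;:\; \text{some } (k, n) \text{ code computes } f \text{ over } N \text{ with zero error} \right\}.
\]
```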
A correlated phase-and-additive-noise (CPAN) mismatched model is developed for wavelength division multiplexing over optical fiber channels governed by the nonlinear Schrodinger equation. Both the phase and additive noise processes of the CPAN model are Gauss-Markov whereas previous work uses Wiener phase noise and white additive noise. Second order statistics are derived and lower bounds on the capacity are computed by simulations. The CPAN model characterizes nonlinearities better than existing models in the sense that it achieves better information rates. For example, the model gains 0.35 dB in power at the peak data rate when using a single carrier per wavelength. For multiple carriers per wavelength, the model combined with frequency-dependent power allocation gains 0.14 bits/s/Hz in rate and 0.8 dB in power at the peak data rate.
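The modeling change highlighted above is the replacement of Wiener (random-walk) processes by Gauss-Markov ones. The sketch below only contrasts the two phase-noise processes numerically; the AR(1) coefficient and noise scales are illustrative assumptions, not fitted fiber parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, sigma = 10_000, 0.98, 0.05          # assumed illustrative parameters

# Wiener phase noise: independent increments, variance grows linearly with time.
wiener = np.cumsum(sigma * rng.standard_normal(n))

# Gauss-Markov (AR(1)) phase noise: correlated but mean-reverting, hence stationary.
gm = np.zeros(n)
for k in range(1, n):
    gm[k] = alpha * gm[k - 1] + sigma * rng.standard_normal()

print("Wiener phase std over the block:", wiener.std())
print("Gauss-Markov phase std:         ", gm.std())   # ~ sigma / sqrt(1 - alpha**2)
```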
This paper studies an $n$-dimensional additive Gaussian noise channel with a peak-power-constrained input. It is well known that, in this case, when $n=1$ the capacity-achieving input distribution is discrete with finitely many mass points, and when $n>1$ the capacity-achieving input distribution is supported on finitely many concentric shells. However, due to the previous proof technique, neither the exact number of mass points/shells of the optimal input distribution nor a bound on it was available. This paper provides an alternative proof of the finiteness of the number of mass points/shells of the capacity-achieving input distribution and produces the first firm bounds on the number of mass points and shells, paving an alternative way for approaching many such problems. Roughly, the paper consists of three parts. The first part considers the case of $n=1$. The first result, in this part, shows that the number of mass points in the capacity-achieving input distribution is within a factor of two of the number of zeros of the downward shifted capacity-achieving output probability density function (pdf). The second result, by showing a bound on the number of zeros of the downward shifted capacity-achieving output pdf, provides a first firm upper bound on the number of mass points. Specifically, it is shown that the number of mass points is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. The second part generalizes the results of the first part to the case of $n>1$. In particular, for every dimension $n>1$, it is shown that the number of shells is given by $O(\mathsf{A}^2)$ where $\mathsf{A}$ is the constraint on the input amplitude. Finally, the third part provides bounds on the number of mass points for the case of $n=1$ with an additional power constraint.
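For reference, the peak-power (amplitude) constrained Gaussian channel studied above admits the standard formulation sketched below; the notation is an assumption for illustration, not copied from the paper.

```latex
% Amplitude-constrained n-dimensional additive Gaussian noise channel
% (standard formulation; notation assumed here).
\[
  C_n(\mathsf{A}) \;=\; \max_{F_X \,:\, \|X\| \le \mathsf{A}\ \text{a.s.}} I(X;\, X + Z),
  \qquad Z \sim \mathcal{N}(0, \mathrm{I}_n),
\]
% The cited result bounds the number of mass points (n = 1) or concentric
% shells (n > 1) of the maximizing F_X by O(\mathsf{A}^2).
```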
Regular perturbation is applied to the Manakov equation and motivates a generalized correlated phase-and-additive-noise model for wavelength-division multiplexing over dual-polarization optical fiber channels. The model includes three hidden Gauss-Markov processes: phase noise, polarization rotation, and additive noise. Particle filtering is used to compute lower bounds on the capacity of multi-carrier communication with frequency-dependent powers and delays. A gain of 0.17 bits/s/Hz/pol in spectral efficiency or 0.8 dB in power efficiency is achieved with respect to existing models at their peak data rate. Frequency-dependent delays also increase the spectral efficiency of single-polarization channels.
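Particle filtering over hidden Gauss-Markov states is a generic tool; the sketch below is a minimal bootstrap particle filter tracking a single hidden phase process from noisy complex observations, not the dual-polarization receiver of the paper. All parameters and the scalar observation model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, P = 200, 500                        # time steps and particle count (illustrative)
alpha, sig_w, sig_v = 0.95, 0.1, 0.2   # assumed AR(1) and observation-noise parameters

# Hidden Gauss-Markov phase and noisy observations of a unit-power carrier.
theta = np.zeros(T)
for t in range(1, T):
    theta[t] = alpha * theta[t - 1] + sig_w * rng.standard_normal()
y = np.exp(1j * theta) + sig_v * (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)

# Bootstrap particle filter: propagate through the AR(1) prior, weight by the
# complex-Gaussian likelihood, estimate, then resample. Phase wrapping is ignored
# because the phases stay small in this toy setting.
particles = np.zeros(P)
est = np.zeros(T)
for t in range(T):
    particles = alpha * particles + sig_w * rng.standard_normal(P)
    w = np.exp(-np.abs(y[t] - np.exp(1j * particles)) ** 2 / sig_v ** 2)
    w += 1e-300                                   # guard against numerical underflow
    w /= w.sum()
    est[t] = np.sum(w * particles)
    particles = particles[rng.choice(P, size=P, p=w)]   # multinomial resampling

print("RMS phase-tracking error:", np.sqrt(np.mean((est - theta) ** 2)))
```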
Tal Philosof, Ram Zamir (2008)
For general memoryless systems, the typical information-theoretic solution, when it exists, has a single-letter form. This reflects the fact that optimum performance can be approached by a random code (or a random binning scheme), generated using independent and identically distributed copies of some single-letter distribution. Is that the form of the solution of any (information-theoretic) problem? In fact, some counterexamples are known. The most famous is the two-help-one problem: Korner and Marton showed that if we want to decode the modulo-two sum of two binary sources from their independent encodings, then linear coding is better than random coding. In this paper we provide another counterexample, the doubly-dirty multiple access channel (MAC). Like the Korner-Marton problem, this is a multi-terminal scenario where side information is distributed among several terminals; each transmitter knows part of the channel interference but the receiver is not aware of any part of it. We give an explicit solution for the capacity region of a binary version of the doubly-dirty MAC, demonstrate how the capacity region can be approached using a linear coding scheme, and prove that the best known single-letter region is strictly contained in it. We also state a conjecture regarding a similar rate loss of single-letter characterization in the Gaussian case.
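The Korner-Marton result cited above rests on a simple identity over GF(2): if both encoders apply the same parity-check matrix to their own source, the decoder can add the two descriptions modulo two and obtain the syndrome of the modulo-two sum, reducing the task to single-source syndrome decoding of a (typically sparse) sequence. The toy sketch below checks only this identity; the matrix and parameters are assumptions, and the actual recovery of the sum, which requires a good linear code, is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 12, 8                              # toy block length and syndrome length (assumed)
H = rng.integers(0, 2, size=(m, n))       # an arbitrary binary "parity-check" matrix

x = rng.integers(0, 2, size=n)            # source 1
y = (x + (rng.random(n) < 0.1)) % 2       # source 2: x flipped in a few positions
z = (x + y) % 2                           # the modulo-two sum the decoder wants

# Both encoders send the SAME linear function of their own source.
sx, sy = (H @ x) % 2, (H @ y) % 2

# Linearity over GF(2): the modulo-two sum of the two descriptions equals the
# syndrome of z, so the decoder faces a single-source syndrome-decoding problem.
assert np.array_equal((sx + sy) % 2, (H @ z) % 2)
print("Syndrome of the modulo-two sum:", (sx + sy) % 2)
```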