
On the Fragile Rates of Linear Feedback Coding Schemes of Gaussian Channels with Memory

Posted by: Themistoklis Charalambous
Publication date: 2021
Research field: Information engineering
Paper language: English





In \cite{butman1976} the linear coding scheme $X_t = g_t\big(\Theta - \mathbf{E}\big\{\Theta \big| Y^{t-1}, V_0 = v_0\big\}\big)$, $t = 2, \ldots, n$, $X_1 = g_1 \Theta$, with $\Theta : \Omega \to \mathbb{R}$ a Gaussian random variable, is applied to derive a lower bound on the feedback rate of additive Gaussian noise (AGN) channels, $Y_t = X_t + V_t$, $t = 1, \ldots, n$, where $V_t$ is a Gaussian autoregressive (AR) noise and $\kappa \in [0, \infty)$ is the total transmitter power. For the unit-memory AR noise with parameters $(c, K_W)$, where $c \in [-1, 1]$ is the pole and $K_W$ is the variance of the Gaussian noise, the lower bound is $C^{L,B} = \frac{1}{2} \log \chi^2$, where $\chi = \lim_{n \to \infty} \chi_n$ is the positive root of $\chi^2 = 1 + \big(1 + \frac{|c|}{\chi}\big)^2 \frac{\kappa}{K_W}$, and the sequence $\chi_n \triangleq \big|\frac{g_n}{g_{n-1}}\big|$, $n = 2, 3, \ldots$, satisfies a certain recursion; it was conjectured that $C^{L,B}$ is the feedback capacity. In this correspondence, it is observed that a nontrivial lower bound $C^{L,B} = \frac{1}{2} \log \chi^2$ with $\chi > 1$ necessarily implies that the scaling coefficients of the feedback code, $g_n$, $n = 1, 2, \ldots$, grow unbounded, in the sense that $\lim_{n \to \infty} |g_n| = +\infty$. The unbounded behaviour of $g_n$ follows from the ratio limit theorem for sequences of real numbers, and it is verified by simulations. It is then concluded that such linear codes are not practical, and are fragile with respect to a mismatch between the statistics of the mathematical model of the channel and the real statistics of the channel. In particular, if the error is perturbed by $\epsilon_n > 0$, no matter how small, then $X_n = g_n\big(\Theta - \mathbf{E}\big\{\Theta \big| Y^{n-1}, V_0 = v_0\big\}\big) + g_n \epsilon_n$, and $|g_n| \epsilon_n \to \infty$ as $n \to \infty$.
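The fixed-point equation for $\chi$ and the resulting geometric growth of the scaling coefficients can be checked numerically. The sketch below is illustrative: the parameter values $(c, \kappa, K_W)$ are chosen arbitrarily (they are not taken from the paper), and the growth of $|g_n|$ is simulated from the asymptotic ratio $|g_n/g_{n-1}| \to \chi$ rather than from the paper's exact recursion for $\chi_n$, which is not reproduced in the abstract.

```python
import math

def chi_fixed_point(c, kappa, K_W, iters=200):
    """Positive root of chi^2 = 1 + (1 + |c|/chi)^2 * kappa/K_W,
    found by simple fixed-point iteration."""
    chi = 1.0
    for _ in range(iters):
        chi = math.sqrt(1.0 + (1.0 + abs(c) / chi) ** 2 * kappa / K_W)
    return chi

# Illustrative parameters (not from the paper): pole c, power kappa, noise variance K_W.
c, kappa, K_W = 0.5, 1.0, 1.0
chi = chi_fixed_point(c, kappa, K_W)
rate = 0.5 * math.log(chi ** 2)  # lower bound C^{L,B}, in nats

# Ratio-limit argument: if |g_n / g_{n-1}| -> chi > 1, then |g_n| diverges,
# so any persistent perturbation epsilon_n is amplified without bound.
g = 1.0
for n in range(50):
    g *= chi  # asymptotic growth of the scaling coefficients
print(chi, rate, g)
```

With any $\kappa > 0$ the root satisfies $\chi > 1$, so after only 50 steps the simulated $|g_n|$ is already astronomically large, which is the impracticality and fragility the correspondence points out.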


Read also

The error exponent of Markov channels with feedback is studied in the variable-length block-coding setting. Burnashev's classic result is extended and a single-letter characterization for the reliability function of finite-state Markov channels is presented, under the assumption that the channel state is causally observed both at the transmitter and at the receiver side. Tools from stochastic control theory are used in order to treat channels with intersymbol interference. In particular, the convex analytical approach to Markov decision processes is adopted to handle problems with stopping-time horizons arising from variable-length coding schemes.
A rateless code, i.e., a rate-compatible family of codes, has the property that codewords of the higher-rate codes are prefixes of those of the lower-rate ones. A perfect family of such codes is one in which each of the codes in the family is capacity-achieving. We show by construction that perfect rateless codes with low-complexity decoding algorithms exist for additive white Gaussian noise channels. Our construction involves the use of layered encoding and successive decoding, together with repetition using time-varying layer weights. As an illustration of our framework, we design a practical three-rate code family. We further construct rich sets of near-perfect rateless codes within our architecture that require either significantly fewer layers or lower complexity than their perfect counterparts. Variations of the basic construction are also developed, including one for time-varying channels in which there is no a priori stochastic model.
The two-receiver broadcast packet erasure channel with feedback and memory is studied. Memory is modeled using a finite-state Markov chain representing a channel state. Two scenarios are considered: (i) when the transmitter has causal knowledge of the channel state (i.e., the state is visible), and (ii) when the channel state is unknown at the transmitter, but observations of it are available at the transmitter through feedback (i.e., the state is hidden). In both scenarios, matching outer and inner bounds on the rates of communication are derived and the capacity region is determined. It is shown that similar results carry over to channels with memory and delayed feedback and memoryless compound channels with feedback. When the state is visible, the capacity region has a single-letter characterization and is in terms of a linear program. Two optimal coding schemes are devised that use feedback to keep track of the sent/received packets via a network of queues: a probabilistic scheme and a deterministic backpressure-like algorithm. The former bases its decisions solely on the past channel state information and the latter follows a max-weight queue-based policy. The performance of the algorithms is analyzed using the frameworks of rate stability in networks of queues, max-flow min-cut duality in networks, and finite-horizon Lyapunov drift analysis. When the state is hidden, the capacity region does not have a single-letter characterization and is, in this sense, uncomputable. Approximations of the capacity region are provided and two optimal coding algorithms are outlined. The first algorithm is a probabilistic coding scheme that bases its decisions on the past L acknowledgments, and its achievable rate region approaches the capacity region exponentially fast in L. The second algorithm is a backpressure-like algorithm that performs optimally in the long run.
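The max-weight queue-based policy mentioned above can be illustrated generically. The sketch below is not the paper's algorithm; it is a minimal toy instance of max-weight scheduling over two erasure links, with made-up queue sizes and erasure probabilities, where each slot serves the link with the largest expected backlog drain and feedback (an ACK) tells the transmitter whether the packet got through.

```python
import random

random.seed(0)

# Hypothetical backlogs (packets queued per receiver) and per-link erasure probabilities.
queues = {"rx1": 30, "rx2": 20}
erasure = {"rx1": 0.3, "rx2": 0.5}

def max_weight_pick(queues, erasure):
    """Serve the link with the largest expected drain: backlog * delivery probability."""
    return max(queues, key=lambda r: queues[r] * (1.0 - erasure[r]))

for slot in range(200):
    rx = max_weight_pick(queues, erasure)
    if queues[rx] > 0 and random.random() > erasure[rx]:
        queues[rx] -= 1  # delivery confirmed via feedback; dequeue the packet

print(queues)
```

The point of the weighting is stability: by always favoring the queue with the largest backlog-times-throughput product, the policy keeps all backlogs bounded whenever the arrival rates lie inside the achievable region, which is what the Lyapunov drift analysis cited in the abstract formalizes.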
The two-receiver broadcast packet erasure channel with feedback and memory is studied. Memory is modeled using a finite-state Markov chain representing a channel state. The channel state is unknown at the transmitter, but observations of this hidden Markov chain are available at the transmitter through feedback. Matching outer and inner bounds are derived and the capacity region is determined. The capacity region does not have a single-letter characterization and is, in this sense, uncomputable. Approximations of the capacity region are provided and two optimal coding algorithms are outlined. The first algorithm is a probabilistic coding scheme that bases its decisions on the past L feedback sequences. Its achievable rate-region approaches the capacity region exponentially fast in L. The second algorithm is a backpressure-like algorithm that performs optimally in the long run.
The feedback sum-rate capacity is established for the symmetric $J$-user Gaussian multiple-access channel (GMAC). The main contribution is a converse bound that combines the dependence-balance argument of Hekstra and Willems (1989) with a variant of the factorization of a convex envelope of Geng and Nair (2014). The converse bound matches the achievable sum-rate of the Fourier-Modulated Estimate Correction strategy of Kramer (2002).