In \cite{butman1976} the linear coding scheme $X_t = g_t\Big(\Theta - {\bf E}\Big\{\Theta \Big| Y^{t-1}, V_0=v_0\Big\}\Big)$, $t=2,\ldots,n$, $X_1=g_1\Theta$, with $\Theta: \Omega \to {\mathbb R}$ a Gaussian random variable, is applied to derive a lower bound on the feedback rate of additive Gaussian noise (AGN) channels, $Y_t=X_t+V_t$, $t=1,\ldots,n$, where $V_t$ is Gaussian autoregressive (AR) noise and $\kappa \in [0,\infty)$ is the total transmitter power. For unit memory AR noise with parameters $(c, K_W)$, where $c\in [-1,1]$ is the pole and $K_W$ is the variance of the Gaussian noise, the lower bound is $C^{L,B} = \frac{1}{2} \log \chi^2$, where $\chi = \lim_{n\longrightarrow\infty} \chi_n$ is the positive root of $\chi^2 = 1 + \Big(1 + \frac{|c|}{\chi}\Big)^2 \frac{\kappa}{K_W}$ and the sequence $\chi_n \triangleq \Big|\frac{g_n}{g_{n-1}}\Big|$, $n=2,3,\ldots$, satisfies a certain recursion; it is conjectured in \cite{butman1976} that $C^{L,B}$ is the feedback capacity. In this correspondence, it is observed that a nontrivial lower bound $C^{L,B} = \frac{1}{2}\log\chi^2$ with $\chi > 1$ necessarily implies that the scaling coefficients of the feedback code, $g_n$, $n=1,2,\ldots$, grow unbounded, in the sense that $\lim_{n\longrightarrow\infty}|g_n| = +\infty$. The unbounded behaviour of $g_n$ follows from the ratio limit theorem for sequences of real numbers and is verified by simulations. It is then concluded that such linear codes are not practical and are fragile with respect to a mismatch between the statistics of the mathematical model of the channel and the true statistics of the channel. In particular, if the error is perturbed by $\epsilon_n > 0$, no matter how small, then $X_n = g_n\Big(\Theta - {\bf E}\Big\{\Theta \Big| Y^{n-1}, V_0=v_0\Big\}\Big) + g_n\epsilon_n$, and $|g_n|\epsilon_n \longrightarrow \infty$ as $n \longrightarrow \infty$.
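The divergence mechanism can be checked numerically. The sketch below is a minimal illustration with assumed parameter values ($c=0.5$, $\kappa = K_W = 1$, all hypothetical): it solves the fixed-point equation $\chi^2 = 1 + (1+|c|/\chi)^2 \kappa/K_W$ by iteration and then models the limiting growth $|g_n| \approx \chi\,|g_{n-1}|$ implied by the ratio limit theorem; the exact recursion for $\chi_n$ from \cite{butman1976} is not reproduced here.

```python
import math

# Illustrative parameters (assumed, not from the paper): unit-memory AR noise.
c, kappa, K_W = 0.5, 1.0, 1.0

# Fixed-point iteration for the positive root of
#   chi^2 = 1 + (1 + |c|/chi)^2 * kappa / K_W.
chi = 1.0
for _ in range(200):
    chi = math.sqrt(1.0 + (1.0 + abs(c) / chi) ** 2 * kappa / K_W)

# Lower bound C^(L,B) = (1/2) log chi^2 (natural log assumed, i.e. nats).
print(f"chi = {chi:.6f}, C^(L,B) = {0.5 * math.log(chi**2):.6f} nats")

# Ratio limit theorem: if |g_n / g_{n-1}| -> chi > 1, then |g_n| -> infinity,
# so any persistent perturbation eps of the error is amplified without bound.
g, eps = 1.0, 1e-6
for n in range(2, 102):
    g *= chi                      # limiting model |g_n| ~ chi * |g_{n-1}|
    if n % 25 == 0:
        print(f"n = {n:3d}   |g_n| ~ {g:.3e}   |g_n|*eps ~ {g*eps:.3e}")
```

Any $\chi > 1$ returned by the fixed-point iteration makes the last column blow up, which is precisely the fragility the correspondence points out.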
A lower bound on the maximum likelihood (ML) decoding error exponent of linear block code ensembles over the erasure channel is developed. The lower bound turns out to be positive over an ensemble-specific interval of erasure probabilities whenever the ensemble weight spectral shape function tends to a negative value as the fractional codeword weight tends to zero. For these ensembles we can therefore lower bound the block-wise ML decoding threshold. Two examples are presented, namely, linear random parity-check codes and fixed-rate Raptor codes with linear random precoders. While for the former a full analytical solution is possible, for the latter we can lower bound the ML decoding threshold on the erasure channel by simply solving a $2 \times 2$ system of nonlinear equations.
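For the linear random parity-check ensemble, the average weight spectral shape takes the standard form $r(\omega) = h_2(\omega) - (1-R)$ (in bits), which tends to $-(1-R) < 0$ as $\omega \to 0$. A minimal sketch, with an assumed illustrative rate $R = 1/2$, checks the negativity condition on which the bound relies:

```python
import math

def h2(x):
    """Binary entropy function in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def spectral_shape(omega, R):
    # Standard average weight spectral shape of the linear random
    # parity-check ensemble: r(omega) = h2(omega) - (1 - R).
    return h2(omega) - (1.0 - R)

R = 0.5  # assumed code rate, for illustration only
for omega in (0.2, 0.1, 0.01, 0.001, 0.0):
    print(f"omega = {omega:6.3f}   r(omega) = {spectral_shape(omega, R):+.4f}")
# r(omega) -> -(1 - R) < 0 as omega -> 0, so this ensemble satisfies the
# condition under which the ML decoding threshold can be lower bounded.
```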
The zero-error feedback capacity of the Gelfand-Pinsker channel is established. It can be positive even if the channel's zero-error capacity is zero in the absence of feedback. Moreover, the error-free transmission of a single bit may require more than one channel use. These phenomena do not occur when the state is revealed to the transmitter causally, a case that is solved here using Shannon strategies. Cost constraints on the channel inputs or channel states are also discussed, as is the scenario where, in addition to the message, the state sequence must also be recovered.
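To make the causal-state reduction concrete, here is a minimal sketch of the classical Shannon-strategy construction: channel inputs become functions $t: \mathcal{S} \to \mathcal{X}$, and the channel with causally known state is replaced by an ordinary state-free channel over these strategy letters. The alphabets and the state-dependent channel below are toy assumptions for illustration, not taken from the paper.

```python
from itertools import product

# Toy alphabets (assumed): binary state, input, and output.
S, X, Y = (0, 1), (0, 1), (0, 1)
P_S = {0: 0.5, 1: 0.5}            # i.i.d. state distribution

# Hypothetical state-dependent channel P(y | x, s): the state flips the
# input before a noiseless channel (purely illustrative).
def P_Y_given_XS(y, x, s):
    return 1.0 if y == (x ^ s) else 0.0

# Shannon strategies: functions t: S -> X, one input letter per state value.
strategies = list(product(X, repeat=len(S)))   # t = (t(0), t(1))

# Induced state-free channel over strategy letters:
#   P(y | t) = sum_s P_S(s) * P(y | t(s), s)
for t in strategies:
    row = [sum(P_S[s] * P_Y_given_XS(y, t[s], s) for s in S) for y in Y]
    print(f"t = {t}   P(y|t) = {row}")
# With causal state knowledge, coding for the original channel reduces to
# coding for this ordinary DMC over strategy letters; in this toy example
# the strategy t(s) = s yields a deterministic output.
```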
This paper describes a new set of block source codes well suited for data compression. These codes are defined by sets of production rules of the form $a \cdot l \to b$, where $a \in A$ represents a value from the source alphabet $A$ and $l$, $b$ are small sequences of bits. These codes naturally encompass other Variable Length Codes (VLCs) such as Huffman codes. It is shown that these codes may have a similar or even a shorter mean description length (MDL) than Huffman codes for the same encoding and decoding complexity. A first code design method, which preserves the lexicographic order in the bit domain, is described. The corresponding codes have the same MDL as the Huffman codes from which they are constructed. Therefore, from a compression standpoint, they outperform the Hu-Tucker codes designed to offer the lexicographic property in the bit domain. A second construction method yields codes for which the marginal bit probability converges to 0.5 as the sequence length increases, and this is achieved even if the probability distribution function is not known to the encoder.
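As a concrete illustration of the claim that these codes encompass Huffman codes, the sketch below expresses a hypothetical Huffman code as production rules $a \cdot l \to b$ with empty $l$. The rewriting semantics in `encode` (a rule consumes symbol $a$ together with a suffix $l$ of the bits emitted so far and replaces them with $b$) is our reading of the rule form, assumed for illustration rather than taken from the paper.

```python
# Hypothetical Huffman code for a 4-symbol source (illustrative only).
huffman = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}

# Express it as production rules (a, l) -> b with l = '' (empty suffix):
# the degenerate case in which each symbol maps directly to its codeword.
rules = {(sym, ''): code for sym, code in huffman.items()}

def encode(symbols, rules):
    """Encode by applying, per symbol, the rule with the longest matching l."""
    out = ''
    for sym in symbols:
        for (a, l), b in sorted(rules.items(), key=lambda kv: -len(kv[0][1])):
            if a == sym and out.endswith(l):
                # Consume the matched suffix l and emit b in its place.
                out = out[:len(out) - len(l)] + b
                break
    return out

print(encode('abad', rules))  # '0100111' -- plain Huffman output here
```

With empty suffixes the encoder reduces to ordinary Huffman encoding; rules with nonempty $l$ allow the code to rewrite previously emitted bits, which is where the additional design freedom of these block codes comes from.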