The zero-error feedback capacity of the Gelfand-Pinsker channel is established. It can be positive even if the channel's zero-error capacity is zero in the absence of feedback. Moreover, the error-free transmission of a single bit may require more than one channel use. These phenomena do not occur when the state is revealed to the transmitter causally, a case that is solved here using Shannon strategies. Cost constraints on the channel inputs or channel states are also discussed, as is the scenario where, in addition to the message, the state sequence must also be recovered.
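The Shannon-strategy reduction used for the causal-state case can be sketched concretely: when the state is revealed causally to the transmitter, the channel is equivalent to an ordinary DMC whose inputs are maps from states to channel inputs, with the output law averaged over the state. A minimal sketch, assuming a hypothetical binary channel whose quality depends on the state (all numbers illustrative):

```python
import itertools
import numpy as np

# Hypothetical setup: state s in {0,1} with distribution P_S, known
# causally at the transmitter; W[s][x][y] is the state-dependent channel.
P_S = np.array([0.5, 0.5])
W = np.array([[[0.95, 0.05], [0.05, 0.95]],   # state 0: nearly clean BSC
              [[0.50, 0.50], [0.50, 0.50]]])  # state 1: useless channel

X = [0, 1]
S = range(len(P_S))

# A Shannon strategy is a map t: S -> X; the equivalent DMC has one
# input per strategy, with the output law averaged over the state.
strategies = list(itertools.product(X, repeat=len(P_S)))
W_equiv = np.array([
    sum(P_S[s] * W[s][t[s]] for s in S)
    for t in strategies
])
for t, row in zip(strategies, W_equiv):
    print(f"strategy {t}: output law {row}")
```

The capacity of the causal-state channel is then the ordinary capacity of this equivalent DMC over strategy inputs.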
It is known that for a discrete channel with correlated additive noise, the ordinary capacity, with or without feedback, equals $\log q - \mathcal{H}(Z)$, where $\mathcal{H}(Z)$ is the entropy rate of the noise process $Z$ and $q$ is the alphabet size. In this paper, a class of finite-state additive noise channels is introduced. It is shown that the zero-error feedback capacity of such channels is either zero or $C_{0f} = \log q - h(Z)$, where $h(Z)$ is the \emph{topological entropy} of the noise process. A topological condition is given for when the zero-error capacity is zero, with or without feedback. Moreover, the zero-error capacity without feedback is lower-bounded by $\log q - 2h(Z)$. We explicitly compute the zero-error feedback capacity for several examples, including channels with isolated errors and a Gilbert-Elliott channel.
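The formula $C_{0f} = \log q - h(Z)$ can be illustrated numerically: for a noise process given by a shift of finite type, the topological entropy $h(Z)$ is the log of the spectral radius (Perron eigenvalue) of the state-transition adjacency matrix. A minimal sketch, assuming a hypothetical two-state noise graph in which errors are always isolated:

```python
import numpy as np

def topological_entropy(adjacency: np.ndarray) -> float:
    """Topological entropy h(Z) of a shift of finite type: the log of
    the spectral radius of the state-transition adjacency matrix."""
    eigenvalues = np.linalg.eigvals(adjacency)
    return float(np.log2(np.max(np.abs(eigenvalues))))

# Hypothetical 2-state noise graph: from state 0 the noise may stay
# quiet (back to 0) or emit an error (to 1); from state 1 it must
# return to 0, so errors are always isolated.
A = np.array([[1, 1],
              [1, 0]])

q = 2                                 # channel alphabet size
h_Z = topological_entropy(A)          # log2 of the golden ratio
C0f = np.log2(q) - h_Z                # zero-error feedback capacity
print(f"h(Z) = {h_Z:.4f} bits, C0f = {C0f:.4f} bits/use")
```

For this isolated-errors graph the spectral radius is the golden ratio, so $h(Z) = \log_2 \varphi \approx 0.694$ and $C_{0f} \approx 0.306$ bits per use.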
In this paper, we propose capacity-achieving communication schemes for Gaussian finite-state Markov channels (FSMCs) subject to an average channel input power constraint, under the assumption that the transmitter has access to delayed noiseless output feedback as well as instantaneous or delayed channel state information (CSI). We show that the proposed schemes reveal connections between feedback communication and feedback control.
We consider the problem of characterizing network capacity in the presence of adversarial errors on network links, focusing in particular on the effect of low-capacity feedback links across network cuts.
The utility of limited feedback for coding over an individual sequence of DMCs is investigated. This study complements recent results showing how limited or noisy feedback can boost the reliability of communication. A strategy with fixed input distribution $P$ is given that asymptotically achieves rates arbitrarily close to the mutual information induced by $P$ on the state-averaged channel. When the capacity-achieving input distribution is the same over all channel states, this achieves rates at least as large as the capacity of the state-averaged channel, sometimes called the empirical capacity.
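The empirical-capacity notion can be illustrated with a small computation: average the state-dependent channel matrices by the empirical frequencies of the states in the sequence, then evaluate the mutual information induced by $P$ on that averaged channel. A minimal sketch with hypothetical numbers (two BSC states and illustrative frequencies):

```python
import numpy as np

def mutual_information(P_x: np.ndarray, W: np.ndarray) -> float:
    """I(X;Y) in bits for input distribution P_x and channel matrix W,
    where row W[x] is the conditional output distribution given x."""
    P_xy = P_x[:, None] * W            # joint distribution of (X, Y)
    P_y = P_xy.sum(axis=0)             # output marginal
    mask = P_xy > 0
    return float(np.sum(P_xy[mask] *
                        np.log2(P_xy[mask] / (P_x[:, None] * P_y)[mask])))

# Hypothetical two-state example: a BSC whose crossover probability
# depends on the state; the sequence visits states 0 and 1 with
# empirical frequencies 0.7 and 0.3.
W_states = [np.array([[0.9, 0.1], [0.1, 0.9]]),   # good state
            np.array([[0.6, 0.4], [0.4, 0.6]])]   # bad state
freqs = [0.7, 0.3]

W_avg = sum(f * Ws for f, Ws in zip(freqs, W_states))  # state-averaged channel
P = np.array([0.5, 0.5])           # uniform input is optimal for any BSC
empirical_capacity = mutual_information(P, W_avg)
print(f"empirical capacity = {empirical_capacity:.4f} bits/use")
```

Here the averaged channel is a BSC with crossover probability $0.7 \cdot 0.1 + 0.3 \cdot 0.4 = 0.19$, and since the uniform input is capacity-achieving for every BSC, the computed mutual information equals the empirical capacity $1 - H_b(0.19)$.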
This paper studies a two-user state-dependent Gaussian multiple-access channel (MAC) with state noncausally known at one encoder. Two scenarios are considered: i) each user wishes to communicate an independent message to the common receiver, and ii) the two encoders send a common message to the receiver and the non-cognitive encoder (i.e., the encoder that does not know the state) sends an independent individual message (this model is also known as the MAC with degraded message sets). For both scenarios, new outer bounds on the capacity region are derived, which improve uniformly over the best known outer bounds. In the first scenario, the two corner points of the capacity region as well as the sum-rate capacity are established, and it is shown that a single-letter solution is adequate to achieve both the corner points and the sum-rate capacity. Furthermore, the full capacity region is characterized in situations in which the sum-rate capacity is equal to the capacity of the helper problem. The proof exploits the optimal-transportation idea of Polyanskiy and Wu (which was used previously to establish an outer bound on the capacity region of the interference channel) and the worst-case Gaussian noise result for the case in which the input and the noise are dependent.