We study a two-user state-dependent generalized multiple-access channel (GMAC) with correlated states. It is assumed that each encoder has \emph{noncausal} access to channel state information (CSI). We develop an achievable rate region by employing rate-splitting, block Markov encoding, Gelfand--Pinsker multicoding, superposition coding, and joint typicality decoding. In the proposed scheme, the encoders use a partial decoding strategy to collaborate in the next block, and the receiver uses a backward decoding strategy with joint unique decoding at each stage. Our achievable rate region includes several previously known regions proposed in the literature for different scenarios of multiple-access and relay channels. We then consider two Gaussian GMACs with additive interference. In the first model, we assume that the interference is known noncausally at both encoders and construct a multi-layer Costa precoding scheme that \emph{completely} removes the effect of the interference. In the second model, we consider a doubly dirty Gaussian GMAC in which each interference signal is known noncausally at only one encoder. We derive an inner bound and analyze the achievable rate region for the latter model and, interestingly, prove that if one of the encoders knows the full CSI, there exists an achievable rate region that is \emph{independent} of the power of the interference.
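As a brief reminder of the scalar Costa precoding building block underlying the multi-layer scheme above (a sketch only, with notation that is not necessarily the paper's): for $Y = X + S + Z$ with input power constraint $P$, noise $Z \sim \mathcal{N}(0,N)$, and interference $S$ known noncausally at the encoder, choosing the Gelfand--Pinsker auxiliary $U = X + \alpha S$ with $\alpha = P/(P+N)$ achieves
\[
R \;=\; I(U;Y) - I(U;S) \;=\; \tfrac{1}{2}\log\!\Bigl(1 + \tfrac{P}{N}\Bigr),
\]
the interference-free capacity, which is the sense in which the interference is removed completely.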
In the scalar dirty multiple-access channel, in addition to Gaussian noise, two additive interference signals are present, each known noncausally to a single transmitter. It was shown by Philosof et al. that for strong interferences, an i.i.d. ensemble of codes does not achieve the capacity region. Rather, a structured-codes approach was presented, which was shown to be optimal in the limit of high signal-to-noise ratios, where the sum capacity is dictated by the minimal (bottleneck) channel gain. In this paper, we consider the multiple-input multiple-output (MIMO) variant of this setting. To incorporate structured codes in this case, one can utilize matrix decompositions that transform the channel into effective parallel scalar dirty multiple-access channels. This approach, however, suffers from a bottleneck effect for each effective scalar channel, and therefore the achievable rates strongly depend on the chosen decomposition. It is shown that a recently proposed decomposition, in which the diagonals of the effective channel matrices are equal up to a scaling factor, is optimal at high signal-to-noise ratios under an equal-rank assumption. This approach is then extended to any number of transmitters. Finally, an application to physical-layer network coding for the MIMO two-way relay channel is presented.
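A hedged sketch of the decomposition step described above (the notation is illustrative, not the paper's): if the channel matrices $H_1, H_2$ of the two transmitters admit joint triangularizations
\[
H_k \;=\; U\, T_k\, V_k^{*}, \qquad k = 1,2,
\]
with a common unitary $U$ on the receiver side, unitary precoders $V_k$, and upper-triangular $T_k$ satisfying $\operatorname{diag}(T_1) = c\,\operatorname{diag}(T_2)$ for some scalar $c$, then precoding with $V_k$ at transmitter $k$ and filtering with $U^{*}$ at the receiver reduces the MIMO channel to parallel scalar dirty multiple-access channels whose gain pairs are the matched diagonal entries, so that no subchannel is penalized by a mismatched bottleneck.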
In this work we show how an improved lower bound on the error exponent of the memoryless multiple-access (MAC) channel is attained via the use of linear codes, thus demonstrating that structure can be beneficial even in cases where there is no capacity gain. We show that if the MAC channel is modulo-additive, then any error probability, and hence any error exponent, achievable by a linear code for the corresponding single-user channel is also achievable for the MAC channel. Specifically, for an alphabet of prime cardinality, where linear codes achieve the best known exponents in the single-user setting and the optimal exponent above the critical rate, this performance carries over to the MAC setting. At least at low rates, where expurgation is needed, our approach strictly improves performance over previous results, in which expurgation was used for at most one of the users. Even when the MAC channel is not additive, it may be transformed into such a channel. While the transformation is lossy, we show that in some nearly additive cases the distributed structure gain outweighs the loss, and thus the error exponent can improve upon the best known error exponent for these cases as well. Finally, we apply a similar approach to the Gaussian MAC channel. We obtain an improvement over the best known achievable exponent, given by Gallager, for certain rate pairs, using lattice codes that satisfy a nesting condition.
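A hedged sketch of the modulo-additive construction (a simplification of the argument above): let $Y = X_1 \oplus X_2 \oplus Z$ over $\mathbb{Z}_p$ and let $\mathcal{C}$ be a linear code that is good for the single-user channel $Y = X \oplus Z$. Splitting $\mathcal{C} = \mathcal{C}_1 \oplus \mathcal{C}_2$ as a direct sum of linear subcodes and letting user $k$ transmit codewords of $\mathcal{C}_k$, the receiver observes
\[
Y^n \;=\; c_1 \oplus c_2 \oplus Z^n, \qquad c_1 \oplus c_2 \in \mathcal{C},
\]
so it may decode the sum $c_1 \oplus c_2$ exactly as in the single-user setting and then recover $(c_1, c_2)$ uniquely from the direct-sum structure; the error probability, and hence the error exponent, of $\mathcal{C}$ on the single-user channel thus carries over to any rate pair $(R_1, R_2)$ whose sum equals the rate of $\mathcal{C}$.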
In this paper we introduce the two-user asynchronous cognitive multiple-access channel (ACMAC). This channel model includes two transmitters: an uninformed one, and an informed one that knows, prior to the beginning of transmission, the message the uninformed transmitter is about to send. We assume that the channel from the uninformed transmitter to the receiver suffers a fixed but unknown delay. We further introduce a modified model, referred to as the asynchronous codeword cognitive multiple-access channel (ACC-MAC), which differs from the ACMAC in that the informed user knows the signal to be transmitted by the other user rather than the message it is about to transmit. We state inner and outer bounds on the ACMAC and ACC-MAC capacity regions, and we specialize the results to the Gaussian case. Further, we characterize the capacity regions of these channels in terms of multi-letter expressions. Finally, we provide an example that illustrates the difference between message side information and codeword side information.
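For concreteness, a Gaussian instance of the ACMAC can be written (in illustrative form; the precise model is defined in the paper) as
\[
Y_i \;=\; X_{1,i-d} + X_{2,i} + Z_i,
\]
where user 1 is the uninformed transmitter, $d$ is the fixed delay that is not known a priori, and $Z_i$ is white Gaussian noise; the informed user 2 knows user 1's message before transmission begins (ACMAC), or the codeword $X_1^n$ itself (ACC-MAC).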
In this paper, we study the problem of secret communication over a compound multiple-access channel (MAC). In this channel, we assume that one of the transmitted messages is confidential: it must be decoded only by its corresponding receiver and kept secret from the other receiver. For this proposed setting (compound MAC with confidential messages), we derive general inner and outer bounds on the secrecy capacity region. As examples, we also investigate the less noisy and Gaussian versions of this channel.
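Here "kept secret" is meant in the usual weak (normalized) sense; writing $W_c$ for the confidential message and $Y_2^n$ for the other receiver's channel output, the constraint reads (up to the paper's exact formulation)
\[
\tfrac{1}{n}\, I\bigl(W_c; Y_2^n\bigr) \;\longrightarrow\; 0 \qquad \text{as } n \to \infty.
\]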
This paper studies the problem of secure communication over a K-transmitter multiple-access channel in the presence of an external eavesdropper, subject to a joint secrecy constraint (i.e., the information leakage rate from the collection of K messages to the eavesdropper is made vanishing). We establish the corresponding joint secrecy achievable rate region. To this end, our results build upon two techniques in addition to standard information-theoretic methods. The first is a generalization of Chia and El Gamal's lemma on the entropy bound for a set of codewords given partial information. The second is a compact representation of a list of sets that, together with properties of mutual information, leads to an efficient Fourier--Motzkin elimination. These two approaches may also be of independent interest in other contexts.
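In the notation typically used for this setting (illustrative rather than the paper's exact statement), the joint secrecy constraint on the messages $W_1,\dots,W_K$ with eavesdropper observation $Z^n$ reads
\[
\tfrac{1}{n}\, I\bigl(W_1,\dots,W_K; Z^n\bigr) \;\longrightarrow\; 0 \qquad \text{as } n \to \infty,
\]
which is at least as strong as requiring each individual leakage rate $\tfrac{1}{n} I(W_k; Z^n)$ to vanish. As a reminder of the elimination step, Fourier--Motzkin elimination removes an auxiliary variable by combining each of its lower bounds with each of its upper bounds; for instance, eliminating $R_0$ from $R_0 \ge a$, $R_1 + R_0 \le b$, and $R_2 + R_0 \le c$ yields $R_1 \le b - a$ and $R_2 \le c - a$.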