
The Dirty MIMO Multiple-Access Channel

Added by Anatoly Khina
Publication date: 2015
Language: English





In the scalar dirty multiple-access channel, in addition to Gaussian noise, two additive interference signals are present, each known non-causally to a single transmitter. Philosof et al. showed that for strong interferences, an i.i.d. ensemble of codes does not achieve the capacity region. Rather, they presented a structured-codes approach, which is optimal in the limit of high signal-to-noise ratios, where the sum-capacity is dictated by the minimal (bottleneck) channel gain. In this paper, we consider the multiple-input multiple-output (MIMO) variant of this setting. To incorporate structured codes in this case, one can utilize matrix decompositions that transform the channel into effective parallel scalar dirty multiple-access channels. This approach, however, suffers from a bottleneck effect for each effective scalar channel, and therefore the achievable rates strongly depend on the chosen decomposition. It is shown that a recently proposed decomposition, in which the diagonals of the effective channel matrices are equal up to a scaling factor, is optimal at high signal-to-noise ratios, under an equal-rank assumption. This approach is then extended to any number of transmitters. Finally, an application to physical-layer network coding for the MIMO two-way relay channel is presented.
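To see why the choice of decomposition matters, the following sketch (an illustration only, not the construction from the paper) computes per-subchannel bottlenecks after triangularizing each user's channel separately. The per-subchannel rate expression and the use of independent QR decompositions are assumptions made for illustration; the decomposition advocated in the paper instead equalizes the diagonals of the two effective matrices up to a scaling factor, removing the per-subchannel mismatch.

```python
import numpy as np

def bottleneck_sum_rate(diag1, diag2, snr):
    """Illustrative high-SNR proxy: each effective scalar dirty MAC is
    limited by the weaker (bottleneck) of the two users' diagonal gains.
    The per-subchannel formula 0.5*log2(1 + min(|d1|,|d2|)^2 * SNR) is an
    assumption used only to show the dependence on the decomposition."""
    d_min = np.minimum(np.abs(diag1), np.abs(diag2))
    return float(np.sum(0.5 * np.log2(1.0 + d_min ** 2 * snr)))

rng = np.random.default_rng(0)
H1 = rng.standard_normal((2, 2))  # toy channel of transmitter 1
H2 = rng.standard_normal((2, 2))  # toy channel of transmitter 2

# Separate QR decompositions triangularize each channel, but their diagonals
# generally differ, so every effective scalar channel pays its own bottleneck.
R1 = np.linalg.qr(H1)[1]
R2 = np.linalg.qr(H2)[1]
print("bottleneck sum-rate proxy (separate QR):",
      bottleneck_sum_rate(np.diag(R1), np.diag(R2), snr=1e3))
```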



Related research

Tal Philosof, Ram Zamir (2008)
For general memoryless systems, the typical information-theoretic solution - when it exists - has a single-letter form. This reflects the fact that optimum performance can be approached by a random code (or a random binning scheme), generated using independent and identically distributed copies of some single-letter distribution. Is that the form of the solution of any (information-theoretic) problem? In fact, some counter-examples are known. The most famous is the two-help-one problem: Korner and Marton showed that if we want to decode the modulo-two sum of two binary sources from their independent encodings, then linear coding is better than random coding. In this paper we provide another counter-example, the doubly-dirty multiple-access channel (MAC). Like the Korner-Marton problem, this is a multi-terminal scenario where side information is distributed among several terminals; each transmitter knows part of the channel interference, but the receiver is not aware of any part of it. We give an explicit solution for the capacity region of a binary version of the doubly-dirty MAC, demonstrate how the capacity region can be approached using a linear coding scheme, and prove that the best known single-letter region is strictly contained in it. We also state a conjecture regarding a similar rate loss of single-letter characterization in the Gaussian case.
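The structural ingredient behind the Korner-Marton result is that a linear map commutes with the modulo-two sum: if both encoders apply the same binary matrix, the modulo-two sum of their outputs equals the encoding of the modulo-two sum of the sources. A minimal sketch of this identity (with an arbitrary toy matrix, not a code from the paper) is:

```python
import numpy as np

# Both encoders use the SAME linear map A over GF(2); the matrix below is an
# arbitrary toy choice, not a construction from the paper.
rng = np.random.default_rng(1)
n, m = 8, 5
A = rng.integers(0, 2, size=(m, n))

x = rng.integers(0, 2, size=n)   # source of user 1
y = rng.integers(0, 2, size=n)   # source of user 2

enc_x = A @ x % 2                # encoding (syndrome) of user 1
enc_y = A @ y % 2                # encoding (syndrome) of user 2

# Linearity over GF(2): A(x) + A(y) = A(x XOR y) (mod 2), so the decoder can
# work directly with an encoding of the modulo-two sum of the sources.
assert np.array_equal((enc_x + enc_y) % 2, A @ ((x + y) % 2) % 2)
```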
In this paper we introduce the two-user asynchronous cognitive multiple-access channel (ACMAC). This channel model includes two transmitters: an uninformed one, and an informed one which knows, prior to the beginning of a transmission, the message that the uninformed transmitter is about to send. We assume that the channel from the uninformed transmitter to the receiver suffers a fixed but unknown delay. We further introduce a modified model, referred to as the asynchronous codeword cognitive multiple-access channel (ACC-MAC), which differs from the ACMAC in that the informed user knows the signal that is to be transmitted by the other user, rather than the message that it is about to transmit. We state inner and outer bounds on the ACMAC and ACC-MAC capacity regions, and we specialize the results to the Gaussian case. Further, we characterize the capacity regions of these channels in terms of multi-letter expressions. Finally, we provide an example which instantiates the difference between message side-information and codeword side-information.
In this paper, we study the problem of secret communication over a compound multiple-access channel (MAC). In this channel, we assume that one of the transmitted messages is confidential: it is decoded only by its corresponding receiver and kept secret from the other receiver. For this proposed setting (compound MAC with confidential messages), we derive general inner and outer bounds on the secrecy capacity region. As examples, we also investigate the less noisy and Gaussian cases of this channel.
In this work we show how an improved lower bound to the error exponent of the memoryless multiple-access channel (MAC) is attained via the use of linear codes, thus demonstrating that structure can be beneficial even in cases where there is no capacity gain. We show that if the MAC is modulo-additive, then any error probability, and hence any error exponent, achievable by a linear code for the corresponding single-user channel is also achievable for the MAC. Specifically, for an alphabet of prime cardinality, where linear codes achieve the best known exponents in the single-user setting and the optimal exponent above the critical rate, this performance carries over to the MAC setting. At least at low rates, where expurgation is needed, our approach strictly improves performance over previous results, where expurgation was used for at most one of the users. Even when the MAC is not additive, it may be transformed into such a channel. While the transformation is lossy, we show that the distributed structure gain in some nearly additive cases outweighs the loss, and thus the error exponent can improve upon the best known error exponent for these cases as well. Finally, we apply a similar approach to the Gaussian MAC. We obtain an improvement over the best known achievable exponent, given by Gallager, for certain rate pairs, using lattice codes which satisfy a nesting condition.
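The modulo-additive case can be illustrated by the closure property that drives the argument: over a modulo-q channel, the sum of two codewords of the same linear code is again a codeword, so the receiver effectively faces a single-user channel for that code. The tiny generator matrix below is an arbitrary choice for illustration, not one of the codes analyzed in the paper.

```python
import numpy as np

q = 2                                   # toy modulo-additive alphabet size
G = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]])            # arbitrary toy generator matrix
rng = np.random.default_rng(2)

m1 = rng.integers(0, q, size=2)         # message of user 1
m2 = rng.integers(0, q, size=2)         # message of user 2
c1 = m1 @ G % q
c2 = m2 @ G % q

# Closure under addition: the noiseless channel output c1 + c2 (mod q) is
# itself a codeword of the same linear code, encoding m1 + m2 (mod q).
assert np.array_equal((c1 + c2) % q, ((m1 + m2) % q) @ G % q)
```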
This paper studies the problem of secure communication over a K-transmitter multiple-access channel in the presence of an external eavesdropper, subject to a joint secrecy constraint (i.e., the information leakage rate from the collection of K messages to the eavesdropper is made vanishing). As a result, we establish the joint secrecy achievable rate region. To this end, our results build upon two techniques in addition to standard information-theoretic methods. The first is a generalization of Chia and El Gamal's lemma on the entropy bound for a set of codewords given partial information. The second is to utilize a compact representation of a list of sets that, together with properties of mutual information, leads to an efficient Fourier-Motzkin elimination. These two approaches could also be of independent interest in other contexts.
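Fourier-Motzkin elimination itself is a standard procedure for projecting out auxiliary rate variables from a system of linear inequalities; the compact set representation mentioned above is what keeps it efficient in the paper. A generic, unoptimized sketch of one elimination step (not the paper's method) is shown below.

```python
from itertools import product

def eliminate(ineqs, j):
    """One Fourier-Motzkin step: remove variable j from inequalities given as
    (coeffs, bound), meaning sum_i coeffs[i]*x[i] <= bound.
    Minimal sketch: no redundancy removal, purely for illustration."""
    lower, upper, rest = [], [], []
    for a, b in ineqs:
        (upper if a[j] > 0 else lower if a[j] < 0 else rest).append((a, b))
    for (al, bl), (au, bu) in product(lower, upper):
        s_l, s_u = au[j], -al[j]        # positive scalings that cancel x_j
        a_new = [s_l * x + s_u * y for x, y in zip(al, au)]
        rest.append((a_new, s_l * bl + s_u * bu))
    return rest

# Hypothetical example: 0 <= R1 <= 1, 0 <= R2 <= 2, R1 + R2 <= 2.5;
# eliminating R2 (index 1) leaves constraints on R1 alone.
ineqs = [([1, 0], 1.0), ([-1, 0], 0.0),
         ([0, 1], 2.0), ([0, -1], 0.0),
         ([1, 1], 2.5)]
print(eliminate(ineqs, 1))
```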