
Universal Covertness for Discrete Memoryless Sources

Published by: Remi Chou
Publication date: 2018
Research field: Information engineering
Paper language: English





Consider a sequence $X^n$ of length $n$ emitted by a Discrete Memoryless Source (DMS) with unknown distribution $p_X$. The objective is to construct a lossless source code that maps $X^n$ to a sequence $\widehat{Y}^m$ of length $m$ that is indistinguishable, in terms of Kullback-Leibler divergence, from a sequence emitted by another DMS with known distribution $p_Y$. The main result is the existence of a coding scheme that performs this task with an optimal ratio $m/n$ equal to $H(X)/H(Y)$, the ratio of the Shannon entropies of the two distributions, as $n$ goes to infinity. The coding scheme overcomes the challenges created by the lack of knowledge about $p_X$ by a type-based universal lossless source coding scheme that produces as output an almost uniformly distributed sequence, followed by another type-based coding scheme that jointly performs source resolvability and universal lossless source coding. The result recovers and extends previous results that either assume $p_X$ or $p_Y$ uniform, or $p_X$ known. The price paid for these generalizations is the use of common randomness with vanishing rate, whose length scales as the logarithm of $n$. By allowing common randomness larger than the logarithm of $n$ but still negligible compared to $n$, a constructive low-complexity encoding and decoding counterpart to the main result is also provided for binary sources by means of polar codes.
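As a quick illustration of the asymptotically optimal rate, the ratio $m/n = H(X)/H(Y)$ can be computed directly from the two distributions. A minimal sketch with hypothetical distributions (the specific values of `p_X` and `p_Y` are illustrative, not from the paper):

```python
import math

def shannon_entropy(p):
    """Shannon entropy of a probability vector, in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical distributions: p_X stands in for the (in practice unknown)
# source DMS, p_Y is the known target DMS to be mimicked.
p_X = [0.5, 0.25, 0.25]
p_Y = [0.5, 0.5]

# Optimal output/input length ratio m/n as n -> infinity.
ratio = shannon_entropy(p_X) / shannon_entropy(p_Y)
print(ratio)  # H(X)/H(Y) = 1.5/1.0 = 1.5
```

In the actual scheme the encoder never sees $p_X$; it estimates it through the type (empirical distribution) of the observed sequence, which is why the construction is called universal.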




Read also

We show that the Extrinsic Information about the coded bits of any good (capacity-achieving) code operating over a wide class of discrete memoryless channels (DMC) is zero when channel capacity is below the code rate and a positive constant otherwise; that is, the Extrinsic Information Transfer (EXIT) chart is a step function of channel quality for any capacity-achieving code. It follows that, for a common class of iterative receivers where the error-correcting decoder must operate at the first iteration at a rate above capacity (such as in turbo equalization, turbo channel estimation, parallel and serial concatenated coding, and the like), classical good codes which achieve capacity over the DMC are not effective and should be replaced by different new ones. Another meaning of the results is that a good code operating at a rate above channel capacity falls apart into its individual transmitted symbols, in the sense that all the information about a coded transmitted symbol is contained in the corresponding received symbol and no information about it can be inferred from the other received symbols. The binary-input additive white Gaussian noise channel is treated in part 1 of this report. Part 2 extends the results to the binary symmetric channel and to the binary erasure channel and provides a heuristic extension to a wider class of channel models.
We develop a low-complexity coding scheme to achieve covert communications over binary-input discrete memoryless channels (BI-DMCs). We circumvent the impossibility of covert communication with linear codes by introducing non-linearity through the use of pulse position modulation (PPM) and multilevel coding (MLC). We show that the MLC-PPM scheme exhibits many appealing properties; in particular, the channel at a given index level remains stationary as the number of levels increases, which allows one to use families of channel capacity- and channel resolvability-achieving codes to concretely instantiate the covert communication scheme.
We derive lower and upper bounds on the reliability function of the discrete memoryless multiple-access channel (MAC) with noiseless feedback and variable-length codes (VLCs). For the upper bound, we use proof techniques of Burnashev for the point-to-point case. Also, we adopt the techniques used to prove the converse for the feedback capacity of the MAC. For the lower bound on the error exponent, we present a coding scheme consisting of a data stage and a confirmation stage. In the data stage, any arbitrary feedback capacity-achieving code is used. In the confirmation stage, each transmitter sends one bit of information to the receiver using a pair of codebooks of size two, one for each transmitter. The codewords at this stage are selected randomly according to an appropriately optimized joint probability distribution. The bounds increase linearly with respect to a specific Euclidean distance measure defined between the transmission rate pair and the capacity boundary. The lower and upper bounds match for a class of MACs.
A memoryless state-dependent broadcast channel (BC) is considered, where the transmitter wishes to convey two private messages to two receivers while simultaneously estimating the respective states via generalized feedback. The model at hand is motivated by a joint radar and communication system where radar and data applications share the same frequency band. For physically degraded BCs with i.i.d. state sequences, we characterize the capacity-distortion tradeoff region. For general BCs, we provide inner and outer bounds on the capacity-distortion region, as well as a sufficient condition under which it is equal to the product of the capacity region and the set of achievable distortions. Interestingly, the proposed synergetic design significantly outperforms a conventional approach that splits the resources either for sensing or communication.
Erdal Arikan, 2009
A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\{W_N^{(i)} : 1 \le i \le N\}$ such that, as $N$ becomes large, the fraction of indices $i$ for which $I(W_N^{(i)})$ is near 1 approaches $I(W)$ and the fraction for which $I(W_N^{(i)})$ is near 0 approaches $1-I(W)$. The polarized channels $\{W_N^{(i)}\}$ are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC $W$ with $I(W)>0$ and any target rate $R < I(W)$, there exists a sequence of polar codes $\{\mathscr{C}_n ; n \ge 1\}$ such that $\mathscr{C}_n$ has block-length $N=2^n$, rate $\ge R$, and probability of block error under successive cancellation decoding bounded as $P_{e}(N,R) \le O(N^{-\frac{1}{4}})$ independently of the code rate. This performance is achievable by encoders and decoders with complexity $O(N \log N)$ for each.
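For the binary erasure channel the polarization recursion has a simple closed form: one polarization step transforms a BEC with erasure probability $e$ into a pair of channels with erasure probabilities $2e - e^2$ (the degraded channel) and $e^2$ (the upgraded channel). A minimal sketch of iterating this recursion (BEC-specific; the general construction in the paper applies to any B-DMC):

```python
def polarize_bec(eps, n):
    """Erasure probabilities of the N = 2^n channels synthesized
    by polarizing N copies of a BEC(eps)."""
    levels = [eps]
    for _ in range(n):
        nxt = []
        for e in levels:
            nxt.append(2 * e - e * e)  # "minus" transform: worse channel
            nxt.append(e * e)          # "plus" transform: better channel
        levels = nxt
    return levels

chans = polarize_bec(0.5, 10)
# One polarization step preserves the average erasure probability,
# so total capacity is conserved; individual channels drift toward
# erasure probability 0 or 1 as n grows.
frac_good = sum(1 for e in chans if e < 0.01) / len(chans)
```

As $n$ grows, `frac_good` approaches the symmetric capacity $I(W) = 1 - \varepsilon = 0.5$, which is the polarization phenomenon the abstract describes.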