
Joint Coordination-Channel Coding for Strong Coordination over Noisy Channels Based on Polar Codes

Posted by Sarah Obead
Publication date: 2017
Research field: Information Engineering
Paper language: English





We construct a joint coordination-channel polar coding scheme for strong coordination of actions between two agents $\mathsf{X}$ and $\mathsf{Y}$, which communicate over a discrete memoryless channel (DMC) such that the joint distribution of actions follows a prescribed probability distribution. We show that polar codes are able to achieve our previously established inner bound to the strong noisy coordination capacity region and thus provide a constructive alternative to a random coding proof. Our polar coding scheme also offers a constructive solution to a channel simulation problem where a DMC and shared randomness are employed together to simulate another DMC. In particular, our proposed solution is able to utilize the randomness of the DMC to reduce the amount of local randomness required to generate the sequence of actions at agent $\mathsf{Y}$. By leveraging our earlier random coding results for this problem, we conclude that the proposed joint coordination-channel coding scheme strictly outperforms a separate scheme in terms of achievable communication rate for the same amount of randomness injected into both systems.
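For context, strong coordination is typically formalized by requiring that the joint distribution induced by the code be indistinguishable, in total variation, from the i.i.d. target distribution as the block length grows. A minimal statement of this standard criterion (the symbols $P_{X^n Y^n}$, $q_{XY}$, and block length $n$ are assumed notation, not taken from the abstract) is

$$\lim_{n \to \infty} \left\| P_{X^n Y^n} - \prod_{i=1}^{n} q_{XY}(x_i, y_i) \right\|_{\mathrm{TV}} = 0,$$

where $P_{X^n Y^n}$ denotes the distribution of the action sequences induced by the coding scheme and $q_{XY}$ is the prescribed joint distribution of actions.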


Read also

We study the problem of strong coordination of the actions of two nodes $X$ and $Y$ that communicate over a discrete memoryless channel (DMC) such that the actions follow a prescribed joint probability distribution. We propose two novel random coding schemes and a polar coding scheme for this noisy strong coordination problem, and derive inner bounds for the respective strong coordination capacity region. The first scheme is a joint coordination-channel coding scheme that utilizes the randomness provided by the DMC to reduce the amount of local randomness required to generate the sequence of actions at Node $Y$. Based on this random coding scheme, we provide a characterization of the capacity region for two special cases of the noisy strong coordination setup, namely, when the actions at Node $Y$ are determined by Node $X$ and when the DMC is a deterministic channel. The second scheme exploits separate coordination and channel coding where local randomness is extracted from the channel after decoding. The third scheme is a joint coordination-channel polar coding scheme for strong coordination. We show that polar codes are able to achieve the established inner bound to the noisy strong coordination capacity region and thus provide a constructive alternative to a random coding proof. Our polar coding scheme also offers a constructive solution to a channel simulation problem where a DMC and shared randomness are employed together to simulate another DMC. Finally, by leveraging the random coding results for this problem, we present an example in which the proposed joint scheme is able to strictly outperform the separate scheme in terms of achievable communication rate for the same amount of injected randomness into both systems. Thus, we establish the sub-optimality of the separation of strong coordination and channel coding with respect to the communication rate over the DMC.
We study the problem of strong coordination of actions of two agents $X$ and $Y$ that communicate over a noisy communication channel such that the actions follow a given joint probability distribution. We propose two novel schemes for this noisy strong coordination problem, and derive inner bounds for the underlying strong coordination capacity region. The first scheme is a joint coordination-channel coding scheme that utilizes the randomness provided by the communication channel to reduce the local randomness required in generating the action sequence at agent $Y$. The second scheme exploits separate coordination and channel coding where local randomness is extracted from the channel after decoding. Finally, we present an example in which the joint scheme is able to outperform the separate scheme in terms of coordination rate.
We exploit the redundancy of the language-based source to help polar decoding. By judging the validity of decoded words in the decoded sequence with the help of a dictionary, the polar list decoder constantly detects erroneous paths after every few bits are decoded. This path-pruning technique based on joint decoding has advantages over stand-alone polar list decoding in that most decoding errors in early stages are corrected. In order to facilitate the joint decoding, we first propose a construction of a dynamic dictionary using a trie and show an efficient way to trace the dictionary during decoding. Then we propose a joint decoding scheme of polar codes taking into account both information from the channel and the source. The proposed scheme has the same decoding complexity as the list decoding of polar codes. A list-size adaptive joint decoding is further implemented to largely reduce the decoding complexity. We conclude by simulation that the joint decoding schemes outperform stand-alone polar codes with CRC-aided successive cancellation list decoding by over 0.6 dB.
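To make the dictionary-aided path pruning concrete, the following Python sketch shows how a trie can answer the prefix-validity query a list decoder could pose after every few decoded symbols. The class and method names are illustrative assumptions, not the paper's implementation.

# Minimal sketch of a trie-based dictionary used to prune decoder paths.
# Names (TrieNode, Trie, has_prefix) are illustrative, not taken from the paper.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    def __init__(self, words):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def has_prefix(self, prefix):
        # True if some dictionary word starts with `prefix`; this is the check
        # a list decoder can use to drop invalid paths early.
        node = self.root
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return False
        return True

# Example: a decoder path whose partially decoded word is not a valid prefix
# can be pruned from the list.
dictionary = Trie(["channel", "code", "coding", "polar"])
print(dictionary.has_prefix("cod"))   # True  -> keep the path
print(dictionary.has_prefix("cqd"))   # False -> prune the path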
Polar codes are introduced for discrete memoryless broadcast channels. For $m$-user deterministic broadcast channels, polarization is applied to map uniformly random message bits from $m$ independent messages to one codeword while satisfying broadcast constraints. The polarization-based codes achieve rates on the boundary of the private-message capacity region. For two-user noisy broadcast channels, polar implementations are presented for two information-theoretic schemes: i) Cover's superposition codes; ii) Marton's codes. Due to the structure of polarization, constraints on the auxiliary and channel-input distributions are identified to ensure proper alignment of polarization indices in the multi-user setting. The codes achieve rates on the capacity boundary of a few classes of broadcast channels (e.g., binary-input stochastically degraded). The complexity of encoding and decoding is $O(n \log n)$, where $n$ is the block length. In addition, polar code sequences obtain a stretched-exponential decay of $O(2^{-n^{\beta}})$ of the average block error probability, where $0 < \beta < 0.5$.
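As a rough illustration of where the $O(n \log n)$ encoding complexity comes from, the Python sketch below implements the basic polar transform $x = u F^{\otimes n}$ with kernel $F = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$ via the standard two-way recursion (bit-reversal permutation omitted). It is a generic illustration, not the broadcast-channel construction described above.

# Minimal sketch of polar encoding x = u F^{\otimes n} over GF(2).
# The recursion T(n) = 2 T(n/2) + O(n) gives the O(n log n) complexity
# quoted in the abstract; the bit-reversal permutation is omitted for brevity.

def polar_encode(u):
    """Encode a bit list u (length a power of two) with the polar kernel."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    upper = [u[i] ^ u[i + half] for i in range(half)]  # first half XOR second half
    lower = u[half:]                                   # second half passes through
    return polar_encode(upper) + polar_encode(lower)

# Example with block length n = 8.
print(polar_encode([1, 0, 1, 1, 0, 0, 1, 0]))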
Because of its high data density and longevity, DNA is emerging as a promising candidate for satisfying increasing data storage needs. Compared to conventional storage media, however, data stored in DNA is subject to a wider range of errors resulting from various processes involved in the data storage pipeline. In this paper, we consider correcting duplication errors for both exact and noisy tandem duplications of a given length k. An exact duplication inserts a copy of a substring of length k of the sequence immediately after that substring, e.g., ACGT to ACGACGT, where k = 3, while a noisy duplication inserts a copy suffering from substitution noise, e.g., ACGT to ACGATGT. Specifically, we design codes that can correct any number of exact duplication errors and one noisy duplication error, where in the noisy duplication case the copy is at Hamming distance 1 from the original. Our constructions rely upon recovering the duplication root of the stored codeword. We characterize the ways in which duplication errors manifest in the root of affected sequences and design efficient codes for correcting these error patterns. We show that the proposed construction is asymptotically optimal, in the sense that it has the same asymptotic rate as optimal codes correcting exact duplications only.
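As a small illustration of the error model, the Python sketch below inserts an exact tandem duplication of length k and then greedily strips exact copies to recover a duplication root. The function names are illustrative, the loop handles only exact duplications (the noisy case needs the paper's additional machinery), and uniqueness of the root is assumed rather than proved here.

# Minimal sketch of exact tandem duplication and root recovery.
# Illustrative only; not the paper's code construction.

def duplicate(seq, start, k):
    """Insert a copy of seq[start:start+k] immediately after that substring."""
    block = seq[start:start + k]
    return seq[:start + k] + block + seq[start + k:]

def duplication_root(seq, k):
    """Greedily remove exact length-k tandem duplications until none remain."""
    changed = True
    while changed:
        changed = False
        for i in range(len(seq) - 2 * k + 1):
            if seq[i:i + k] == seq[i + k:i + 2 * k]:
                seq = seq[:i + k] + seq[i + 2 * k:]  # drop the duplicated copy
                changed = True
                break
    return seq

# Example from the abstract: ACGT -> ACGACGT with k = 3.
s = duplicate("ACGT", 0, 3)
print(s)                        # ACGACGT
print(duplication_root(s, 3))   # ACGT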