
On Multi-Channel Huffman Codes for Asymmetric-Alphabet Channels

Added by Hoover H. F. Yin
Publication date: 2021
Language: English





Zero-error single-channel source coding has been studied extensively over the past decades. Its natural multi-channel generalization, however, is not well investigated. While the special case with multiple symmetric-alphabet channels was studied a decade ago, codes in that setting offer no compression advantage over single-channel codes, making them of little use in most applications. After essentially no development in the intervening decade, in this paper we break the stalemate by showing that it is possible to beat single-channel source codes in terms of compression when the channels have asymmetric alphabets. We present multi-channel analogs of several classical results in single-channel source coding, for example that a multi-channel Huffman code is an optimal tree-decodable code. We also give some evidence that finding an efficient construction of multi-channel Huffman codes may be hard. Nevertheless, we propose a suboptimal code construction whose redundancy is guaranteed to be no larger than that of an optimal single-channel source code.
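For context, the single-channel baseline that the paper generalizes can be summarized in a few lines. The sketch below (illustrative Python, not taken from the paper) builds an ordinary binary Huffman code by repeatedly merging the two least-probable subtrees; a multi-channel Huffman code instead assigns each source symbol a tuple of codewords across channels with possibly different alphabet sizes.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a classical binary Huffman code from a symbol -> frequency map."""
    # Each heap entry: (subtree weight, unique tiebreak, {symbol: codeword-so-far}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two least-probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        # Prepend '0' to one subtree's codewords and '1' to the other's.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

freqs = Counter("abracadabra")
code = huffman_code(freqs)
avg_len = sum(freqs[s] * len(code[s]) for s in freqs) / sum(freqs.values())
print(code, avg_len)
```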



Related Research

Michael B. Baer (2007)
This paper presents new lower and upper bounds for the compression rate of binary prefix codes optimized over memoryless sources according to various nonlinear codeword length objectives. Like the best-known redundancy bounds for minimum average redundancy coding (Huffman coding), these are expressed in terms of a form of entropy and/or the probability of an input symbol, often the most probable one. The bounds here, some of which are tight, improve on known bounds of the form $L \in [H, H+1)$, where $H$ is some form of entropy in bits (or, in the case of redundancy objectives, 0) and $L$ is the length objective, also in bits. The objectives explored here include exponential-average length, maximum pointwise redundancy, and exponential-average pointwise redundancy (also called $d$th exponential redundancy). The first of these relates to various problems involving queueing, uncertainty, and lossless communications; the second relates to problems involving Shannon coding and universal modeling. For these two objectives we also explore the related problem of necessary and sufficient conditions for the shortest codeword of a code having a specific length.
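To make two of these objectives concrete, the sketch below (illustrative Python; the function names are mine, not from the paper) evaluates Campbell's exponential-average length $L_t = \frac{1}{t}\log_2\sum_i p_i 2^{t\ell_i}$, which tends to the ordinary average length as $t \to 0$, and the maximum pointwise redundancy $\max_i(\ell_i + \log_2 p_i)$ for a given set of codeword lengths.

```python
import math

def exp_avg_length(probs, lengths, t):
    # Campbell's exponential-average length: (1/t) * log2( sum_i p_i * 2^(t * l_i) ).
    return math.log2(sum(p * 2 ** (t * l) for p, l in zip(probs, lengths))) / t

def max_pointwise_redundancy(probs, lengths):
    # max_i ( l_i + log2 p_i ): worst-case excess over the ideal length -log2 p_i.
    return max(l + math.log2(p) for p, l in zip(probs, lengths))

# Dyadic source with an optimal prefix code (lengths l_i = -log2 p_i).
probs, lengths = [0.5, 0.25, 0.125, 0.125], [1, 2, 3, 3]
print(exp_avg_length(probs, lengths, t=1.0))     # 2.0 (exceeds the average length 1.75)
print(exp_avg_length(probs, lengths, t=1e-9))    # ~1.75, the ordinary average length
print(max_pointwise_redundancy(probs, lengths))  # 0.0 for this dyadic source
```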
We present new constructions of codes for asymmetric channels over both binary and nonbinary alphabets, based on methods of generalized code concatenation. For the binary asymmetric channel, our methods construct nonlinear single-error-correcting codes from ternary outer codes. We show that some of the Varshamov-Tenengolts-Constantin-Rao codes, a class of binary nonlinear codes for this channel, have a nice structure when viewed as ternary codes. In many cases, our ternary construction yields even better codes. For the nonbinary asymmetric channel, our methods construct linear codes for many lengths and distances that are superior to linear codes of the same length capable of correcting the same number of symmetric errors. In the binary case, Varshamov has shown that almost all good linear codes for the asymmetric channel are also good for the symmetric channel. Our results indicate that Varshamov's argument does not extend to the nonbinary case, i.e., one can find better linear codes for asymmetric channels than for symmetric ones.
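As a concrete anchor for one ingredient above: the binary Varshamov-Tenengolts codes have a one-line definition, $VT_a(n) = \{x \in \{0,1\}^n : \sum_{i=1}^n i\,x_i \equiv a \pmod{n+1}\}$. A brute-force sketch (illustrative Python, not the paper's concatenated construction) enumerates the classes:

```python
from itertools import product

def vt_codewords(n, a):
    # VT_a(n): binary words x of length n with sum(i * x_i, i = 1..n) == a mod (n+1).
    # Each class corrects a single asymmetric error; Levenshtein later showed the
    # same codes also correct a single deletion.
    return [x for x in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(x, start=1)) % (n + 1) == a]

# The n+1 classes partition {0,1}^n; their sizes for n = 4:
print([len(vt_codewords(4, a)) for a in range(5)])  # [4, 3, 3, 3, 3]
```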
We construct a joint coordination-channel polar coding scheme for strong coordination of actions between two agents $\mathsf{X}$ and $\mathsf{Y}$, which communicate over a discrete memoryless channel (DMC) such that the joint distribution of actions follows a prescribed probability distribution. We show that polar codes are able to achieve our previously established inner bound to the strong noisy coordination capacity region and thus provide a constructive alternative to a random coding proof. Our polar coding scheme also offers a constructive solution to a channel simulation problem where a DMC and shared randomness are together employed to simulate another DMC. In particular, our proposed solution is able to utilize the randomness of the DMC to reduce the amount of local randomness required to generate the sequence of actions at agent $\mathsf{Y}$. By leveraging our earlier random coding results for this problem, we conclude that the proposed joint coordination-channel coding scheme strictly outperforms a separate scheme in terms of achievable communication rate for the same amount of injected randomness into both systems.
In this paper, we focus on the two-user Gaussian interference channel (GIC) and study the Han-Kobayashi (HK) coding/decoding strategy with the objective of designing low-density parity-check (LDPC) codes. A code optimization algorithm is proposed which adopts a random perturbation technique via tracking the average mutual information. The degree distribution optimization and convergence threshold computation are carried out for strong and weak interference channels, employing binary phase-shift keying (BPSK). Under strong interference, it is observed that optimized codes operate close to the capacity boundary. For the case of weak interference, it is shown that via the newly designed codes, a nontrivial rate pair is achievable which is not attainable by single-user codes with time-sharing. Performance of the designed LDPC codes is also studied for finite block lengths through simulations of specific codes picked from the optimized degree distributions.
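For reference, one standard quantity in LDPC degree-distribution design (the general technique, not this paper's specific optimizer) is the design rate of an ensemble with edge-perspective distributions $\lambda(x) = \sum_i \lambda_i x^{i-1}$ and $\rho(x) = \sum_j \rho_j x^{j-1}$; a minimal Python sketch:

```python
def design_rate(lambda_coeffs, rho_coeffs):
    # Design rate of an LDPC ensemble from edge-perspective degree distributions,
    # given as {degree: coefficient} maps:
    #   R = 1 - (integral of rho) / (integral of lambda)
    #     = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i).
    int_lambda = sum(c / i for i, c in lambda_coeffs.items())
    int_rho = sum(c / j for j, c in rho_coeffs.items())
    return 1.0 - int_rho / int_lambda

# A (3,6)-regular ensemble: every variable node has degree 3, every check degree 6.
print(design_rate({3: 1.0}, {6: 1.0}))  # -> 0.5
```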
Polar codes are introduced for discrete memoryless broadcast channels. For $m$-user deterministic broadcast channels, polarization is applied to map uniformly random message bits from $m$ independent messages to one codeword while satisfying broadcast constraints. The polarization-based codes achieve rates on the boundary of the private-message capacity region. For two-user noisy broadcast channels, polar implementations are presented for two information-theoretic schemes: i) Cover's superposition codes; ii) Marton's codes. Due to the structure of polarization, constraints on the auxiliary and channel-input distributions are identified to ensure proper alignment of polarization indices in the multi-user setting. The codes achieve rates on the capacity boundary of a few classes of broadcast channels (e.g., binary-input stochastically degraded). The complexity of encoding and decoding is $O(n \log n)$, where $n$ is the block length. In addition, polar code sequences obtain a stretched-exponential decay $O(2^{-n^{\beta}})$ of the average block error probability, where $0 < \beta < 0.5$.
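The $O(n \log n)$ encoding complexity comes from the butterfly structure of the polar transform $x = u F^{\otimes m}$ with the $2 \times 2$ kernel $F = [[1, 0], [1, 1]]$. Below is a minimal sketch (illustrative Python; frozen-bit selection and the multi-user index alignment described above are omitted):

```python
def polar_transform(u):
    # Computes x = u * (F kron ... kron F) over GF(2), with F = [[1, 0], [1, 1]]
    # and len(u) = 2^m, via the O(n log n) butterfly recursion.
    x, n, step = list(u), len(u), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]   # butterfly: (a, b) -> (a xor b, b)
        step *= 2
    return x

print(polar_transform([1, 0, 1, 1]))  # -> [1, 1, 0, 1]
```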
