
Entropy coding with Variable Length Re-writing Systems

Published by: Herve Jegou
Publication date: 2005
Research field: Informatics Engineering
Paper language: English





This paper describes a new set of block source codes well suited for data compression. These codes are defined by sets of production rules of the form a.l -> b, where a in A is a value from the source alphabet A and l, b are small sequences of bits. These codes naturally encompass other Variable Length Codes (VLCs) such as Huffman codes. It is shown that these codes may have a mean description length similar to, or even shorter than, that of Huffman codes for the same encoding and decoding complexity. A first code design method, which preserves the lexicographic order in the bit domain, is described. The corresponding codes have the same mean description length (MDL) as the Huffman codes from which they are constructed; they therefore outperform, in terms of compression, the Hu-Tucker codes designed to offer the lexicographic property in the bit domain. A second construction method yields codes whose marginal bit probability converges to 0.5 as the sequence length increases, even when the probability distribution function is not known by the encoder.
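
As a rough illustration (not the authors' construction), the Python sketch below shows how production rules of the form a.l -> b could drive encoding: the encoder keeps a short pending bit string, and when symbol a arrives while the pending string ends in l, that suffix is rewritten into b. When every l is empty, the system reduces to an ordinary prefix code, which is how VLCs such as Huffman codes appear as a special case; the rule table here is a made-up example of exactly that case.

# Toy sketch of a rule-driven encoder: each production (symbol, suffix) -> bits
# rewrites the pending bit suffix when that symbol arrives.  With every suffix
# empty, this reduces to a plain prefix (Huffman-style) code.  The rules below
# are hypothetical and only illustrate that special case.
RULES = {
    ('a', ''): '0',
    ('b', ''): '10',
    ('c', ''): '110',
    ('d', ''): '111',
}

def encode(symbols, rules=RULES):
    out, pending = [], ''
    for s in symbols:
        # look for the longest pending suffix l such that a rule (s, l) exists
        for k in range(len(pending), -1, -1):
            l = pending[len(pending) - k:]
            if (s, l) in rules:
                out.append(pending[:len(pending) - k])  # bits no rule can rewrite anymore
                pending = rules[(s, l)]                 # rewrite l into b
                break
        else:
            raise ValueError(f"no rule for symbol {s!r}")
    return ''.join(out) + pending

def decode(bits, rules=RULES):
    # invert only the empty-suffix rules, i.e. the prefix-code special case
    inverse = {b: s for (s, l), b in rules.items() if l == ''}
    out, buf = [], ''
    for bit in bits:
        buf += bit
        if buf in inverse:
            out.append(inverse[buf])
            buf = ''
    return out

message = list('abacdb')
assert decode(encode(message)) == message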


Read also

Xiaoyu Zhao, Wei Chen (2019)
With the phenomenal growth of the Internet of Things (IoT), Ultra Reliable Low Latency Communications (URLLC) has emerged as a potential enabler for meeting stringent requirements on latency and reliability. However, how to achieve low latency and ultra-reliability under random packet arrivals remains open. In this paper, a queue-aware variable-length channel coding scheme is presented for a single URLLC user link, in which the finite blocklength of the channel code is chosen based on the random arrivals. More specifically, a cross-layer approach is proposed that establishes the optimal tradeoff between latency and power consumption for the URLLC user. Using a probabilistic coding framework, the cross-layer variable-length coding is characterized by a Markov chain. The optimal delay-power tradeoff is then obtained by formulating an equivalent Linear Program (LP). Solving this LP shows that the delay-optimal variable-length coding has a threshold structure in the queue length.
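
As an illustrative sketch only (a hypothetical toy queue, not the paper's system model), the following Python code casts a delay-power tradeoff as an occupancy-measure linear program, mirroring the abstract's reduction to an equivalent LP: the state is the queue length, the action is the number of packets served per slot (a crude proxy for the coding blocklength), and the constraint bounds the mean queue length.

# Toy occupancy-measure LP for a delay-power tradeoff.  The queue model, action
# set and power costs below are assumptions made for this example.
import numpy as np
from scipy.optimize import linprog

Q_MAX = 6                                # queue-length cap
ACTIONS = [0, 1, 2]                      # packets served per slot (proxy for blocklength)
POWER = {0: 0.0, 1: 1.0, 2: 3.0}         # assumed convex power cost per action
ARRIVAL_P = 0.5                          # Bernoulli arrival probability per slot
DELAY_BUDGET = 1.5                       # bound on mean queue length (delay via Little's law)

states = list(range(Q_MAX + 1))
pairs = [(q, a) for q in states for a in ACTIONS]      # variable order of x[(q, a)]

def next_dist(q, a):
    """Distribution of the next queue length after service and one arrival epoch."""
    served = max(q - a, 0)
    dist = {}
    for arrivals, p in ((1, ARRIVAL_P), (0, 1 - ARRIVAL_P)):
        nq = min(served + arrivals, Q_MAX)
        dist[nq] = dist.get(nq, 0.0) + p
    return dist

c = np.array([POWER[a] for q, a in pairs])             # objective: expected power

A_eq, b_eq = [np.ones(len(pairs))], [1.0]              # occupancy measure sums to 1
for qp in states:                                      # stationarity (flow balance)
    row = np.zeros(len(pairs))
    for i, (q, a) in enumerate(pairs):
        row[i] += next_dist(q, a).get(qp, 0.0) - (1.0 if q == qp else 0.0)
    A_eq.append(row)
    b_eq.append(0.0)

A_ub = [np.array([float(q) for q, a in pairs])]        # mean queue length <= budget
b_ub = [DELAY_BUDGET]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * len(pairs))
print("minimum average power under the delay budget:", res.fun)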
In this paper we consider lossless source coding for a class of sources specified by a total variational distance ball centred at a fixed nominal probability distribution. The objective is to find a minimax average length source code, where the minimizers are the codeword lengths -- real numbers for arithmetic or Shannon codes -- while the maximizers are the source distributions from the total variational distance ball. Firstly, we examine the maximization of the average codeword length by converting it into an equivalent optimization problem, and we give the optimal codeword lengths via a waterfilling solution. Secondly, we show that the equivalent optimization problem can be solved via an optimal partition of the source alphabet, and re-normalization and merging of the fixed nominal probabilities. For the computation of the optimal codeword lengths we also develop a fast algorithm with a computational complexity of order $\mathcal{O}(n)$.
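
As a rough illustration of the inner maximization only (not the paper's waterfilling or merging algorithm), the Python sketch below computes, for fixed codeword lengths, the distribution inside a total-variation ball around an assumed nominal pmf that maximizes the average codeword length: probability mass is shifted from the shortest-codeword symbols to the longest-codeword symbol.

# Hedged sketch: worst-case average codeword length over a total-variation ball,
# for *fixed* codeword lengths.  The nominal pmf and lengths are made-up examples.
def worst_case_pmf(p, lengths, radius):
    """Shift up to `radius` of probability mass (the TV distance actually used)
    from short-codeword symbols to the longest-codeword symbol, which is the
    ball's extreme point for this linear objective."""
    q = list(p)
    target = max(range(len(p)), key=lambda i: lengths[i])   # longest codeword
    budget = radius
    for i in sorted(range(len(p)), key=lambda i: lengths[i]):
        if i == target or budget <= 0:
            continue
        moved = min(q[i], budget)
        q[i] -= moved
        q[target] += moved
        budget -= moved
    return q

p = [0.5, 0.25, 0.15, 0.10]          # nominal distribution (assumed)
lengths = [1, 2, 3, 3]               # e.g. prefix-code lengths matched to p
q = worst_case_pmf(p, lengths, radius=0.1)
print(sum(pi * li for pi, li in zip(p, lengths)),    # nominal average length
      sum(qi * li for qi, li in zip(q, lengths)))    # worst-case average length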
The error exponent of Markov channels with feedback is studied in the variable-length block-coding setting. Burnashev's classic result is extended, and a single-letter characterization of the reliability function of finite-state Markov channels is presented, under the assumption that the channel state is causally observed both at the transmitter and at the receiver side. Tools from stochastic control theory are used in order to treat channels with intersymbol interference. In particular, the convex analytical approach to Markov decision processes is adopted to handle problems with stopping-time horizons arising from variable-length coding schemes.
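
For context, the classic result of Burnashev that is extended here states that, for a discrete memoryless channel with noiseless feedback and variable-length coding, the reliability function is linear in the rate:

$$ E(R) = B\left(1 - \frac{R}{C}\right), \qquad B = \max_{x,x'} D\big(P(\cdot\mid x)\,\big\|\,P(\cdot\mid x')\big), \qquad 0 \le R \le C, $$

where C is the channel capacity and D is the Kullback-Leibler divergence between the output distributions induced by two input letters.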
We study whether using non-orthogonal multiple access (NOMA) in the uplink of a mobile network can improve the performance over orthogonal multiple access (OMA) when the system requires ultra-reliable low-latency communications (URLLC). To answer this question, we first consider an ideal system model with perfect channel state information (CSI) at the transmitter and long codewords, where we determine the optimal decoding orders when the decoder uses successive interference cancellation (SIC) and derive closed-form expressions for the optimal rate when joint decoding is used. While joint decoding performs well even under tight delay constraints, NOMA with SIC decoding often performs worse than OMA. For low-latency systems, we must also consider the impact of finite-length channel coding, as well as rate adaptation based on imperfect CSI. We derive closed-form approximations for the corresponding outage or error probabilities and find that these effects impose a larger performance penalty on NOMA than on OMA. Thus, NOMA with SIC decoding may often be unsuitable for URLLC.
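
For reference, closed-form error-probability approximations of this kind are typically built on the normal (finite-blocklength) approximation. The Python sketch below evaluates it for a single-user complex-AWGN link with assumed parameters; it is not the paper's NOMA-specific derivation.

# Normal approximation of the block error probability at finite blocklength n.
# SNR, rate and n below are illustrative assumptions.
from math import log2, sqrt, erfc, e

def awgn_block_error(snr, rate, n):
    """Approximate block error probability at blocklength n and rate in bits/channel use."""
    C = log2(1 + snr)                                # capacity of the complex AWGN channel
    V = (1 - 1 / (1 + snr) ** 2) * log2(e) ** 2      # channel dispersion (bits^2 per use)
    arg = (n * (C - rate) + 0.5 * log2(n)) / sqrt(n * V)
    return 0.5 * erfc(arg / sqrt(2))                 # Q-function via the complementary error function

print(awgn_block_error(snr=10 ** (5 / 10), rate=1.8, n=200))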
Symmetrical multilevel diversity coding (SMDC) is a classical model for coding over distributed storage. In this setting, a simple separate encoding strategy known as superposition coding was shown to be optimal in terms of achieving the minimum sum rate (Roche, Yeung, and Hau, 1997) and the entire admissible rate region (Yeung and Zhang, 1999) of the problem. The proofs relied on carefully constructed induction arguments, in which the classical subset entropy inequality of Han (1978) played a key role. This paper consists of two parts. In the first part, the existing optimality proofs for classical SMDC are revisited, with a focus on their connections to subset entropy inequalities. First, a new sliding-window subset entropy inequality is introduced and then used to establish the optimality of superposition coding for achieving the minimum sum rate under a weaker source-reconstruction requirement. Second, a subset entropy inequality recently proved by Madiman and Tetali (2010) is used to develop a new structural understanding of the proof of Yeung and Zhang on the optimality of superposition coding for achieving the entire admissible rate region. Building on the connections between classical SMDC and subset entropy inequalities developed in the first part, the second part extends the optimality of superposition coding to the cases where there is either an additional all-access encoder (SMDC-A) or an additional secrecy constraint (S-SMDC).
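
For reference, the classical subset entropy inequality of Han (1978) invoked above states that the normalized average subset entropies are non-increasing in the subset size: for random variables X_1, ..., X_n and X_S = (X_i : i in S),

$$ \frac{1}{k\binom{n}{k}} \sum_{S:\,|S|=k} H(X_S) \;\ge\; \frac{1}{(k+1)\binom{n}{k+1}} \sum_{S:\,|S|=k+1} H(X_S), \qquad 1 \le k < n. $$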