
Optimal Merging Algorithms for Lossless Codes with Generalized Criteria

Published by Themistoklis Charalambous
Publication date: 2011
Research field: Information engineering
Language: English





This paper presents lossless prefix codes optimized with respect to a pay-off criterion consisting of a convex combination of maximum codeword length and average codeword length. The optimal codeword lengths obtained are based on a new coding algorithm which transforms the initial source probability vector into a new probability vector according to a merging rule. The coding algorithm is equivalent to a partition of the source alphabet into disjoint sets on which a new transformed probability vector is defined as a function of the initial source probability vector and a scalar parameter. The pay-off criterion considered encompasses a trade-off between maximum and average codeword length; it is related to a pay-off criterion consisting of a convex combination of average codeword length and the average of an exponential function of the codeword length, and to an average codeword length pay-off criterion subject to a limited-length constraint. A special case of the first related pay-off is connected to coding problems involving source probability uncertainty and codeword overflow probability, while the second related pay-off complements limited-length Huffman coding algorithms.
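As a minimal illustration of the criterion itself (not the paper's merging algorithm, whose transformed probability vector depends on the scalar parameter and is not reproduced in the abstract), the following Python sketch evaluates the convex-combination pay-off and the integer Shannon lengths of a given probability vector; the names payoff and shannon_lengths are illustrative.

import math

def payoff(lengths, probs, alpha):
    # J(l) = alpha * max_i l_i + (1 - alpha) * sum_i p_i * l_i
    avg = sum(p * l for p, l in zip(probs, lengths))
    return alpha * max(lengths) + (1 - alpha) * avg

def shannon_lengths(probs):
    # Integer Shannon lengths for a (possibly transformed) pmf; they
    # satisfy Kraft's inequality and are within one bit of optimal.
    return [math.ceil(-math.log2(p)) for p in probs]

p = [0.5, 0.25, 0.125, 0.125]
l = shannon_lengths(p)               # [1, 2, 3, 3]
print(payoff(l, p, alpha=0.0))       # pure average length: 1.75
print(payoff(l, p, alpha=1.0))       # pure maximum length: 3

With alpha = 0 the criterion reduces to the average codeword length and with alpha = 1 to the maximum codeword length; the paper's algorithm covers the trade-off in between through the scalar parameter of the merging rule.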




Read also

In this paper we consider lossless source coding for a class of sources specified by the total variational distance ball centred at a fixed nominal probability distribution. The objective is to find a minimax average length source code, where the minimizers are the codeword lengths -- real numbers for arithmetic or Shannon codes -- while the maximizers are the source distributions from the total variational distance ball. Firstly, we examine the maximization of the average codeword length by converting it into an equivalent optimization problem, and we give the optimal codeword lengths via a waterfilling solution. Secondly, we show that the equivalent optimization problem can be solved via an optimal partition of the source alphabet, and re-normalization and merging of the fixed nominal probabilities. For the computation of the optimal codeword lengths we also develop a fast algorithm with a computational complexity of order $\mathcal{O}(n)$.
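The inner maximization described above is a linear program over the total variational ball. A minimal sketch, assuming total variation is taken as half the l1 distance, moves probability mass greedily from the shortest-codeword symbols to the longest-codeword symbol; this illustrates the maximizing distribution but is not the authors' $\mathcal{O}(n)$ waterfilling algorithm, and worst_case_avg_length is an illustrative name.

def worst_case_avg_length(p, lengths, R):
    # Maximize sum_i q_i * l_i over the simplex intersected with the
    # total-variation ball of radius R around p: shift up to R of mass
    # from the shortest-codeword symbols to the longest-codeword one.
    q = list(p)
    j = max(range(len(q)), key=lambda i: lengths[i])   # longest codeword
    budget = min(R, 1.0 - q[j])                        # movable mass
    for i in sorted(range(len(q)), key=lambda i: lengths[i]):
        if i == j or budget <= 0:
            continue
        take = min(q[i], budget)
        q[i] -= take
        q[j] += take
        budget -= take
    return q, sum(qi * li for qi, li in zip(q, lengths))

print(worst_case_avg_length([0.6, 0.3, 0.1], [1, 2, 4], R=0.1))
# ([0.5, 0.3, 0.2], 1.9) -- versus a nominal average length of 1.6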
We prove that, for the binary erasure channel (BEC), the polar-coding paradigm gives rise to codes that not only approach the Shannon limit but do so under the best possible scaling of their block length as a function of the gap to capacity. This result exhibits the first known family of binary codes that attain both optimal scaling and quasi-linear complexity of encoding and decoding. Our proof is based on the construction and analysis of binary polar codes with large kernels. When communicating reliably at rates within $\varepsilon > 0$ of capacity, the code length $n$ often scales as $O(1/\varepsilon^{\mu})$, where the constant $\mu$ is called the scaling exponent. It is known that the optimal scaling exponent is $\mu=2$, and it is achieved by random linear codes. The scaling exponent of conventional polar codes (based on the $2\times 2$ kernel) on the BEC is $\mu=3.63$. This falls far short of the optimal scaling guaranteed by random codes. Our main contribution is a rigorous proof of the following result: for the BEC, there exist $\ell\times\ell$ binary kernels, such that polar codes constructed from these kernels achieve scaling exponent $\mu(\ell)$ that tends to the optimal value of $2$ as $\ell$ grows. We furthermore characterize precisely how large $\ell$ needs to be as a function of the gap between $\mu(\ell)$ and $2$. The resulting binary codes maintain the recursive structure of conventional polar codes, and thereby achieve construction complexity $O(n)$ and encoding/decoding complexity $O(n\log n)$.
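The scaling-exponent statement can be made concrete with a little arithmetic: the block length implied by $n \sim c/\varepsilon^{\mu}$ grows much faster for $\mu = 3.63$ than for the optimal $\mu = 2$. In the sketch below the constant c is an assumption, since the abstract gives only the order of growth.

def blocklength_scale(eps, mu, c=1.0):
    # Block length implied by n ~ c / eps**mu; c is an assumed constant.
    return c / eps ** mu

for eps in (0.1, 0.01):
    print(eps,
          blocklength_scale(eps, mu=2.0),    # optimal (random linear codes)
          blocklength_scale(eps, mu=3.63))   # conventional polar on the BEC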
Braided convolutional codes (BCCs) are a class of spatially coupled turbo-like codes that can be described by a $(2,3)$-regular compact graph. In this paper, we introduce a family of $(d_v,d_c)$-regular GLDPC codes with convolutional code constraints (CC-GLDPC codes), which form an extension of classical BCCs to arbitrary regular graphs. In order to characterize the performance in the waterfall and error floor regions, we perform an analysis of the density evolution thresholds as well as the finite-length ensemble weight enumerators and minimum distances of the ensembles. In particular, we consider various ensembles of overall rate $R=1/3$ and $R=1/2$ and study the trade-off between variable node degree and strength of the component codes. We also compare the results to corresponding classical LDPC codes with equal degrees and rates. It is observed that for the considered LDPC codes with variable node degree $d_v>2$, we can find a CC-GLDPC code with smaller $d_v$ that offers similar or better performance in terms of BP and MAP thresholds at the expense of a negligible loss in the minimum distance.
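For context on the BP-threshold analysis mentioned above, here is a minimal density-evolution sketch for a plain $(d_v,d_c)$-regular LDPC ensemble on the BEC. The CC-GLDPC ensembles studied in the paper require a more elaborate recursion that tracks the component convolutional codes, so this is only the classical baseline the authors compare against.

def de_converges(eps, dv, dc, iters=2000, tol=1e-12):
    # Density evolution for a (dv, dc)-regular LDPC ensemble on the BEC:
    # x_{t+1} = eps * (1 - (1 - x_t)**(dc - 1))**(dv - 1)
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def bp_threshold(dv, dc, steps=40):
    # Bisect for the largest channel erasure probability with x_t -> 0.
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if de_converges(mid, dv, dc) else (lo, mid)
    return lo

print(bp_threshold(3, 6))   # ~0.4294 for the rate-1/2 (3,6) ensemble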
Streaming codes represent a packet-level FEC scheme for achieving reliable, low-latency communication. In the literature on streaming codes, the commonly assumed Gilbert-Elliott channel model is replaced by a more tractable, delay-constrained, sliding-window (DCSW) channel model that can introduce either random or burst erasures. The known streaming codes that are rate optimal over the DCSW channel model are constructed by diagonally embedding a scalar block code across successive packets. These code constructions have field size that is quadratic in the delay parameter $\tau$ and have a somewhat complex structure with an involved decoding procedure. This led to the introduction of simple streaming (SS) codes, in which diagonal embedding is replaced by staggered-diagonal embedding (SDE). The SDE approach reduces the impact of a burst of erasures and makes it possible to construct near-rate-optimal streaming codes using Maximum Distance Separable (MDS) codes having linear field size. The present paper takes this development one step further, by retaining the staggered-diagonal feature, but permitting the placement of more than one code symbol from a given scalar codeword within each packet. These generalized, simple streaming codes allow us to improve upon the rate of SS codes, while retaining the simplicity of working with MDS codes. We characterize the maximum code rate of streaming codes under a constraint on the number of contiguous packets over which symbols of the underlying scalar code are dispersed. Such a constraint leads to simplified code construction and reduced-complexity decoding.
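A minimal sketch of plain diagonal embedding, the baseline that SDE modifies: symbol j of scalar codeword t is carried in packet t + j, so each packet holds at most one symbol from any given codeword. The staggered variant and this paper's generalization (several symbols of one codeword per packet) are not reproduced here; diagonal_embed is an illustrative name.

def diagonal_embed(codewords):
    # Symbol j of scalar codeword t rides in packet t + j.
    packets = {}
    for t, cw in enumerate(codewords):
        for j, sym in enumerate(cw):
            packets.setdefault(t + j, []).append(sym)
    return packets

print(diagonal_embed([["a0", "a1", "a2"], ["b0", "b1", "b2"]]))
# {0: ['a0'], 1: ['a1', 'b0'], 2: ['a2', 'b1'], 3: ['b2']}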
High availability of containerized applications requires robust storage of application state. Since basic replication techniques are extremely costly at scale, storage space requirements can be reduced by means of erasure or repairing codes. In this paper we address storage regeneration using repair codes, a robust distributed storage technique with no need to fully restore the whole state in case of failure; in fact, only the lost servers' content is replaced. To do so, new clean-slate storage units are made operational, at a cost for activating new storage servers and a cost for the transfer of repair data. Our goal is to guarantee maximal availability of container state files by a given deadline while accounting for these activation and communication costs. Upon a fault occurring at a subset of the storage servers, we aim at ensuring that they are repaired by a given deadline. We introduce a controlled fluid model and derive the optimal activation policy to replace servers under such correlated faults. The solution concept is the optimal control of regeneration via the Pontryagin minimum principle. We characterise feasibility conditions and we prove that the optimal policy is of threshold type. Numerical results describe how to apply the model for system dimensioning and show the tradeoff between the activation and data-transfer costs.
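Since the optimal policy is of threshold (bang-bang) type, a toy simulation conveys the idea: repair runs at full rate until a switching time, then stops. All dynamics and parameter names below are assumptions for illustration; the paper's actual fluid model and cost functional are not given in the abstract.

def remaining_deficit(deficit, u_max, t_switch, deadline, dt=1e-3):
    # Threshold policy: repair at full rate u_max until t_switch, then
    # stop; returns the unrepaired fluid at the deadline.
    x, t = deficit, 0.0
    while t < deadline and x > 0.0:
        u = u_max if t < t_switch else 0.0
        x = max(0.0, x - u * dt)   # fluid drains at the activation rate
        t += dt
    return x

print(remaining_deficit(deficit=5.0, u_max=2.0, t_switch=2.0, deadline=3.0))
# ~1.0 -- this switching time fires too early to meet the deadline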