
Optimal Quasi-Gray Codes: The Alphabet Matters

Publication date: 2017
Language: English





A quasi-Gray code of dimension $n$ and length $\ell$ over an alphabet $\Sigma$ is a sequence of distinct words $w_1, w_2, \dots, w_\ell$ from $\Sigma^n$ such that any two consecutive words differ in at most $c$ coordinates, for some fixed constant $c>0$. In this paper we are interested in the read and write complexity of quasi-Gray codes in the bit-probe model, where we measure the number of symbols read and written in order to transform any word $w_i$ into its successor $w_{i+1}$. We present a construction of quasi-Gray codes of dimension $n$ and length $3^n$ over the ternary alphabet $\{0,1,2\}$ with worst-case read complexity $O(\log n)$ and write complexity $2$. This generalizes to arbitrary odd-size alphabets. For the binary alphabet, we present quasi-Gray codes of dimension $n$ and length at least $2^n - 20n$ with worst-case read complexity $6 + \log n$ and write complexity $2$. This complements a recent result by Raskin [Raskin 17], who shows that any quasi-Gray code over the binary alphabet of length $2^n$ has read complexity $\Omega(n)$. Our results significantly improve on previously known constructions, and for odd-size alphabets we break the $\Omega(n)$ worst-case barrier for space-optimal (non-redundant) quasi-Gray codes with a constant number of writes. We obtain our results via a novel application of algebraic tools together with the principles of catalytic computation [Buhrman et al. 14, Ben-Or and Cleve 92, Barrington 89, Coppersmith and Grossman 75].
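
For orientation, here is a minimal Python sketch of the classical binary reflected Gray code, which realizes the definition above with $c = 1$ (it is not the paper's construction): all $2^n$ words appear, and consecutive words differ in exactly one coordinate. Note that the standard increment rule for this code needs the parity of all $n$ bits to decide which bit to flip, so it reads the whole word in the worst case, which is precisely the read-complexity bottleneck the paper targets.

```python
def reflected_gray(n):
    """Classical binary reflected Gray code of dimension n: 2**n distinct
    words in which consecutive words differ in exactly one coordinate
    (a quasi-Gray code with c = 1)."""
    for i in range(2 ** n):
        yield i ^ (i >> 1)

def hamming(a, b):
    return bin(a ^ b).count("1")

words = list(reflected_gray(4))
assert len(set(words)) == 2 ** 4                  # all words distinct
assert all(hamming(words[i], words[i + 1]) == 1   # successors differ in 1 bit
           for i in range(len(words) - 1))
```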




Related Research

Michael B. Baer, 2007
Let $P = \{p(i)\}$ be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm, there are nontrivial $P$ for which known methods find a source code that is optimal in the sense of minimizing expected codeword length. For some applications, however, a source code should instead minimize one of a family of nonlinear objective functions, $\beta$-exponential means, those of the form $\log_a \sum_i p(i) a^{n(i)}$, where $n(i)$ is the length of the $i$th codeword and $a$ is a positive constant. Applications of such minimizations include a problem of maximizing the chance of message receipt in single-shot communications ($a<1$) and a problem of minimizing the chance of buffer overflow in a queueing system ($a>1$). This paper introduces methods for finding codes optimal for such exponential means. One method applies to geometric distributions, while another applies to distributions with lighter tails. The latter algorithm is applied to Poisson distributions. Both are extended to minimizing maximum pointwise redundancy.
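
As a small illustration of the objective function (not of the paper's algorithms), the following Python sketch evaluates the $\beta$-exponential mean $\log_a \sum_i p(i) a^{n(i)}$; the truncated distribution and codeword lengths below are invented for the example. As $a \to 1$ this mean tends to the ordinary expected codeword length.

```python
import math

def exponential_mean(p, lengths, a):
    """beta-exponential mean log_a( sum_i p(i) * a**n(i) ), where
    lengths[i] = n(i) is the length of the i-th codeword."""
    s = sum(pi * a ** ni for pi, ni in zip(p, lengths))
    return math.log(s, a)

# Toy (truncated) distribution and one candidate set of codeword lengths.
p = [0.5, 0.25, 0.125, 0.125]
n = [1, 2, 3, 3]
print(exponential_mean(p, n, a=2.0))   # a > 1: buffer-overflow regime
print(exponential_mean(p, n, a=0.5))   # a < 1: single-shot receipt regime
```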
Locally recoverable (LRC) codes have recently been a focal point of research in coding theory due to their theoretical appeal and applications in distributed storage systems. In an LRC code, any erased symbol of a codeword can be recovered by accessing only a small number of other symbols. For LRC codes over a small alphabet (such as binary), the optimal rate-distance trade-off is unknown. We present several new combinatorial bounds on LRC codes, including the locality-aware sphere-packing and Plotkin bounds. We also develop an approach to linear programming (LP) bounds on LRC codes. The resulting LP bound gives better estimates in examples than the other upper bounds known in the literature. Further, we provide the tightest known upper bound on the rate of linear LRC codes with a given relative distance, an improvement over the previous best known bounds.
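
As a toy sketch of the locality property itself (not of the paper's bounds), the snippet below builds a binary code in which every group of $r$ information symbols carries one XOR parity, so a single erased symbol is recovered by reading at most $r$ other symbols. The group size and helper names are illustrative assumptions, not from the paper.

```python
from functools import reduce
from operator import xor

def encode_lrc(message, r):
    """Toy binary code with locality r: append one XOR parity per group
    of r symbols, so each symbol is repairable within its own group."""
    codeword = []
    for i in range(0, len(message), r):
        group = message[i:i + r]
        codeword.extend(group + [reduce(xor, group)])
    return codeword

def repair(codeword, pos, r):
    """Recover the symbol at index pos by XOR-ing the rest of its group."""
    g = pos // (r + 1)
    group = enumerate(codeword[g * (r + 1):(g + 1) * (r + 1)],
                      start=g * (r + 1))
    return reduce(xor, (s for j, s in group if j != pos))

cw = encode_lrc([1, 0, 1, 1, 0, 0], r=3)
assert repair(cw, 1, r=3) == cw[1]   # repair reads 3 symbols, not the whole codeword
```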
This paper focuses on error-correcting codes that can handle a predefined set of specific error patterns. The need for such codes arises in many settings of practical interest, including wireless communication and flash memory systems. In many such settings, a smaller field size is achievable than that offered by MDS and other standard codes. We establish a connection between the minimum alphabet size for this generalized setting and the combinatorial properties of a hypergraph that represents the prespecified collection of error patterns. We also show a connection between error-correcting and erasure-correcting codes in this specialized setting. This allows us to establish bounds on the minimum alphabet size and to show an advantage of non-linear codes over linear codes in a generalized setting. We also consider a variation of the problem which allows a small probability of decoding error and relate it to an approximate version of hypergraph coloring.
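
The error/erasure connection rests in part on a classical fact worth keeping in mind: a code of minimum Hamming distance $d$ corrects $\lfloor (d-1)/2 \rfloor$ errors, or $d-1$ erasures (erased positions are known, so fewer redundant symbols suffice). A brute-force check on a toy repetition code, purely for illustration:

```python
def min_distance(code):
    """Minimum Hamming distance of a small block code, by brute force."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    return min(dist(a, b) for a in code for b in code if a != b)

rep3 = ["000", "111"]            # binary repetition code of length 3
assert min_distance(rep3) == 3   # d = 3: corrects 1 error, or 2 erasures
```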
Zero-error single-channel source coding has been studied extensively over the past decades. Its natural multi-channel generalization, however, is not well investigated. While the special case with multiple symmetric-alphabet channels was studied a decade ago, codes in that setting have no advantage over single-channel codes in data compression, making them worthless in most applications. With essentially no development in the past decade, in this paper we break the stalemate by showing that it is possible to beat single-channel source codes in terms of compression, assuming asymmetric-alphabet channels. We present the multi-channel analog of several classical results in single-channel source coding, such as that a multi-channel Huffman code is an optimal tree-decodable code. We also show some evidence that finding an efficient construction of multi-channel Huffman codes may be hard. Nevertheless, we propose a suboptimal code construction whose redundancy is guaranteed to be no larger than that of an optimal single-channel source code.
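
For reference, here is a compact Python sketch of the classical single-channel binary Huffman construction, the baseline the multi-channel codes above are compared against. The heap-of-merges formulation is standard; the function and variable names are ours.

```python
import heapq
import itertools

def huffman_lengths(probs):
    """Optimal binary codeword lengths for a finite distribution,
    via the classical Huffman merge procedure."""
    tie = itertools.count()                 # tie-breaker for equal probabilities
    heap = [(p, next(tie), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, ids1 = heapq.heappop(heap)   # merge the two least likely subtrees;
        p2, _, ids2 = heapq.heappop(heap)   # each merge deepens their leaves by one bit
        for i in ids1 + ids2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(tie), ids1 + ids2))
    return lengths

print(huffman_lengths([0.4, 0.3, 0.2, 0.1]))   # -> [1, 2, 3, 3]
```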
A Maximum Distance Separable (MDS) code over an alphabet $F$ is defined via an encoding function $C : F^k \rightarrow F^n$ that allows a message $m \in F^k$ to be retrieved from the codeword $C(m)$ even after erasing any $n-k$ of its symbols. The minimum possible alphabet size of general (non-linear) MDS codes for given parameters $n$ and $k$ is unknown and forms one of the central open problems in coding theory. The paper initiates the study of the alphabet size of codes in a generalized setting where the coding scheme is required to handle a pre-specified subset of all possible erasure patterns, naturally represented by an $n$-vertex $k$-uniform hypergraph. We relate the minimum possible alphabet size of such codes to the strong chromatic number of the hypergraph and analyze the tightness of the obtained bounds for both the linear and non-linear settings. We further consider variations of the problem which allow a small probability of decoding error.
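
To make the strong-coloring connection concrete, here is a small, purely illustrative greedy sketch (it yields an upper bound on the strong chromatic number and is not the paper's method): a strong coloring must assign distinct colors to all vertices that share a hyperedge, and the abstract relates the minimum number of colors needed to the minimum alphabet size.

```python
def greedy_strong_coloring(n, edges):
    """Greedy upper bound on the strong chromatic number of a hypergraph
    on vertices 0..n-1: vertices sharing a hyperedge get distinct colors."""
    color = {}
    for v in range(n):
        forbidden = {color[u] for e in edges if v in e for u in e if u in color}
        c = 0
        while c in forbidden:
            c += 1
        color[v] = c
    return color

# Toy erasure patterns as 3-uniform hyperedges on 5 coordinates.
edges = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}]
coloring = greedy_strong_coloring(5, edges)
print(coloring, "-> colors used:", len(set(coloring.values())))   # 3 colors
```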
