
Minimizing the alphabet size of erasure codes with restricted decoding sets

Added by Mira Gonen
Publication date: 2020
Publication language: English





A Maximum Distance Separable (MDS) code over an alphabet $F$ is defined via an encoding function $C : F^k \rightarrow F^n$ that allows one to retrieve a message $m \in F^k$ from the codeword $C(m)$ even after erasing any $n-k$ of its symbols. The minimum possible alphabet size of general (non-linear) MDS codes for given parameters $n$ and $k$ is unknown and forms one of the central open problems in coding theory. The paper initiates the study of the alphabet size of codes in a generalized setting where the coding scheme is required to handle a pre-specified subset of all possible erasure patterns, naturally represented by an $n$-vertex $k$-uniform hypergraph. We relate the minimum possible alphabet size of such codes to the strong chromatic number of the hypergraph and analyze the tightness of the obtained bounds for both the linear and non-linear settings. We further consider variations of the problem which allow a small probability of decoding error.
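
To make the MDS guarantee concrete, here is a minimal sketch of a Reed-Solomon code over a prime field, in which any $k$ of the $n$ codeword symbols recover the message via Lagrange interpolation. It illustrates only the classical baseline that the paper generalizes; the parameters $p = 13$, $n = 6$, $k = 3$ are arbitrary choices, not values from the paper.

```python
# A minimal Reed-Solomon sketch over GF(p): any k of the n codeword
# symbols determine the message, which is the MDS erasure guarantee.
p = 13
n, k = 6, 3

def poly_mul(a, b):
    # Multiply two polynomials (coefficient lists, low degree first) mod p.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def encode(msg):
    # Evaluate the degree-<k polynomial with coefficients msg at 0..n-1.
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
            for x in range(n)]

def decode(survivors):
    # Lagrange interpolation from any k surviving (position, symbol) pairs.
    coeffs = [0] * k
    for xj, yj in survivors:
        basis, denom = [1], 1
        for xm, _ in survivors:
            if xm != xj:
                basis = poly_mul(basis, [(-xm) % p, 1])  # times (x - xm)
                denom = denom * (xj - xm) % p
        scale = yj * pow(denom, p - 2, p) % p            # yj / denom in GF(p)
        coeffs = [(c + scale * b) % p for c, b in zip(coeffs, basis)]
    return coeffs

msg = [5, 2, 9]
cw = encode(msg)
# Erase any n - k = 3 symbols; here only positions 0, 2 and 5 survive.
assert decode([(x, cw[x]) for x in (0, 2, 5)]) == msg
```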

Related research


This paper focuses on error-correcting codes that can handle a predefined set of specific error patterns. The need for such codes arises in many settings of practical interest, including wireless communication and flash memory systems. In many such settings, a smaller field size is achievable than that offered by MDS and other standard codes. We establish a connection between the minimum alphabet size for this generalized setting and the combinatorial properties of a hypergraph that represents the prespecified collection of error patterns. We also show a connection between error and erasure correcting codes in this specialized setting. This allows us to establish bounds on the minimum alphabet size and show an advantage of non-linear codes over linear codes in a generalized setting. We also consider a variation of the problem which allows a small probability of decoding error and relate it to an approximate version of hypergraph coloring.
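As a toy illustration of the hypergraph view in the two abstracts above, the sketch below brute-forces the strong chromatic number of a small hypergraph of decoding sets: a coloring is strong when the vertices of every hyperedge receive pairwise-distinct colors, so at least $k$ colors are always needed. The edge set here is a made-up example; how this quantity controls the achievable alphabet size is precisely what the papers analyze.

```python
from itertools import product

# Decoding sets as a k-uniform hypergraph on n vertices: each edge
# lists k coordinates from which the message must be recoverable.
n, k = 5, 3
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]   # hypothetical pattern collection

def strong_chromatic_number(n, edges):
    # Smallest c admitting a coloring in which every edge is rainbow
    # (all of its vertices get distinct colors); brute force, so only
    # suitable for tiny instances.
    for c in range(1, n + 1):
        for coloring in product(range(c), repeat=n):
            if all(len({coloring[v] for v in e}) == len(e) for e in edges):
                return c
    return n

print(strong_chromatic_number(n, edges))    # 3 for this edge set
```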
The multiplicity Schwartz-Zippel lemma bounds the total multiplicity of zeroes of a multivariate polynomial on a product set. This lemma motivates the multiplicity codes of Kopparty, Saraf and Yekhanin [J. ACM, 2014], who showed how to use this lemma to construct high-rate locally-decodable codes. However, the algorithmic results about these codes crucially rely on the fact that the polynomials are evaluated on a vector space and not an arbitrary product set. In this work, we show how to decode multivariate multiplicity codes of large multiplicities in polynomial time over finite product sets (over fields of large characteristic and zero characteristic). Previously such decoding algorithms were not known even for a positive fraction of errors. In contrast, our work goes all the way to the distance of the code and in particular exceeds both the unique decoding bound and the Johnson bound. For errors exceeding the Johnson bound, even combinatorial list-decodability of these codes was not known. Our algorithm is an application of the classical polynomial method directly to the multivariate setting. In particular, we do not rely on a reduction from the multivariate to the univariate case as is typical of many of the existing results on decoding codes based on multivariate polynomials. However, a vanilla application of the polynomial method in the multivariate setting does not yield a polynomial upper bound on the list size. We obtain a polynomial bound on the list size by taking an alternative view of multivariate multiplicity codes. In this view, we glue all the partial derivatives of the same order together using a fresh set $z$ of variables. We then apply the polynomial method by viewing this as a problem over the field $\mathbb{F}(z)$ of rational functions in $z$.
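For intuition about what the multiplicity Schwartz-Zippel lemma counts, the sketch below encodes a univariate polynomial in a multiplicity-2 code: each evaluation point contributes the pair $(f(a), f'(a))$, so two codewords that agree at a point force a zero of multiplicity at least two in the difference of their polynomials. This toy uses the formal derivative over the integers; the codes in the paper are multivariate, over finite product sets, and use Hasse derivatives.

```python
def derivative(coeffs):
    # Formal derivative of a polynomial given low-degree-first.
    return [i * c for i, c in enumerate(coeffs)][1:]

def evaluate(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

f = [1, 0, -3, 2]                 # f(x) = 1 - 3x^2 + 2x^3
points = [0, 1, 2, 3, 4]          # a product set in one variable
df = derivative(f)

# Each codeword symbol packs the value and first derivative at one point.
codeword = [(evaluate(f, a), evaluate(df, a)) for a in points]
print(codeword)
```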
We address the problem of decoding Gabidulin codes beyond their unique error-correction radius. The complexity of this problem is of importance to assess the security of some rank-metric code-based cryptosystems. We propose an approach that introduces row or column erasures to decrease the rank of the error in order to use any proper polynomial-time Gabidulin code error-erasure decoding algorithm. This approach improves on generic rank-metric decoders by an exponential factor.
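The lever this approach pulls can be seen in a toy rank computation over GF(2): declaring a row of the error matrix erased (modeled here by zeroing it) can only lower the error's rank, potentially bringing the remaining error back within the radius of an error-erasure decoder. This is not a Gabidulin decoder, and the matrix is a made-up example.

```python
def gf2_rank(rows):
    # Gaussian elimination over GF(2); each 0/1 row is packed into an int.
    pivots = {}                          # leading-bit position -> pivot row
    for row in rows:
        r = int("".join(map(str, row)), 2)
        while r:
            top = r.bit_length() - 1
            if top not in pivots:
                pivots[top] = r
                break
            r ^= pivots[top]             # cancel the leading bit and retry
    return len(pivots)

E = [[1, 0, 1, 0],                       # a rank-3 "error" matrix
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
print(gf2_rank(E))                       # 3
E[2] = [0, 0, 0, 0]                      # declare the third row erased
print(gf2_rank(E))                       # 2: the rank drops by one
```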
Few decoding algorithms for hyperbolic codes are known in the literature; this article tries to fill this gap. The first part of this work compares hyperbolic codes and Reed-Muller codes. In particular, we determine when a Reed-Muller code is a hyperbolic code. As a byproduct, we state when a hyperbolic code has greater dimension than a Reed-Muller code when they both have the same minimum distance. We use the previous ideas to describe how to decode a hyperbolic code using the largest Reed-Muller code contained in it, or alternatively using the smallest Reed-Muller code that contains it. A combination of these two algorithms is proposed for the case when hyperbolic codes are defined by polynomials in two variables. Then, we compare hyperbolic codes and Cube codes (tensor products of Reed-Solomon codes) and we propose decoding algorithms for hyperbolic codes based on their closest Cube codes. Finally, we adapt to hyperbolic codes Geil and Matsumoto's generalization of Sudan's list decoding algorithm.
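To see the dimension comparison concretely, the sketch below counts monomials under one common convention for these evaluation codes on $F_q^m$ (an assumption of this sketch, not a definition taken from the article): Reed-Muller keeps the monomials $x^a$ with $\sum_i a_i \le r$, while the hyperbolic code keeps those with $\prod_i (q - a_i) \ge d$, both targeting designed minimum distance $d = (q - r)q^{m-1}$.

```python
from itertools import product
from math import prod

q, m = 8, 2                     # exponents range over 0..q-1 per variable
r = 6                           # Reed-Muller total-degree bound
d = (q - r) * q ** (m - 1)      # shared designed distance (= 16 here)

exps = list(product(range(q), repeat=m))
rm = {a for a in exps if sum(a) <= r}                     # Reed-Muller monomials
hyp = {a for a in exps if prod(q - ai for ai in a) >= d}  # hyperbolic monomials

print(len(rm), len(hyp))   # 28 vs 33: same designed distance, larger dimension
print(rm <= hyp)           # True: Reed-Muller sits inside the hyperbolic code
```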
Locally recoverable (LRC) codes have recently been a focus point of research in coding theory due to their theoretical appeal and applications in distributed storage systems. In an LRC code, any erased symbol of a codeword can be recovered by accessing only a small number of other symbols. For LRC codes over a small alphabet (such as binary), the optimal rate-distance trade-off is unknown. We present several new combinatorial bounds on LRC codes including the locality-aware sphere packing and Plotkin bounds. We also develop an approach to linear programming (LP) bounds on LRC codes. The resulting LP bound gives better estimates in examples than the other upper bounds known in the literature. Further, we provide the tightest known upper bound on the rate of linear LRC codes with a given relative distance, an improvement over the previous best known bounds.
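For context alongside these new bounds, the snippet below evaluates the classical Singleton-type bound for locally recoverable codes (Gopalan et al., 2012), $d \le n - k - \lceil k/r \rceil + 2$, which holds over any alphabet; the bounds developed in the paper target the small-alphabet regime, where this field-size-independent bound is generally not tight.

```python
from math import ceil

def lrc_singleton_bound(n, k, r):
    # Singleton-type bound for an (n, k) LRC with locality r:
    # every achievable distance d satisfies d <= n - k - ceil(k / r) + 2.
    return n - k - ceil(k / r) + 2

# Example: length 16, dimension 8, locality 2.
print(lrc_singleton_bound(16, 8, 2))    # d <= 6
```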
