
Constructions of maximally recoverable local reconstruction codes via function fields

 Added by Lingfei Jin
 Publication date 2018
 Research language: English





Local Reconstruction Codes (LRCs) allow for recovery from a small number of erasures in a local manner based on just a few other codeword symbols. A maximally recoverable (MR) LRC offers the best possible blend of such local and global fault tolerance, guaranteeing recovery from all erasure patterns which are information-theoretically correctable given the presence of local recovery groups. In an $(n,r,h,a)$-LRC, the $n$ codeword symbols are partitioned into $n/r$ disjoint groups of size $r$, each of which includes $a$ local parity checks capable of locally correcting $a$ erasures; the codeword symbols further obey $h$ global parity checks. MR LRCs have received much attention recently, with many explicit constructions covering different regimes of parameters. Unfortunately, all known constructions require a field size that is exponential in $h$ or $a$, and it is of interest to obtain MR LRCs over fields of minimal possible size. In this work, we develop an approach based on function fields to construct MR LRCs. Our method recovers, and in most parameter regimes improves, the field size of previous approaches. For instance, for the case of small $r \ll \epsilon \log n$ and large $h \ge \Omega(n^{1-\epsilon})$, we improve the field size from roughly $n^h$ to $n^{\epsilon h}$. For the case of $a=1$ (one local parity check), we improve the field size quadratically from $r^{h(h+1)}$ to $r^{h \lfloor (h+1)/2 \rfloor}$ for some range of $r$. The improvements are modest, but more importantly they are obtained in a unified manner via a promising new idea.
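As a concrete illustration of the recoverability guarantee above, the following Python sketch (not from the paper; the coordinate grouping and function name are assumptions for illustration) checks whether an erasure pattern falls within the class an MR LRC is guaranteed to correct: at most $a$ erasures handled locally in each group, plus at most $h$ additional erasures anywhere in the codeword.

```python
# Sketch: is an erasure pattern information-theoretically correctable by an
# (n, r, h, a)-MR LRC?  Following the definition above, a pattern is
# correctable iff, after charging up to a erasures to each local group,
# at most h erasures remain for the global parity checks to handle.
# Assumption for illustration: group g consists of coordinates g*r .. g*r+r-1.

def mr_correctable(erased, n, r, h, a):
    assert n % r == 0, "coordinates are split into n/r groups of size r"
    erased = set(erased)
    extra = 0
    for g in range(n // r):
        in_group = sum(1 for c in range(g * r, (g + 1) * r) if c in erased)
        extra += max(0, in_group - a)   # erasures beyond the a local checks
    return extra <= h

# Example with n=12, r=4 (three groups), a=1, h=2:
print(mr_correctable({0, 1, 5, 9}, n=12, r=4, h=2, a=1))     # True  (1 extra)
print(mr_correctable({0, 1, 2, 5, 6}, n=12, r=4, h=2, a=1))  # False (3 extras)
```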



Related research

An $(m,n,a,b)$-tensor code consists of $m\times n$ matrices whose columns satisfy $a$ parity checks and whose rows satisfy $b$ parity checks (i.e., a tensor code is the tensor product of a column code and a row code). Tensor codes are useful in distributed storage because a single erasure can be corrected quickly either by reading its row or its column. Maximally Recoverable (MR) Tensor Codes, introduced by Gopalan et al., are tensor codes which can correct every erasure pattern that is information-theoretically possible to correct. The main questions about MR Tensor Codes are characterizing which erasure patterns are correctable and obtaining explicit constructions over small fields. In this paper, we study the important special case when $a=1$, i.e., the columns satisfy a single parity check equation. We introduce the notion of higher order MDS codes (MDS$(\ell)$ codes), which is an interesting generalization of the well-known MDS codes, where $\ell$ captures the order of genericity of points in a low-dimensional space. We then prove that a tensor code with $a=1$ is MR iff the row code is an MDS$(m)$ code. We then show that MDS$(m)$ codes satisfy some weak duality. Using this characterization and duality, we prove that $(m,n,a=1,b)$-MR tensor codes require fields of size $q=\Omega_{m,b}(n^{\min\{b,m\}-1})$. Our lower bound also extends to the setting of $a>1$. We also give a deterministic polynomial time algorithm to check if a given erasure pattern is correctable by the MR tensor code (when $a=1$).
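To make the tensor-product structure concrete, here is a small Python sketch (illustrative only; the specific $m=3$, $n=4$ single-parity-check row and column codes are assumptions, not a construction from the paper). Every row of a codeword matrix satisfies the $b=1$ row parity check and every column the $a=1$ column parity check, so a single erased entry can be recovered from either its row or its column, as described above.

```python
# Sketch: a tiny binary (m=3, n=4, a=1, b=1) tensor code, i.e. the tensor
# product of a single-parity-check column code and a single-parity-check
# row code.  A lone erased entry is locally recoverable from its row or
# from its column.

m, n = 3, 4  # codewords are m x n matrices over GF(2)

def encode(data):
    """data: (m-1) x (n-1) information bits; last row/column hold parities."""
    cw = [[0] * n for _ in range(m)]
    for i in range(m - 1):
        for j in range(n - 1):
            cw[i][j] = data[i][j]
    for i in range(m - 1):                       # b = 1 row parity checks
        cw[i][n - 1] = sum(cw[i][:n - 1]) % 2
    for j in range(n):                           # a = 1 column parity checks
        cw[m - 1][j] = sum(cw[i][j] for i in range(m - 1)) % 2
    return cw

def recover_entry(cw, i, j):
    """Recover an erased entry (i, j) from its row, or equally from its column."""
    from_row = sum(cw[i][jj] for jj in range(n) if jj != j) % 2
    from_col = sum(cw[ii][j] for ii in range(m) if ii != i) % 2
    assert from_row == from_col == cw[i][j]
    return from_row

cw = encode([[1, 0, 1], [0, 1, 1]])
print(recover_entry(cw, 1, 2))  # reconstructs the erased bit two independent ways
```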
An $(n,r,h,a,q)$-Local Reconstruction Code is a linear code over $\mathbb{F}_q$ of length $n$, whose codeword symbols are partitioned into $n/r$ local groups each of size $r$. Each local group satisfies $a$ local parity checks to recover from $a$ erasures in that local group, and there are further $h$ global parity checks to provide fault tolerance from more global erasure patterns. Such an LRC is Maximally Recoverable (MR) if it offers the best blend of locality and global erasure resilience -- namely, it can correct all erasure patterns whose recovery is information-theoretically feasible given the locality structure (these are precisely the patterns with up to $a$ erasures in each local group and an additional $h$ erasures anywhere in the codeword). Random constructions can easily show the existence of MR LRCs over very large fields, but a major algebraic challenge is to construct MR LRCs, or even show their existence, over smaller fields, as well as to understand inherent lower bounds on their field size. We give an explicit construction of $(n,r,h,a,q)$-MR LRCs with field size $q$ bounded by $\left(O\left(\max\{r,n/r\}\right)\right)^{\min\{h,r-a\}}$. This improves upon known constructions in many relevant parameter ranges. Moreover, it matches the lower bound from Gopi et al. (2020) in an interesting range of parameters where $r=\Theta(\sqrt{n})$, $r-a=\Theta(\sqrt{n})$, and $h$ is a fixed constant with $h\le a+2$, achieving the optimal field size of $\Theta_{h}(n^{h/2})$. Our construction is based on the theory of skew polynomials. We believe skew polynomials should have further applications in coding and complexity theory; as a small illustration, we show how to capture algebraic results underlying list decoding of folded Reed-Solomon and multiplicity codes in a unified way within this theory.
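As a quick numerical check of the parameter regime mentioned above (constants suppressed; the concrete values below are assumptions for illustration), the field-size bound $(\max\{r,n/r\})^{\min\{h,r-a\}}$ can be evaluated alongside the optimal order $n^{h/2}$ when $r=\Theta(\sqrt{n})$, $r-a=\Theta(\sqrt{n})$, and $h$ is a small constant with $h\le a+2$:

```python
# Sketch (constants suppressed): compare the construction's field-size bound
# (max(r, n/r))^min(h, r-a) with the optimal order n^(h/2) in the regime
# r ~ sqrt(n), r - a ~ sqrt(n), fixed small h.

import math

def field_size_bound(n, r, h, a):
    return max(r, n // r) ** min(h, r - a)

n = 10**6
r = math.isqrt(n)        # r ~ sqrt(n), so there are n/r = r local groups
a = r // 2               # r - a ~ sqrt(n)
h = 3                    # fixed constant, well below a + 2

print(field_size_bound(n, r, h, a))  # 10**9, i.e. (sqrt(n))^h
print(round(n ** (h / 2)))           # 10**9, the Theta_h(n^(h/2)) lower-bound order
```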
We construct maximally recoverable codes (corresponding to partial MDS codes) which are based on linearized Reed-Solomon codes. The new codes have a smaller field size requirement compared with known constructions. For certain asymptotic regimes, the constructed codes have order-optimal alphabet size, asymptotically matching the known lower bound.
Carlos Munuera, 2018
A locally recoverable code is an error-correcting code such that any erasure in a coordinate of a codeword can be recovered from a small set of other coordinates. In this article we introduce a model of locally recoverable codes that also includes local error detection. The cases of Reed-Solomon and Locally Recoverable Reed-Solomon codes are treated in some detail.
Arya Mazumdar, 2018
Motivated by applications in distributed storage, the notion of a locally recoverable code (LRC) was introduced a few years back. In an LRC, any coordinate of a codeword is recoverable by accessing only a small number of other coordinates. While different properties of LRCs have been well-studied, their performance on channels with random erasures or errors has been mostly unexplored. In this note, we analyze the performance of LRCs over such stochastic channels. In particular, for input-symmetric discrete memoryless channels, we give a tight characterization of the gap to Shannon capacity when LRCs are used over the channel.