
Protograph-Based Decoding of LDPC Codes with Hamming Weight Amplifiers

Added by Hannes Bartz · Publication date: 2020 · Language: English





A new protograph-based framework for message passing (MP) decoding of low-density parity-check (LDPC) codes with Hamming weight amplifiers (HWAs), which are used, e.g., in the NIST post-quantum cryptography candidate LEDAcrypt, is proposed. The scheme exploits the correlations in the error patterns introduced by the HWA via a turbo-like decoding approach in which messages are exchanged between the decoder for the outer code given by the HWA and the decoder for the inner LDPC code. Decoding thresholds for the proposed scheme are computed using density evolution (DE) analysis for belief propagation (BP) and ternary message passing (TMP) decoding and compared to existing decoding approaches. The proposed scheme improves upon the basic approach of decoding the LDPC code from the amplified error and performs similarly to decoding the corresponding moderate-density parity-check (MDPC) code, but at significantly lower computational complexity.
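The abstract leans on density evolution to obtain decoding thresholds. As a minimal illustration of the DE technique itself (for a regular (dv, dc) LDPC ensemble over the binary erasure channel, not the HWA-aware turbo scheme or the TMP decoder analyzed in the paper), the following Python sketch iterates the standard erasure-probability recursion and bisects for the BP threshold; all names and parameters are illustrative:

    # Density-evolution sketch for a regular (dv, dc) LDPC ensemble over the
    # binary erasure channel (BEC). Generic illustration of DE, not the
    # HWA-aware analysis of the paper.

    def de_converges(eps, dv, dc, iters=2000, tol=1e-12):
        """True if the erasure probability is driven to ~0 at channel erasure eps."""
        x = eps
        for _ in range(iters):
            # One DE iteration: x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1)
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True
        return False

    def bp_threshold(dv, dc, lo=0.0, hi=1.0, steps=40):
        """Bisect for the largest channel erasure probability where DE converges."""
        for _ in range(steps):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if de_converges(mid, dv, dc) else (lo, mid)
        return lo

    # The regular (3,6) ensemble has a BP threshold of ~0.4294 on the BEC.
    print(f"(3,6) BEC threshold ~ {bp_threshold(3, 6):.4f}")

Bisection works here because DE convergence is monotone in the channel parameter; the paper's thresholds are computed analogously over the channels induced by the HWA.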

Related Research

This article discusses the decoding of Gabidulin codes and shows how to extend the usual decoder to any supercode of a Gabidulin code, at the cost of a significant decrease in the decoding radius. Using this decoder, we provide polynomial-time attacks on the rank-metric encryption schemes RAMESSES and LIGA.
The recent development of deep learning methods provides a new approach to optimizing the belief propagation (BP) decoding of linear codes. However, existing works are limited in that the scale of the neural networks grows rapidly with the codelength, so they can only support short to moderate codelengths. From a practical point of view, we propose a high-performance neural min-sum (MS) decoding method that makes full use of the lifting structure of protograph low-density parity-check (LDPC) codes. In this way, the size of the parameter array in each layer of the neural decoder equals the number of edge-types, independent of the codelength. In particular, for protograph LDPC codes, the proposed neural MS decoder is constructed such that identical parameters are shared by the bundle of edges derived from the same edge-type. To reduce complexity and overcome the vanishing-gradient problem in training the proposed neural MS decoder, an iteration-by-iteration (i.e., layer-by-layer in the neural network) greedy training method is proposed. With this, the proposed neural MS decoder tends to be optimized with faster convergence, which aligns with the early-termination mechanism widely used in practice. To further enhance the generalization ability of the proposed neural MS decoder, a codelength/rate-compatible training method is proposed, which randomly selects samples from a set of codes lifted from the same base code. As a theoretical performance evaluation tool, a trajectory-based extrinsic information transfer (T-EXIT) chart is developed for various decoders. Both T-EXIT and simulation results show that the optimized MS decoding can provide faster convergence and up to 1 dB of gain compared with plain MS decoding and its variants, at only slightly increased complexity. In addition, it can even outperform the sum-product algorithm for some short codes.
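As a rough sketch of the edge-type weight sharing described above, the Python snippet below lifts a toy base matrix into a QC-LDPC parity-check matrix, tags every edge with the base-matrix entry it descends from, and runs min-sum decoding in which each check-to-variable message is scaled by a weight indexed by its edge-type. The base matrix, shifts, and weights are hypothetical toy values, and the (normally trained) weights are simply set to a constant; this is not the authors' implementation.

    import numpy as np

    def lift(B, Z, shifts):
        """Lift base matrix B by Z x Z circulant permutations (toy QC-LDPC lifting)."""
        m, n = B.shape
        H = np.zeros((m * Z, n * Z), dtype=int)
        etype = -np.ones((m * Z, n * Z), dtype=int)   # edge-type id per nonzero of H
        for i in range(m):
            for j in range(n):
                if B[i, j]:
                    P = np.roll(np.eye(Z, dtype=int), shifts[i][j], axis=1)
                    H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = P
                    etype[i*Z:(i+1)*Z, j*Z:(j+1)*Z][P == 1] = i * n + j
        return H, etype

    def weighted_min_sum(H, etype, w, llr_ch, iters=10):
        """Min-sum decoding where each check-to-variable message is scaled by the
        weight w[t] shared by all edges of edge-type t, so the number of
        parameters is independent of the codelength."""
        V = H * llr_ch                                 # variable-to-check messages
        for _ in range(iters):
            C = np.zeros(H.shape)                      # check-to-variable messages
            for i in range(H.shape[0]):
                idx = np.flatnonzero(H[i])
                msgs = V[i, idx]
                for k, j in enumerate(idx):
                    others = np.delete(msgs, k)
                    C[i, j] = (w[etype[i, j]]          # per-edge-type scaling
                               * np.prod(np.sign(others)) * np.min(np.abs(others)))
            total = llr_ch + C.sum(axis=0)             # posterior LLRs
            V = H * (total - C)                        # extrinsic v-to-c update
        return (total < 0).astype(int)                 # hard decision

    B = np.array([[1, 1, 1],
                  [1, 1, 0]])                          # toy base matrix: 5 edge-types
    shifts = [[0, 1, 2], [2, 0, 0]]                    # toy circulant shifts
    H, etype = lift(B, 3, shifts)
    w = np.full(B.size, 0.8)                           # one weight per edge-type slot
    llr = np.random.default_rng(0).normal(2.0, 1.0, H.shape[1])  # all-zero codeword
    print(weighted_min_sum(H, etype, w, llr))          # expect mostly/all zeros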
Eshed Ram, Yuval Cassuto (2018)
This paper presents a theoretical study of a new type of LDPC codes motivated by practical storage applications. LDPCL codes (suffix L represents locality) are LDPC codes that can be decoded either as usual over the full code block, or locally when a smaller sub-block is accessed (to reduce latency). LDPCL codes are designed to maximize the error-correction performance vs. rate in the usual (global) mode, while at the same time providing a certain performance in the local mode. We develop a theoretical framework for the design of LDPCL codes. Our results include a design tool to construct an LDPC code with two data-protection levels: local and global. We derive theoretical results supporting this tool and we show how to achieve capacity with it. A trade-off between the gap to capacity and the number of full-block accesses is studied, and a finite-length analysis of ML decoding is performed to exemplify a trade-off between the locality capability and the full-block error-correcting capability.
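The two-level protection idea lends itself to a concrete picture: local parity checks that touch only one sub-block, stacked with global checks spanning the full block. The following Python sketch builds such a parity-check matrix from hypothetical toy components; it illustrates the general local/global structure, not the paper's actual LDPCL construction.

    import numpy as np

    # Toy two-level parity-check matrix: block-diagonal local checks (each
    # decodable from a single sub-block) stacked on global checks that span
    # the whole block. All matrices here are hypothetical toy values.

    def two_level_parity_check(H_local, num_subblocks, H_global):
        """Return H = [diag(H_local, ..., H_local); H_global]."""
        n_local = H_local.shape[1]
        assert H_global.shape[1] == n_local * num_subblocks
        top = np.kron(np.eye(num_subblocks, dtype=int), H_local)  # local rows
        return np.vstack([top, H_global])

    H_local = np.array([[1, 1, 0, 1],
                        [0, 1, 1, 1]])                            # toy 2x4 local code
    H_global = np.random.default_rng(1).integers(0, 2, (2, 12))   # toy global rows
    H = two_level_parity_check(H_local, 3, H_global)
    print(H.shape)                                                # (8, 12)

    # Local mode: to serve sub-block b with low latency, a decoder uses only
    # the rows of the diagonal part restricted to columns [4b, 4b+4); the
    # global rows are consulted only in full-block (global) mode.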
We consider families of codes obtained by lifting a base code $\mathcal{C}$ through operations such as $k$-XOR applied to local views of codewords of $\mathcal{C}$, according to a suitable $k$-uniform hypergraph. The $k$-XOR operation yields the direct sum encoding used in the works of [Ta-Shma, STOC 2017] and [Dinur and Kaufman, FOCS 2017]. We give a general framework for list decoding such lifted codes, as long as the base code admits a unique decoding algorithm and the hypergraph used for lifting satisfies certain expansion properties. We show that these properties are satisfied by the collection of length-$k$ walks on an expander graph and by hypergraphs corresponding to high-dimensional expanders. Instantiating our framework, we obtain list decoding algorithms for direct sum liftings on the above hypergraph families. Using known connections between direct sum and direct product, we also recover the recent results of Dinur et al. [SODA 2019] on list decoding for direct product liftings. Our framework relies on relaxations given by the Sum-of-Squares (SOS) SDP hierarchy for solving various constraint satisfaction problems (CSPs). We view the problem of recovering the closest codeword to a given word as finding the optimal solution of a CSP. Constraints in the instance correspond to edges of the lifting hypergraph, and the solutions are restricted to lie in the base code $\mathcal{C}$. We show that recent algorithms for (approximately) solving CSPs on certain expanding hypergraphs also yield a decoding algorithm for such lifted codes. We extend the framework to list decoding by requiring the SOS solution to minimize a convex proxy for negative entropy. We show that this ensures a covering property for the SOS solution, and the "condition and round" approach used in several SOS algorithms can then be used to recover the required list of codewords.
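The direct sum ($k$-XOR) lifting at the heart of this framework is simple to state concretely: each hyperedge of the $k$-uniform hypergraph contributes one lifted symbol, the XOR of the base codeword's entries on that edge. A minimal Python sketch with a toy base code and toy hyperedges (both illustrative choices, not the expander-walk instantiations of the paper):

    from itertools import product

    # k-XOR direct sum lifting: one lifted bit per hyperedge, equal to the
    # XOR of the base codeword's bits in that hyperedge's local view.
    # The base code and hyperedges below are toy choices.

    def direct_sum_lift(codeword, hyperedges):
        return [sum(codeword[i] for i in edge) % 2 for edge in hyperedges]

    # Toy base code: the [3, 2] single parity-check code.
    base_code = [bits + (sum(bits) % 2,) for bits in product((0, 1), repeat=2)]
    # A 2-uniform "hypergraph" (the triangle on three vertices).
    hyperedges = [(0, 1), (1, 2), (0, 2)]

    for c in base_code:
        print(c, "->", tuple(direct_sum_lift(c, hyperedges)))

List decoding then asks for all base codewords whose lifting is close to a received word; in the SOS view, each hyperedge becomes one CSP constraint.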
Jihao Fan, Jun Li, Ya Wang (2021)
We utilize a concatenation scheme to construct new families of quantum error correction codes that include the Bacon-Shor codes. We show that our scheme can lead to asymptotically good quantum codes while Bacon-Shor codes cannot. Further, the concatenation scheme allows us to derive quantum LDPC codes of distance $\Omega(N^{2/3}/\log\log N)$, which improves Hastings' recent result [arXiv:2102.10030] by a polylogarithmic factor. Moreover, assisted by the Evra-Kaufman-Zémor distance-balancing construction, our concatenation scheme can yield quantum LDPC codes with non-vanishing code rates and a better minimum-distance upper bound than the hypergraph product quantum LDPC codes. Finally, we derive a family of fast encodable and decodable quantum concatenated codes with parameters $\mathcal{Q} = [[N, \Omega(\sqrt{N}), \Omega(\sqrt{N})]]$, which also belong to the Bacon-Shor codes. We show that $\mathcal{Q}$ can be encoded very efficiently by circuits of size $O(N)$ and depth $O(\sqrt{N})$, and can correct any adversarial error of weight up to half the minimum distance bound in $O(\sqrt{N})$ time. To the best of our knowledge, these are by far the most powerful quantum codes for correcting so many adversarial errors in sublinear time.
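The driving mechanism is code concatenation. As a minimal illustration of the standard parameter arithmetic (the textbook rule for concatenating an outer $[[n_1, k_1, d_1]]$ code with an inner $[[n_2, 1, d_2]]$ code, not this paper's specific scheme), the sketch below composes parameters and applies the distance lower bound $d \geq d_1 d_2$:

    # Parameter bookkeeping for standard quantum code concatenation: encode
    # each physical qubit of an outer [[n1, k1, d1]] code with an inner
    # [[n2, 1, d2]] code, giving an [[n1*n2, k1, >= d1*d2]] code.
    # The example parameters are illustrative.

    def concatenate(outer, inner):
        (n1, k1, d1), (n2, k2, d2) = outer, inner
        assert k2 == 1, "inner code must encode a single logical qubit here"
        return (n1 * n2, k1, d1 * d2)   # third entry is a distance lower bound

    # Steane [[7,1,3]] concatenated with itself: [[49, 1, >= 9]].
    print(concatenate((7, 1, 3), (7, 1, 3)))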