
Exhausting Error-Prone Patterns in LDPC Codes

Added by Chih-Chun Wang
Publication date: 2006
Language: English





It is proved in this work that exhaustively determining bad patterns in arbitrary, finite low-density parity-check (LDPC) codes, including stopping sets for binary erasure channels (BECs) and trapping sets (also known as near-codewords) for general memoryless symmetric channels, is an NP-complete problem. Efficient algorithms are nevertheless provided for codes of practical short lengths n ~= 500. By exploiting the sparse connectivity of LDPC codes, stopping sets of size <= 13 and trapping sets of size <= 11 can be exhaustively determined for the first time, and the resulting exhaustive list is of great importance for code analysis and finite-length code optimization. The featured tree-based narrowing search distinguishes this algorithm from existing ones, which rely on non-exhaustive methods. One important byproduct is a pair of upper bounds on the bit-error rate (BER) and frame-error rate (FER) iterative decoding performance of arbitrary codes over BECs; these bounds can be evaluated for any value of the erasure probability, covering both the waterfall and the error-floor regions. The tightness of these upper bounds and the exhaustion capability of the proposed algorithm are proved when an optimal leaf-finding module is combined with the tree-based search. The upper bounds also provide a worst-case performance guarantee, which is crucial when optimizing LDPC codes for extremely low error rate applications, e.g., optical/satellite communications. Extensive numerical experiments on both randomly and algebraically constructed LDPC codes demonstrate the superior efficiency of the exhaustion algorithm and its significant value for finite-length code optimization.
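For concreteness, a stopping set over the BEC is a set S of variable nodes such that every check node adjacent to S is connected to S at least twice; iterative erasure decoding stalls exactly when the erased positions contain a stopping set. The brute-force enumerator below (a minimal Python sketch with illustrative names, not the paper's tree-based algorithm) applies this definition directly to a toy parity-check matrix; its exponential cost in the set size is precisely why a narrowing search is needed at n ~= 500.

```python
import numpy as np
from itertools import combinations

def is_stopping_set(H, S):
    """S is a stopping set iff every check row touching S touches it >= 2 times."""
    cols = H[:, list(S)]              # restrict H to the candidate variable nodes
    touched = cols.sum(axis=1)        # per-check count of neighbors inside S
    return bool(np.all((touched == 0) | (touched >= 2)))

def stopping_sets_up_to(H, max_size):
    """Exhaustive sweep over all variable subsets of size 1..max_size.
    Exponential in max_size -- only viable for toy codes."""
    n = H.shape[1]
    found = []
    for k in range(1, max_size + 1):
        for S in combinations(range(n), k):
            if is_stopping_set(H, S):
                found.append(S)
    return found

# Toy parity-check matrix (illustrative, every variable of degree 2).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
print(stopping_sets_up_to(H, 4))
```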



Related Research

We analyze the performance of low-density parity-check (LDPC) codes in the error-floor domain where the signal-to-noise ratio, s, is large, s >> 1. We describe how the instanton method of theoretical physics, recently adapted to coding theory, solves the problem of characterizing the error-floor domain in the Laplacian channel. An example of the (155,64,20) LDPC code with four iterations (each iteration consisting of two semi-steps: from bits to checks and from checks to bits) of min-sum decoding is discussed. A generalized computational-tree analysis is devised to explain the rational structure of the leading instantons. The asymptotic symbol bit-error rate (BER) in the error-floor domain is composed of individual instanton contributions, each estimated as ~ exp(-l_{inst;L} s), where the effective distances, l_{inst;L}, of the leading instantons are 7.6, 8.0 and 8.0, respectively. (The Hamming distance of the code is 20.) The analysis shows that the instantons are distinctly different from the ones found for the same coding/decoding scheme operating over the Gaussian channel. We validate the instanton results against direct simulations and offer an explanation for the remarkable performance of the instanton approximation not only in the extremal limit s -> infinity but also at the moderate values of s of practical interest.
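To see how the quoted effective distances shape the error floor, the sketch below sums the individual instanton contributions ~ exp(-l_{inst;L} s) for the three leading instantons; the prefactors are omitted, so this reproduces only the exponential slope of the BER asymptotic, not the paper's calibrated curve.

```python
import numpy as np

# Effective distances of the three leading instantons quoted in the abstract.
l_inst = np.array([7.6, 8.0, 8.0])

def error_floor_ber(s):
    """Leading-order error-floor asymptotic: a sum of instanton contributions.
    Prefactors are omitted, so only the exponential slope is meaningful."""
    return np.exp(-np.outer(np.atleast_1d(s), l_inst)).sum(axis=1)

for s in (2.0, 4.0, 6.0):
    print(f"s = {s}: BER ~ {error_floor_ber(s)[0]:.3e}")
```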
Cyclic liftings are proposed to lower the error floor of low-density parity-check (LDPC) codes. The liftings are designed to eliminate dominant trapping sets of the base code by removing the short cycles which form the trapping sets. We derive a necessary and sufficient condition on the cyclic permutations assigned to the edges of a cycle $c$ of length $\ell(c)$ in the base graph such that the inverse image of $c$ in the lifted graph consists only of cycles of length strictly larger than $\ell(c)$. The proposed method is universal in the sense that it can be applied to any LDPC code over any channel and for any iterative decoding algorithm. It also preserves important properties of the base code such as degree distributions, encoder and decoder structure, and, in some cases, the code rate. The proposed method is applied to both structured and random codes over the binary symmetric channel (BSC). The error floor improves consistently as the lifting degree increases, and the results show significant improvements over the base code, over a random code of the same degree distribution and block length, and over a random lifting of the same degree. Similar improvements are also observed when the codes designed for the BSC are applied to the additive white Gaussian noise (AWGN) channel.
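The derived condition itself is not restated in the abstract; what follows is the standard bookkeeping for cycles under cyclic liftings, offered as a hedged illustration. If the edges of a cycle of length $\ell(c)$ carry cyclic shifts s_1, ..., s_{\ell(c)} and the lifting degree is N, the alternating sum t = s_1 - s_2 + s_3 - ... (mod N) determines the inverse image: it splits into gcd(t, N) cycles, each of length $\ell(c)$ * N / gcd(t, N), so no cycle of the base length survives iff t != 0 (mod N).

```python
from math import gcd

def lifted_cycle_lengths(shifts, N):
    """Given the cyclic shifts on the edges of a base-graph cycle (traversed in
    order, signs alternating between the two edge directions) and the lifting
    degree N, return the lengths of the cycles in the inverse image."""
    t = sum(s if i % 2 == 0 else -s for i, s in enumerate(shifts)) % N
    g = gcd(t, N)                      # number of lifted cycles
    ell = len(shifts)                  # base cycle length
    return [ell * N // g] * g

# A 4-cycle under a degree-5 cyclic lifting (illustrative shift choices):
print(lifted_cycle_lengths((1, 3, 2, 0), 5))   # t = 0 -> five length-4 cycles (bad)
print(lifted_cycle_lengths((1, 2, 2, 0), 5))   # t = 1 -> one length-20 cycle (good)
```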
Spatially coupled codes have been shown to universally achieve the capacity for a large class of channels. Many variants of such codes have been introduced to date. We discuss a further such variant that is particularly simple and is determined by a very small number of parameters. More precisely, we consider time-invariant low-density convolutional codes with very large constraint lengths. We show via simulations that, despite their extreme simplicity, such codes still show the threshold saturation behavior known from the spatially coupled codes discussed in the literature. Further, we show how the size of the typical minimum stopping set is related to basic parameters of the code. Due to their simplicity and good performance, these codes might be attractive from an implementation perspective.
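As a concrete picture of the construction, a terminated time-invariant convolutional LDPC code can be described by repeating component matrices H_0, ..., H_w (w being the syndrome-former memory) along a diagonal band of the full parity-check matrix. The sketch below builds such a banded matrix; the toy components and the termination are illustrative assumptions, not the specific ensemble simulated in the paper.

```python
import numpy as np

def coupled_parity_check(H_blocks, L):
    """Terminated convolutional / spatially coupled parity-check matrix:
    the component blocks H_0..H_w are repeated along a diagonal band."""
    w = len(H_blocks) - 1
    m, n = H_blocks[0].shape
    H = np.zeros(((L + w) * m, L * n), dtype=int)
    for t in range(L):                         # chain position
        for i, Hi in enumerate(H_blocks):      # memory tap i
            H[(t + i) * m:(t + i + 1) * m, t * n:(t + 1) * n] = Hi
    return H

# Toy components with memory w = 1 (illustrative, not the paper's ensemble):
H0 = np.array([[1, 1, 0], [0, 1, 1]])
H1 = np.array([[1, 0, 1], [1, 1, 0]])
print(coupled_parity_check([H0, H1], L=4))
```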
Coherent errors are a dominant noise process in many quantum computing architectures. Unlike stochastic errors, these errors can combine constructively and grow into highly detrimental overrotations. To combat this, we introduce a simple technique for suppressing systematic coherent errors in low-density parity-check (LDPC) stabilizer codes, which we call stabilizer slicing. The essential idea is to slice low-weight stabilizers into two equally-weighted Pauli operators and then apply them by rotating in opposite directions, causing their overrotations to interfere destructively on the logical subspace. With access to native gates generated by 3-body Hamiltonians, we can completely eliminate purely coherent overrotation errors, and for overrotation noise of 0.99 unitarity we achieve a 135-fold improvement in the logical error rate of Surface-17. For more conventional 2-body ion-trap gates, we observe an 89-fold improvement for Bacon-Shor-13 with purely coherent errors, which should be testable in near-term fault-tolerance experiments. This second scheme takes advantage of the prepared gauge degrees of freedom and is, to our knowledge, the first example in which the state of the gauge directly affects the robustness of a code's memory. This work demonstrates that coherent noise is preferable to stochastic noise within certain code and gate implementations when the coherence is utilized effectively.
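The cancellation mechanism can be verified in a two-qubit toy model (our own illustration, far smaller than Surface-17 or Bacon-Shor-13): on the codespace stabilized by Z_1 Z_2, the two weight-1 slices Z_1 and Z_2 act identically, so rotating about them by equal and opposite angles cancels on the codespace for any overrotation.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
Z1, Z2 = np.kron(Z, I2), np.kron(I2, Z)   # the two weight-1 slices of Z1*Z2

def rot(P, angle):
    """exp(-i * angle * P) for a Pauli operator P (P @ P = I)."""
    return np.cos(angle) * np.eye(4) - 1j * np.sin(angle) * P

theta = 0.3 + 0.05   # intended angle plus a systematic coherent overrotation

# Sliced application: rotate about the two halves in opposite directions.
U = rot(Z1, theta) @ rot(Z2, -theta)

# Codespace of the stabilizer Z1 Z2 is span{|00>, |11>}.
for psi in (np.array([1, 0, 0, 0], complex), np.array([0, 0, 0, 1], complex)):
    print(np.allclose(U @ psi, psi))   # True for any theta: overrotations cancel
```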
We study algebraic geometry codes that produce quantum error-correcting codes via the CSS construction. We pay particular attention to the family of Castle codes and show that many of the examples known in the literature in fact belong to this family. We systematize these constructions by exhibiting the common theory that underlies all of them.
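As background for the CSS construction referenced above: parity-check matrices H_X and H_Z define a valid CSS code exactly when H_X H_Z^T = 0 (mod 2), which makes the X-type and Z-type stabilizers commute. A minimal check using the [7,4] Hamming code (giving the Steane code, a textbook example and not one of the Castle codes studied in the paper):

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# CSS construction with H_X = H_Z = H (the Steane code): the X- and Z-type
# stabilizers commute iff every pair of rows shares an even number of 1s.
commute = (H @ H.T) % 2
print(np.all(commute == 0))   # True: H is dual-containing, so CSS applies
```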