
The Cycle Consistency Matrix Approach to Absorbing Sets in Separable Circulant-Based LDPC Codes

Published by: Jiadong Wang
Publication date: 2012
Research field: Information engineering
Paper language: English





For LDPC codes operating over additive white Gaussian noise channels and decoded using message-passing decoders with limited precision, absorbing sets have been shown to be a key factor in error floor behavior. Focusing on this scenario, this paper introduces the cycle consistency matrix (CCM) as a powerful analytical tool for characterizing and avoiding absorbing sets in separable circulant-based (SCB) LDPC codes. SCB codes include a wide variety of regular LDPC codes such as array-based LDPC codes as well as many common quasi-cyclic codes. As a consequence of its cycle structure, each potential absorbing set in an SCB LDPC code has a CCM, and an absorbing set can be present in an SCB LDPC code only if the associated CCM has a nontrivial null space. CCM-based analysis can determine the multiplicity of an absorbing set in an SCB code and CCM-based constructions avoid certain small absorbing sets completely. While these techniques can be applied to an SCB code of any rate, lower-rate SCB codes can usually avoid small absorbing sets because of their higher variable node degree. This paper focuses attention on the high-rate scenario in which the CCM constructions provide the most benefit. Simulation results demonstrate that under limited-precision decoding the new codes have steeper error-floor slopes and can provide one order of magnitude of improvement in the low FER region.
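The decisive computation in the CCM framework is a null-space test modulo the circulant size. The following minimal Python sketch (illustrative names and a toy matrix; in practice the CCM entries would come from the code's cycle structure as defined in the paper) checks whether a candidate matrix is singular modulo a prime p:

```python
# Minimal sketch (illustrative names): an absorbing-set candidate survives
# only if its CCM is singular modulo the circulant size p, i.e. the CCM has
# a nontrivial null space over GF(p).

def rank_mod_p(rows, p):
    """Gaussian elimination over GF(p); returns the rank of the matrix."""
    M = [list(r) for r in rows]
    rank, n_cols = 0, len(M[0])
    for col in range(n_cols):
        # Find a pivot row with a nonzero entry in this column.
        pivot = next((r for r in range(rank, len(M)) if M[r][col] % p), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], -1, p)            # modular inverse (Python 3.8+)
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col] % p:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def has_nontrivial_null_space(ccm, p):
    """The absorbing set can be present only if rank(CCM) < number of columns."""
    return rank_mod_p(ccm, p) < len(ccm[0])

# Toy 3x3 matrix with circulant size p = 7: row 1 + row 2 == row 3 (mod 7),
# so the matrix is singular and the candidate set is not ruled out.
ccm = [[1, 2, 4], [3, 1, 3], [4, 3, 0]]
print(has_nontrivial_null_space(ccm, 7))          # True
```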


Read also

Linear nested codes, where two or more sub-codes are nested in a global code, have been proposed as candidates for reliable multi-terminal communication. In this paper, we consider nested array-based spatially coupled low-density parity-check (SC-LDPC) codes and propose a line-counting-based optimization scheme for minimizing the number of dominant absorbing sets in order to improve their performance in the high signal-to-noise ratio regime. Since the parity-check matrices of different nested sub-codes partially overlap, the optimization of one nested sub-code imposes constraints on the optimization of the other sub-codes. To handle these constraints, a multi-step optimization process is applied first to one of the nested codes, and the remaining nested codes are then optimized sequentially, subject to the constraints imposed by the previously optimized sub-codes. Results show that the order of optimization has a significant impact on the number of dominant absorbing sets in the Tanner graph of the code, resulting in a tradeoff between the performance of a nested code structure and its optimization sequence: the code that is optimized without constraints has fewer harmful structures than the codes that are optimized with constraints. We also show that for certain code parameters, dominant absorbing sets in the Tanner graphs of all nested codes are completely removed using our proposed optimization strategy.
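As a rough illustration of this sequential, constraint-respecting strategy, the following Python sketch freezes the circulant shifts shared with previously optimized sub-codes and greedily re-optimizes the remaining free shifts. The function names, the shift parameterization, and the scoring callback count_sets are illustrative placeholders, not the paper's actual procedure:

```python
# A generic sketch of sequential nested-code optimization (not the paper's
# exact algorithm): each sub-code is parameterized by circulant shifts,
# shifts shared with an already-optimized sub-code are frozen, and
# count_sets stands in for a line-counting criterion that scores the
# number of dominant absorbing sets.
def optimize_nested_codes(subcode_shifts, shared_positions, count_sets,
                          candidates=range(31), sweeps=3):
    frozen = {}  # shifts fixed by previously optimized sub-codes
    for shifts in subcode_shifts:
        for pos, val in frozen.items():          # impose earlier constraints
            if pos in shifts:
                shifts[pos] = val
        free = [p for p in shifts if p not in frozen]
        for _ in range(sweeps):                  # greedy coordinate descent
            for pos in free:
                best_val, best_score = shifts[pos], count_sets(shifts)
                for cand in candidates:
                    shifts[pos] = cand
                    score = count_sets(shifts)
                    if score < best_score:
                        best_val, best_score = cand, score
                shifts[pos] = best_val
        # Freeze this sub-code's shifts that later sub-codes share.
        frozen.update({p: shifts[p] for p in shifts if p in shared_positions})
    return subcode_shifts
```

Optimizing the first sub-code runs unconstrained, while every later sub-code inherits the frozen overlapping shifts, which is the source of the ordering tradeoff noted above.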
The recent development of deep learning methods provides a new approach to optimizing the belief propagation (BP) decoding of linear codes. However, existing works are limited in that the scale of the neural networks grows rapidly with the codelength, so they can only support short to moderate codelengths. From a practical point of view, we propose a high-performance neural min-sum (MS) decoding method that makes full use of the lifting structure of protograph low-density parity-check (LDPC) codes. By this means, the size of the parameter array of each layer in the neural decoder equals only the number of edge-types, for arbitrary codelengths. In particular, for protograph LDPC codes, the proposed neural MS decoder is constructed such that identical parameters are shared by the bundle of edges derived from the same edge-type. To reduce complexity and overcome the vanishing gradient problem in training the proposed neural MS decoder, an iteration-by-iteration (i.e., layer-by-layer in neural-network terms) greedy training method is proposed. With this, the proposed neural MS decoder tends to be optimized with faster convergence, which aligns with the early termination mechanism widely used in practice. To further enhance the generalization ability of the proposed neural MS decoder, a codelength/rate-compatible training method is proposed, which randomly selects samples from a set of codes lifted from the same base code. As a theoretical performance evaluation tool, a trajectory-based extrinsic information transfer (T-EXIT) chart is developed for various decoders. Both T-EXIT and simulation results show that the optimized MS decoding can provide faster convergence and up to 1 dB of gain compared with plain MS decoding and its variants, with only slightly increased complexity. In addition, it can even outperform the sum-product algorithm for some short codes.
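The core structural idea, one learnable parameter per protograph edge-type rather than per lifted edge, can be sketched as a plain NumPy forward pass. The data structures (edge lists per node, an edge_type index array) are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Illustrative sketch of one neural min-sum iteration with edge-type-shared
# weights: every edge lifted from the same protograph edge-type reuses one
# scaling parameter, so the parameter count is independent of codelength.

def neural_ms_iteration(llr, v2c, edge_type, check_edges, var_edges, weights):
    """llr: channel LLRs per variable node.
    v2c: current variable-to-check messages (one per edge).
    edge_type: protograph edge-type index of each lifted edge.
    check_edges / var_edges: lists of edge indices per check / variable node.
    weights: one learnable scale per edge-type (len = number of edge-types).
    """
    c2v = np.zeros_like(v2c)
    for edges in check_edges:                     # check-node update (min-sum)
        for e in edges:
            others = [o for o in edges if o != e]
            sign = np.prod(np.sign(v2c[others]))
            mag = np.min(np.abs(v2c[others]))
            c2v[e] = weights[edge_type[e]] * sign * mag   # shared scale
    new_v2c = np.zeros_like(v2c)
    for v, edges in enumerate(var_edges):         # variable-node update
        for e in edges:
            others = [o for o in edges if o != e]
            new_v2c[e] = llr[v] + np.sum(c2v[others])
    return new_v2c, c2v

# Toy usage: one check node connected to three variable nodes, with two
# protograph edge-types shared among the three lifted edges.
llr = np.array([1.2, -0.8, 2.0])
v2c = llr.copy()                  # initial messages = channel LLRs
edge_type = [0, 1, 0]
check_edges = [[0, 1, 2]]
var_edges = [[0], [1], [2]]
weights = np.array([0.9, 0.75])   # one learnable scale per edge-type
v2c, c2v = neural_ms_iteration(llr, v2c, edge_type, check_edges, var_edges, weights)
```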
Qin Huang, Keke Liu, Zulin Wang (2012)
This paper is concerned with a general analysis of the rank and row-redundancy of an array of circulants whose null space defines a QC-LDPC code. Based on the Fourier transform and the properties of conjugacy classes and Hadamard products of matrices, we derive tight upper bounds on the rank and row-redundancy of a general array of circulants, which make it possible to consider row-redundancy in constructions of QC-LDPC codes to achieve better performance. We further investigate the rank of two types of QC-LDPC code constructions, based on Vandermonde matrices and on Latin squares, and give combinatorial expressions for the exact rank in some specific cases, which demonstrates the tightness of the bounds we derive. Moreover, several new constructions of QC-LDPC codes with large row-redundancy are presented and analyzed.
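The paper's rank analysis works over finite fields via conjugacy classes, but the underlying diagonalization fact is easy to see over the complex field: a circulant's rank equals the number of nonzero DFT coefficients of its first row. A toy NumPy check (illustrative only):

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix whose rows are successive cyclic shifts of first_row."""
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)])

first_row = [1, 0, 1, 1, 0, 0, 0]              # binary first row, n = 7
C = circulant(first_row)
dft = np.fft.fft(first_row)                    # eigenvalues of the circulant
rank_via_dft = int(np.sum(~np.isclose(dft, 0)))
print(rank_via_dft, np.linalg.matrix_rank(C))  # the two agree
```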
Spatially coupled codes have been shown to universally achieve capacity for a large class of channels. Many variants of such codes have been introduced to date. We discuss a further variant that is particularly simple and is determined by a very small number of parameters. More precisely, we consider time-invariant low-density convolutional codes with very large constraint lengths. We show via simulations that, despite their extreme simplicity, such codes still exhibit the threshold saturation behavior known from the spatially coupled codes discussed in the literature. Further, we show how the size of the typical minimum stopping set is related to basic parameters of the code. Due to their simplicity and good performance, these codes might be attractive from an implementation perspective.
Min-sum decoding is widely used for decoding LDPC codes in many modern digital video broadcasting receivers due to its relatively low complexity and robustness against quantization error. However, the suboptimal performance of min-sum decoding affects the integrated performance of wireless receivers. In this paper, we present the idea of adapting the scaling factor of the min-sum decoder across iterations through a simple approximation. For ease of implementation, the scaling factor is changed in a staircase fashion. The stair step is designed to optimize both the decoder performance and the storage required for its different values. The proposed variable scaling factor algorithm yields a non-trivial improvement in the performance of min-sum decoding, as verified by simulation results.
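A minimal sketch of the idea, assuming an illustrative staircase schedule (the step positions and scaling values below are placeholders, not the paper's optimized ones):

```python
import numpy as np

# Hypothetical staircase schedule: the scaling factor changes with the
# iteration index in a few discrete steps, so only the step boundaries and
# values need to be stored.
def staircase_alpha(iteration, steps=((0, 0.70), (5, 0.80), (10, 0.875))):
    alpha = steps[0][1]
    for start, val in steps:
        if iteration >= start:
            alpha = val
    return alpha

def scaled_ms_check_update(v2c_messages, iteration):
    """Normalized min-sum check-node update with iteration-dependent scaling."""
    alpha = staircase_alpha(iteration)
    msgs = np.asarray(v2c_messages, dtype=float)
    out = np.empty_like(msgs)
    for e in range(len(msgs)):
        others = np.delete(msgs, e)              # all incoming messages but e
        out[e] = alpha * np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

print(scaled_ms_check_update([1.5, -2.0, 0.5, 3.0], iteration=7))
```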