
Analysis and Optimization of Tail-Biting Spatially Coupled Protograph LDPC Codes for BICM-ID Systems

Posted by Yi Fang
Publication date: 2019
Research field: Information Engineering
Paper language: English





As a typical example of bandwidth-efficient techniques, bit-interleaved coded modulation with iterative decoding (BICM-ID) provides desirable spectral efficiencies in various wireless communication scenarios. In this paper, we carry out a comprehensive investigation of tail-biting (TB) spatially coupled protograph (SC-P) low-density parity-check (LDPC) codes in BICM-ID systems. Specifically, we first develop a two-step design method to formulate a novel type of constellation mapper, referred to as the labeling-bit-partial-match (LBPM) constellation mapper, for SC-P-based BICM-ID systems. The LBPM constellation mappers can be seamlessly combined with high-order modulations, such as M-ary phase-shift keying (PSK) and M-ary quadrature amplitude modulation (QAM). Furthermore, we conceive a new bit-level interleaving scheme, referred to as the variable node matched mapping (VNMM) scheme, which exploits the structural features of SC-P codes and the unequal protection-degree property of the labeling bits to trigger wave-like convergence for TB-SC-P codes. In addition, we propose a hierarchical extrinsic information transfer (EXIT) algorithm to predict the convergence performance (i.e., the decoding thresholds) of the proposed SC-P-based BICM-ID systems. Theoretical analyses and simulation results illustrate that the LBPM-mapped SC-P-based BICM-ID systems are remarkably superior to their state-of-the-art mapped counterparts. Moreover, the proposed SC-P-based BICM-ID systems achieve even better error performance with the aid of the VNMM scheme. As a consequence, the proposed LBPM constellation mappers and VNMM scheme make SC-P-based BICM-ID systems a favorable choice for future-generation wireless communication systems.
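Both the LBPM mapper and the VNMM interleaver build on the fact that, under a high-order labeling, the bit positions of a constellation symbol receive unequal protection. The sketch below is only a rough illustration of that property, assuming an ordinary Gray-labeled 16-QAM constellation over AWGN rather than the paper's LBPM design; all function names and parameter values are hypothetical.

```python
import numpy as np

# Hypothetical sketch (not the paper's LBPM mapper): Monte Carlo estimate of the
# mutual information carried by each labeling-bit position of a Gray-labeled
# 16-QAM constellation over AWGN, illustrating the "unequal protection-degree"
# property of the labeling bits.

def gray_16qam():
    # 4-bit Gray labels on a 4x4 grid, normalized to unit average symbol energy.
    pam = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10.0)
    gray2 = [0b00, 0b01, 0b11, 0b10]
    points, labels = [], []
    for i, bi in enumerate(gray2):
        for q, bq in enumerate(gray2):
            points.append(pam[i] + 1j * pam[q])
            labels.append((bi << 2) | bq)
    return np.array(points), np.array(labels)

def per_bit_mutual_info(snr_db, n_sym=100_000, seed=0):
    rng = np.random.default_rng(seed)
    pts, labs = gray_16qam()
    n0 = 10.0 ** (-snr_db / 10.0)                      # noise variance (Es = 1)
    tx = rng.integers(16, size=n_sym)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_sym)
                               + 1j * rng.standard_normal(n_sym))
    y = pts[tx] + noise
    like = np.exp(-np.abs(y[:, None] - pts[None, :]) ** 2 / n0)  # p(y | each symbol)
    mi = []
    for k in range(4):                                 # one bit channel per label position
        bit_of_sym = (labs >> k) & 1
        llr = (np.log(like[:, bit_of_sym == 0].sum(axis=1))
               - np.log(like[:, bit_of_sym == 1].sum(axis=1)))
        sign = 1.0 - 2.0 * ((labs[tx] >> k) & 1)       # +1 if the sent bit was 0, else -1
        mi.append(1.0 - np.mean(np.logaddexp(0.0, -sign * llr)) / np.log(2.0))
    return mi

print(per_bit_mutual_info(6.0))
```

With the standard Gray map, the four printed values collapse into two distinct mutual-information levels; it is this kind of asymmetry that a constellation mapper or bit-level interleaver can align with the variable-node structure of an SC-P code.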



Read also

The recent development of deep learning methods provides a new approach to optimizing the belief propagation (BP) decoding of linear codes. However, the limitation of existing works is that the scale of the neural networks increases rapidly with the codelength, so they can only support short to moderate codelengths. From a practical point of view, we propose a high-performance neural min-sum (MS) decoding method that makes full use of the lifting structure of protograph low-density parity-check (LDPC) codes. In this way, the size of the parameter array in each layer of the neural decoder equals only the number of edge-types, for arbitrary codelengths. In particular, for protograph LDPC codes, the proposed neural MS decoder is constructed in a special way such that identical parameters are shared by the bundle of edges derived from the same edge-type. To reduce the complexity and overcome the vanishing-gradient problem in training the proposed neural MS decoder, an iteration-by-iteration (i.e., layer-by-layer in neural networks) greedy training method is proposed. With this, the proposed neural MS decoder tends to be optimized with faster convergence, which is aligned with the early-termination mechanism widely used in practice. To further enhance the generalization ability of the proposed neural MS decoder, a codelength/rate-compatible training method is proposed, which randomly selects samples from a set of codes lifted from the same base code. As a theoretical performance evaluation tool, a trajectory-based extrinsic information transfer (T-EXIT) chart is developed for various decoders. Both T-EXIT and simulation results show that the optimized MS decoding can provide faster convergence and up to 1 dB of gain compared with plain MS decoding and its variants, with only slightly increased complexity. In addition, it can even outperform the sum-product algorithm for some short codes.
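The parameter-sharing idea described above can be pictured with a toy normalized min-sum check-node update in which all edges lifted from the same protograph edge-type use one common scaling factor. This is only a hedged sketch with hand-picked weights; the actual neural MS decoder, its parameterization, and its training are as described in the paper, and every name below is illustrative.

```python
import numpy as np

# Hedged sketch: one normalized min-sum check-node update where the scaling
# factor is shared by all edges derived from the same protograph edge-type.
# `edge_type[e]` gives the edge-type of lifted edge e; `alpha` holds one weight
# per edge-type (hand-picked here, trainable in a neural MS decoder).

def check_node_update(v2c, edge_type, alpha):
    """v2c: incoming variable-to-check messages on the edges of one check node."""
    ext_sign = np.prod(np.sign(v2c)) / np.sign(v2c)    # sign product excluding own edge
    mags = np.abs(v2c)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    ext_min = np.where(np.arange(len(v2c)) == order[0], min2, min1)
    return alpha[edge_type] * ext_sign * ext_min       # per-edge-type scaling

# Toy usage: a degree-4 check whose edges come from edge-types 0, 0, 1, 2.
msgs = np.array([+1.3, -0.4, +2.1, -0.7])
print(check_node_update(msgs, np.array([0, 0, 1, 2]), np.array([0.8, 0.9, 0.75])))
```

Because the factors are indexed by edge-type rather than by edge, their number stays fixed as the lifting size (and hence the codelength) grows, which is the scaling advantage the abstract highlights.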
Generalized low-density parity-check (GLDPC) codes are a class of LDPC codes in which the standard single parity check (SPC) constraints are replaced by constraints defined by a linear block code. These stronger constraints typically result in improved error floor performance, due to better minimum distance and trapping set properties, at a cost of some increased decoding complexity. In this paper, we study spatially coupled generalized low-density parity-check (SC-GLDPC) codes and present a comprehensive analysis of these codes, including: (1) an iterative decoding threshold analysis of SC-GLDPC code ensembles demonstrating capacity approaching thresholds via the threshold saturation effect; (2) an asymptotic analysis of the minimum distance and free distance properties of SC-GLDPC code ensembles, demonstrating that the ensembles are asymptotically good; and (3) an analysis of the finite-length scaling behavior of both GLDPC block codes and SC-GLDPC codes based on a peeling decoder (PD) operating on a binary erasure channel (BEC). Results are compared to GLDPC block codes, and the advantages and disadvantages of SC-GLDPC codes are discussed.
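For intuition about item (3), a peeling decoder over the BEC repeatedly finds a check with exactly one erased neighbor and resolves it. The sketch below implements that procedure for plain SPC constraints only, with a toy Hamming parity-check matrix and erasure pattern as assumptions; the GLDPC setting in the paper would instead invoke the decoder of the stronger constituent code at each constraint.

```python
import numpy as np

# Hedged sketch: a peeling decoder over the BEC with ordinary single-parity-check
# constraints. H, the codeword, and the erasure pattern below are toy values.

def peel_bec(H, values, erased):
    values, erased = values.copy(), erased.copy()
    progress = True
    while progress:
        progress = False
        for row in H:                           # visit every check constraint
            idx = np.flatnonzero(row)
            missing = idx[erased[idx]]
            if len(missing) == 1:               # exactly one erased neighbor: resolve it
                known = idx[~erased[idx]]
                values[missing[0]] = values[known].sum() % 2
                erased[missing[0]] = False
                progress = True
    return values, erased                       # any remaining True flags mean a stopping set

# Toy usage with the (7,4) Hamming parity-check matrix and two erasures.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)               # the all-zero codeword
erasures = np.array([1, 0, 0, 1, 0, 0, 0], dtype=bool)
print(peel_bec(H, codeword, erasures))
```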
Linear nested codes, where two or more sub-codes are nested in a global code, have been proposed as candidates for reliable multi-terminal communication. In this paper, we consider nested array-based spatially coupled low-density parity-check (SC-LDPC) codes and propose a line-counting based optimization scheme for minimizing the number of dominant absorbing sets in order to improve their performance in the high signal-to-noise ratio regime. Since the parity-check matrices of different nested sub-codes partially overlap, the optimization of one nested sub-code imposes constraints on the optimization of the other sub-codes. To tackle these constraints, a multi-step optimization process is applied first to one of the nested codes, and sequential optimization of the remaining nested codes is then carried out based on the constraints imposed by the previously optimized sub-codes. Results show that the order of optimization has a significant impact on the number of dominant absorbing sets in the Tanner graph of the code, resulting in a tradeoff between the performance of a nested code structure and its optimization sequence: the code which is optimized without constraints has fewer harmful structures than the code which is optimized with constraints. We also show that for certain code parameters, dominant absorbing sets in the Tanner graphs of all nested codes are completely removed using our proposed optimization strategy.
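As a much simpler illustration of the kind of graph-based metric such an optimization works with (not the paper's line-counting enumeration of absorbing sets), one can count length-4 cycles directly from a parity-check matrix, since every pair of checks that shares two columns closes such a cycle; the matrix below is a toy assumption.

```python
import numpy as np
from itertools import combinations

# Hedged sketch: count length-4 cycles of a Tanner graph from its parity-check
# matrix H. This is only a simple proxy for harmful graph structures, not the
# absorbing-set optimization used in the paper; H is a toy matrix.

def count_4_cycles(H):
    overlaps = H @ H.T                          # (i, j) entry: columns shared by checks i and j
    total = 0
    for i, j in combinations(range(H.shape[0]), 2):
        t = int(overlaps[i, j])
        total += t * (t - 1) // 2               # each pair of shared columns closes a 4-cycle
    return total

H = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1]])
print(count_4_cycles(H))                        # prints 3: every pair of checks shares two columns
```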
Eshed Ram, Yuval Cassuto (2019)
A new type of spatially coupled low-density parity-check (SC-LDPC) codes motivated by practical storage applications is presented. SC-LDPCL codes (suffix L stands for locality) can be decoded locally at the level of sub-blocks that are much smaller than the full code block, thus offering flexible access to the coded information alongside the strong reliability of the global full-block decoding. Toward that, we propose constructions of SC-LDPCL codes that allow controlling the trade-off between local and global correction performance. In addition to local and global decoding, the paper develops a density-evolution analysis for a decoding mode we call semi-global decoding, in which the decoder has access to the requested sub-block plus a prescribed number of sub-blocks around it. SC-LDPCL codes are also studied under a channel model with variability across sub-blocks, for which decoding-performance lower bounds are derived.
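Density evolution, the analysis tool used above, tracks the erasure probability of the iteratively exchanged messages. For a plain (dv, dc)-regular LDPC ensemble over the BEC it reduces to a one-line recursion, sketched below with the standard (3,6) parameters as an assumed example; the paper's semi-global analysis additionally tracks the messages crossing sub-block boundaries.

```python
# Hedged sketch: density evolution for a (dv, dc)-regular LDPC ensemble over a
# BEC with erasure probability eps (plain, non-coupled recursion; the paper's
# semi-global mode additionally models the sub-block boundary messages).

def de_bec(eps, dv=3, dc=6, iters=500):
    x = eps                                    # erasure prob. of variable-to-check messages
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x                                   # ~0 means decoding succeeds in the limit

# The (3,6) BP threshold over the BEC is roughly eps* = 0.4294:
print(de_bec(0.40), de_bec(0.45))              # -> (essentially 0, a nonzero fixed point)
```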
In this paper, we propose a non-uniform windowed decoder for multi-dimensional spatially-coupled LDPC (MD-SC-LDPC) codes over the binary erasure channel. An MD-SC-LDPC code is constructed by connecting together several SC-LDPC codes into one larger code that provides major benefits over a variety of channel models. In general, SC codes allow for low-latency windowed decoding. While a standard windowed decoder can be naively applied, such an approach does not fully utilize the unique structure of MD-SC-LDPC codes. In this paper, we propose and analyze a novel non-uniform decoder to provide more flexibility between latency and reliability. Our theoretical derivations and empirical results show that our non-uniform decoder greatly improves upon the standard windowed decoder in terms of design flexibility, latency, and complexity.
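For background on the windowed decoding being refined here, the sketch below runs a uniform windowed density-evolution schedule over a single one-dimensional (dv, dc, w) spatially coupled chain on the BEC. It is not the multi-dimensional non-uniform decoder proposed in the paper, and every parameter value is an illustrative assumption.

```python
import numpy as np

# Hedged sketch: uniform windowed density evolution for a (dv, dc, w) spatially
# coupled LDPC chain of L positions over a BEC with erasure probability eps.
# The non-uniform MD-SC-LDPC decoder of the paper generalizes this kind of
# schedule across several coupled chains.

def windowed_de(eps, L=50, dv=3, dc=6, w=3, W=10, iters_per_step=60):
    x = np.full(L, eps)                         # per-position erasure prob. of VN messages
    def xval(i):                                # positions outside the chain are terminated (known)
        return x[i] if 0 <= i < L else 0.0
    for start in range(L):                      # slide the window one position at a time
        for _ in range(iters_per_step):
            for i in range(start, min(start + W, L)):
                outer = 0.0
                for j in range(w):              # the w check positions this VN position touches
                    inner = np.mean([xval(i + j - k) for k in range(w)])
                    outer += 1.0 - (1.0 - inner) ** (dc - 1)
                x[i] = eps * (outer / w) ** (dv - 1)
    return x.max()                              # close to 0 once the whole chain is recovered

# 0.45 exceeds the uncoupled (3,6) BP threshold (about 0.4294), yet the coupled
# chain is still recovered thanks to the wave-like decoding from the terminations.
print(windowed_de(0.45))
```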