
Generalized Simplified Variable-Scaled Min Sum LDPC decoder for irregular LDPC Codes

Added by Ahmed Emran
Publication date: 2015
Research language: English

In this paper, we propose a novel low-complexity scaling strategy for the min-sum decoding algorithm applied to irregular LDPC codes. In the proposed method, we generalize our previously proposed simplified Variable Scaled Min-Sum (SVS-min-sum) algorithm by replacing the sub-optimal starting value and heuristic update of the scaling factor sequence with optimized values. Density evolution and Nelder-Mead optimization are used offline, prior to decoding, to obtain the optimal starting point and per-iteration update step size of the scaling factor sequence. Optimizing these parameters has a noticeable positive impact on decoding performance. We used different DVB-T2 LDPC codes in our simulations. Simulation results show the superior performance (in both WER and latency) of the proposed algorithm over other Min-Sum based algorithms. In addition, the generalized SVS-min-sum algorithm performs very close to LLR-SPA at much lower complexity.
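
As a rough illustration of the decoding step described above, the Python sketch below (not from the paper) applies a scaled min-sum check-node update in which the scaling factor grows linearly with the iteration index from a starting value by a fixed step. In the paper both quantities are optimized offline via density evolution and Nelder-Mead; the function name and the numbers alpha0 and step here are arbitrary placeholders.

```python
import numpy as np

def check_node_update(v2c, alpha):
    """Scaled min-sum update for a single check node (assumes nonzero LLRs).
    v2c: incoming variable-to-check LLRs (one per connected edge);
    alpha: scaling factor used in the current iteration."""
    signs = np.sign(v2c)
    mags = np.abs(v2c)
    total_sign = np.prod(signs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]   # smallest and second-smallest magnitudes
    out = np.empty_like(v2c, dtype=float)
    for i in range(len(v2c)):
        # Exclude the edge's own incoming message: the edge holding the overall
        # minimum uses the second minimum, every other edge uses the minimum.
        mag = min2 if i == order[0] else min1
        out[i] = alpha * total_sign * signs[i] * mag
    return out

# Illustrative scaling-factor sequence: the starting value and per-iteration
# step below are placeholders; the paper obtains them offline with density
# evolution and Nelder-Mead optimization.
alpha0, step, max_iter = 0.75, 0.01, 50
alphas = [min(1.0, alpha0 + it * step) for it in range(max_iter)]
```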



Related research

Min-Sum decoding is widely used for decoding LDPC codes in many modern digital video broadcasting receivers because of its relatively low complexity and robustness against quantization error. However, the sub-optimal performance of Min-Sum decoding affects the integrated performance of wireless receivers. In this paper, we present the idea of adapting the scaling factor of the Min-Sum decoder across iterations through a simple approximation. For ease of implementation, the scaling factor is changed in a staircase fashion. The stair step is designed to optimize the decoder performance and the storage required for its different values. The proposed variable scaling factor algorithm produces a non-trivial improvement in the performance of Min-Sum decoding, as verified by simulation results.
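
A minimal sketch of such a staircase schedule is shown below, assuming illustrative values rather than the optimized ones from the paper; holding the factor constant for several iterations keeps the number of distinct values, and hence the storage they require, small.

```python
def staircase_alpha(iteration, alpha0=0.75, step=0.05, period=5, alpha_max=1.0):
    """Staircase scaling-factor schedule (illustrative placeholder values):
    hold alpha constant for `period` iterations, then raise it by `step`,
    never exceeding alpha_max."""
    return min(alpha_max, alpha0 + (iteration // period) * step)
```
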
Non-binary low-density parity-check codes are robust to various channel impairments. However, with existing decoding algorithms, decoder implementations are expensive because of their excessive computational complexity and memory usage. Based on combinatorial optimization, we present an approximation method for the check node processing. Simulation results demonstrate that our scheme has a small performance loss over the additive white Gaussian noise channel and the independent Rayleigh fading channel. Furthermore, the proposed reduced-complexity realization provides significant savings in hardware, so it yields a good performance-complexity tradeoff and can be implemented efficiently.
In this paper, we analyze the tradeoff between coding rate and asymptotic performance of a class of generalized low-density parity-check (GLDPC) codes constructed by including a certain fraction of generalized constraint (GC) nodes in the graph. The rate of the GLDPC ensemble is bounded using classical results on linear block codes, namely the Hamming bound and the Varshamov bound. We also study the impact of the decoding method used at GC nodes. To incorporate both bounded-distance (BD) and maximum-likelihood (ML) decoding at GC nodes into our analysis without resorting to multi-edge-type degree distributions (DDs), we propose the probabilistic peeling decoding (P-PD) algorithm, which models the decoding step at every GC node as an instance of a Bernoulli random variable with a successful decoding probability that depends on both the GC block code and its decoding algorithm. The asymptotic performance of P-PD over the binary erasure channel (BEC) can be efficiently predicted using standard techniques for LDPC codes such as density evolution (DE) or the differential equation method. Furthermore, for a class of GLDPC ensembles, we demonstrate that the simulated P-PD performance accurately predicts the actual performance of the GLDPC code under ML decoding at GC nodes. We illustrate our analysis for GLDPC code ensembles with regular and irregular DDs. In all cases, we show that a large fraction of GC nodes is required to reduce the original gap to capacity, but the optimal fraction is strictly smaller than one. We then consider techniques to further reduce the gap to capacity by means of random puncturing and the inclusion of a certain fraction of generalized variable nodes in the graph.
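
As a rough sketch of the P-PD idea described above (not the authors' implementation), one peeling sweep over a hypothetical graph representation could look as follows: each GC node tries to resolve its currently erased neighbours, and the attempt succeeds with a Bernoulli probability given by a caller-supplied function success_prob(k), standing in for the code- and decoder-dependent probability in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_pd_iteration(erased, gc_edges, success_prob):
    """One probabilistic peeling (P-PD) sweep over the BEC.
    erased: boolean array, True where a variable node is still erased;
    gc_edges: list of variable-node index lists, one per GC node;
    success_prob(k): probability that a GC node resolves k erasures."""
    progress = False
    for edges in gc_edges:
        erased_here = [v for v in edges if erased[v]]
        k = len(erased_here)
        if k == 0:
            continue
        if rng.random() < success_prob(k):   # Bernoulli trial models the GC decoder outcome
            for v in erased_here:
                erased[v] = False            # peel the resolved erasures
            progress = True
    return progress
```
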
Braided convolutional codes (BCCs) are a class of spatially coupled turbo-like codes that can be described by a $(2,3)$-regular compact graph. In this paper, we introduce a family of $(d_v,d_c)$-regular GLDPC codes with convolutional code constraints (CC-GLDPC codes), which form an extension of classical BCCs to arbitrary regular graphs. In order to characterize the performance in the waterfall and error floor regions, we perform an analysis of the density evolution thresholds as well as the finite-length ensemble weight enumerators and minimum distances of the ensembles. In particular, we consider various ensembles of overall rate $R=1/3$ and $R=1/2$ and study the trade-off between variable node degree and strength of the component codes. We also compare the results to corresponding classical LDPC codes with equal degrees and rates. It is observed that for the considered LDPC codes with variable node degree $d_v>2$, we can find a CC-GLDPC code with smaller $d_v$ that offers similar or better performance in terms of BP and MAP thresholds at the expense of a negligible loss in the minimum distance.
This paper considers density evolution for low-density parity-check (LDPC) and multi-edge type low-density parity-check (MET-LDPC) codes over the binary-input additive white Gaussian noise channel. We first analyze three single-parameter Gaussian approximations for density evolution and discuss their accuracy under several conditions, namely at low rates and with punctured and degree-one variable nodes. We observe that the assumption of a symmetric Gaussian distribution for the density-evolution messages is not accurate in the early decoding iterations, particularly at low rates and with punctured variable nodes; thus single-parameter Gaussian approximation methods produce very poor results in these cases. Based on these observations, we then introduce a new density evolution approximation algorithm for LDPC and MET-LDPC codes. Our method is a combination of full density evolution and a single-parameter Gaussian approximation, where we assume a symmetric Gaussian distribution only once the density-evolution messages closely follow one. Our method significantly improves the accuracy of code threshold estimation. Additionally, the proposed method significantly reduces the computational time of evaluating the code threshold compared to full density evolution, thereby making it more suitable for code design.
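
For context on the single-parameter Gaussian approximation discussed above, the sketch below tracks only the message mean for a (d_v, d_c)-regular ensemble on the BI-AWGN channel, using the common closed-form approximation of the phi function; it is a generic textbook-style recursion, not the hybrid algorithm proposed in the paper, and the constants and thresholds are the usual illustrative choices.

```python
import numpy as np

# Messages are assumed symmetric Gaussian N(m, 2m), so only the mean m is tracked.
ALPHA, BETA, GAMMA = -0.4527, 0.0218, 0.86

def phi(x):
    # Closed-form approximation of phi(x) = 1 - E[tanh(u/2)], u ~ N(x, 2x);
    # reasonable for moderate x (a full implementation switches to a tail
    # formula for large x).
    return np.exp(ALPHA * x**GAMMA + BETA)

def phi_inv(y):
    return ((np.log(y) - BETA) / ALPHA) ** (1.0 / GAMMA)

def de_gaussian_approx(sigma, dv=3, dc=6, max_iter=1000, target=100.0):
    """Return True if the mean LLR grows without bound (decoding is
    predicted to succeed) at noise standard deviation sigma."""
    m_ch = 2.0 / sigma**2          # mean of the channel LLR for BI-AWGN
    m_u = 0.0                      # mean of check-to-variable messages
    for _ in range(max_iter):
        m_v = m_ch + (dv - 1) * m_u                 # variable-node update
        p = min(phi(m_v), 1.0)                      # clip so that phi(0) <= 1
        y = 1.0 - (1.0 - p) ** (dc - 1)             # check-node update (probability domain)
        if y <= 0.0:                                # numerically error-free
            return True
        m_u = phi_inv(y)
        if m_u > target:                            # means diverge: below threshold
            return True
    return False
```

Sweeping sigma and recording the largest value for which the recursion still diverges gives the Gaussian-approximation threshold estimate (roughly sigma ≈ 0.87 for the default (3,6)-regular ensemble).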