
Performance Analysis of Raptor Codes under Maximum-Likelihood (ML) Decoding

Added by Peng Wang
Publication date: 2015
Research language: English





Raptor codes have been widely used in many multimedia broadcast/multicast applications. However, our understanding of Raptor codes is still incomplete due to the limited theoretical work on their performance analysis, particularly under maximum-likelihood (ML) decoding, which provides an optimal benchmark against which other decoding schemes can be compared. For the first time, this paper provides an upper bound and a lower bound on the packet error performance of Raptor codes under ML decoding, measured by the probability that all source packets can be successfully decoded by a receiver from a given number of successfully received coded packets. Simulations are conducted to validate the accuracy of the analysis. More specifically, Raptor codes with different degree distributions and pre-coders are evaluated using the derived bounds, which are shown to be highly accurate.
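As a concrete illustration of the success criterion described above: over an erasure channel, ML decoding recovers all K source packets exactly when the GF(2) matrix formed by the encoding vectors of the received coded packets has rank K. The sketch below is a minimal Monte Carlo check of that criterion for an LT-style code without a precode; the degree distribution, source block size and trial count are illustrative assumptions, not the parameters or bounds derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]        # move pivot row into place
        for r in range(rows):                      # clear the column elsewhere
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def lt_encoding_vector(k, degree_dist, rng):
    """One coded packet's encoding vector: draw a degree, then that many distinct sources."""
    d = rng.choice(np.arange(1, len(degree_dist) + 1), p=degree_dist)
    v = np.zeros(k, dtype=np.uint8)
    v[rng.choice(k, size=d, replace=False)] = 1
    return v

def failure_prob(k, n_received, degree_dist, trials=2000, rng=rng):
    """Monte Carlo estimate of P(ML decoding fails | n_received coded packets)."""
    fails = 0
    for _ in range(trials):
        G = np.array([lt_encoding_vector(k, degree_dist, rng) for _ in range(n_received)])
        if gf2_rank(G) < k:    # ML fails iff the received submatrix is rank-deficient
            fails += 1
    return fails / trials

# toy degree distribution P(degree = 1..5) -- NOT the one analysed in the paper
k = 32
dist = np.array([0.05, 0.5, 0.25, 0.1, 0.1])
for overhead in range(0, 9, 2):
    print(k + overhead, failure_prob(k, k + overhead, dist))
```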



Related Research

A complexity-adaptive tree search algorithm is proposed for $\boldsymbol{G}_N$-coset codes that implements maximum-likelihood (ML) decoding by using a successive decoding schedule. The average complexity is close to that of successive cancellation (SC) decoding for practical error rates when applied to polar codes and short Reed-Muller (RM) codes, e.g., block lengths up to $N=128$. By modifying the algorithm to limit the worst-case complexity, one obtains a near-ML decoder for longer RM codes and their subcodes. Unlike other bit-flip decoders, no outer code is needed to terminate decoding. The algorithm can thus be applied to modified $\boldsymbol{G}_N$-coset code constructions with dynamic frozen bits. One advantage over sequential decoders is that there is no need to optimize a separate parameter.
A low-complexity tree search approach is presented that achieves the maximum-likelihood (ML) decoding performance of Reed-Muller (RM) codes. The proposed approach generates a bit-flipping tree that is traversed to find the ML decoding result by performing successive-cancellation decoding after each node visit. A depth-first search (DFS) and a breadth-first search (BFS) scheme are developed and a log-likelihood-ratio-based bit-flipping metric is utilized to avoid redundant node visits in the tree. Several enhancements to the proposed algorithm are presented to further reduce the number of node visits. Simulation results confirm that the BFS scheme provides a lower average number of node visits than the existing tree search approach to decode RM codes.
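To make the search structure above concrete, the sketch below enumerates bit-flip candidates in best-first order using a simple LLR-magnitude metric (the sum of |LLR| over flipped positions). The SC decoder and the codeword/CRC check are left as hypothetical callbacks (`sc_decode`, `is_codeword`); the metric and traversal are a simplified stand-in for the schemes in the works above, not their exact algorithms.

```python
import heapq

def bit_flip_search(llrs, sc_decode, is_codeword, max_visits=64):
    """Best-first exploration of bit-flip sets, cheapest flips first.

    `sc_decode(flips)` is assumed to run successive-cancellation decoding
    while inverting the hard decisions at the given positions, and
    `is_codeword` to check validity (e.g. via a CRC). Both are placeholders.
    """
    n = len(llrs)
    order = sorted(range(n), key=lambda i: abs(llrs[i]))  # least reliable first
    heap = [(0.0, -1, ())]           # (metric, last flipped rank, flip set)
    visits = 0
    while heap and visits < max_visits:
        metric, last, flips = heapq.heappop(heap)
        visits += 1
        cand = sc_decode(flips)
        if is_codeword(cand):
            return cand, visits
        # children: extend the flip set with one not-yet-considered position,
        # so every flip set is enumerated exactly once
        for r in range(last + 1, n):
            pos = order[r]
            heapq.heappush(heap, (metric + abs(llrs[pos]), r, flips + (pos,)))
    return None, visits
```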
We formulate maximum-likelihood (ML) channel decoding as a quadratic unconstrained binary optimization (QUBO) problem and simulate the decoding on a current commercial quantum annealing machine, the D-Wave 2000Q. We prepared two implementations with Ising model formulations, generated from the generator matrix and the parity-check matrix, respectively. We evaluated these implementations of ML decoding for low-density parity-check (LDPC) codes, analyzing the number of spins and connections and comparing the decoding performance with belief propagation (BP) decoding and brute-force ML decoding on classical computers. The results show that these implementations are superior to BP decoding for relatively short codes; while performance deteriorates for longer codes, the implementation based on the parity-check matrix formulation still works up to a block length of 1k with fewer spins and connections than the generator-matrix formulation, owing to the sparseness of LDPC parity-check matrices.
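For context, the classical baseline mentioned above, brute-force ML decoding, is an exhaustive search over all 2^k messages. Below is a minimal hard-decision sketch over a binary symmetric channel, with a small assumed generator matrix purely for illustration (not an LDPC code, and not the QUBO/annealing formulation itself).

```python
import numpy as np
from itertools import product

def brute_force_ml(G, y_hard):
    """Brute-force ML decoding over a BSC: try all 2^k messages, encode with
    the k x n GF(2) generator matrix G, and return the codeword closest in
    Hamming distance to the hard-decision word y_hard. Only feasible for small k."""
    k, n = G.shape
    best, best_dist = None, n + 1
    for msg in product((0, 1), repeat=k):
        cw = (np.array(msg, dtype=np.uint8) @ G) % 2
        dist = int(np.count_nonzero(cw != y_hard))
        if dist < best_dist:
            best, best_dist = cw, dist
    return best, best_dist

# tiny example with an assumed (7,4) Hamming-style generator matrix
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
y = np.array([1, 0, 1, 1, 0, 1, 1], dtype=np.uint8)
print(brute_force_ml(G, y))
```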
CA-Polar codes have been selected for all control channel communications in 5G NR, but accurate, computationally feasible decoders are still under development. Here we report the performance of a recently proposed class of optimally precise maximum-likelihood (ML) decoders, GRAND, that can be used with any block code. Since published theoretical results indicate that GRAND is computationally efficient for short, high-rate codes, and 5G CA-Polar codes are in that class, here we consider GRAND's utility for decoding them. Simulation results indicate that decoding of 5G CA-Polar codes by GRAND, and by a simple soft-detection variant, is a practical possibility.
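The noise-guessing idea behind GRAND can be sketched for the hard-detection case: instead of searching the codebook, the decoder tests putative noise patterns in order of decreasing likelihood (increasing Hamming weight on a BSC) and stops at the first candidate with a zero syndrome. The code below is a generic sketch for any linear block code with parity-check matrix H, not the CA-Polar-specific or soft-detection variant evaluated in the paper.

```python
import numpy as np
from itertools import combinations

def grand_hard(y, H, max_weight=3):
    """Hard-detection noise guessing: test error patterns in order of
    increasing Hamming weight and return the first candidate y ^ e whose
    syndrome w.r.t. H is zero. The weight cap and plain weight ordering
    are simplifying assumptions for this sketch."""
    n = len(y)
    for w in range(max_weight + 1):
        for flips in combinations(range(n), w):
            e = np.zeros(n, dtype=np.uint8)
            e[list(flips)] = 1
            cand = y ^ e
            if not np.any((H @ cand) % 2):   # zero syndrome => valid codeword
                return cand, w
    return None, None

# tiny demo with an assumed (7,4) Hamming parity-check matrix
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
y = np.array([1, 0, 1, 1, 0, 1, 1], dtype=np.uint8)
print(grand_hard(y, H))
```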
Polar codes represent one of the major recent breakthroughs in coding theory and, because of their attractive features, they have been selected for the upcoming 5G standard. As such, a lot of attention has been devoted to the development of decoding algorithms with good error performance and efficient hardware implementation. One of the leading candidates in this regard is successive-cancellation list (SCL) decoding. However, its hardware implementation requires a large amount of memory. Recently, a partitioned SCL (PSCL) decoder has been proposed to significantly reduce the memory consumption. In this paper, we examine the paradigm of PSCL decoding from both theoretical and practical standpoints: (i) by changing the construction of the code, we are able to improve the performance at no additional computational, latency or memory cost, (ii) we present an optimal scheme to allocate cyclic redundancy checks (CRCs), and (iii) we provide an upper bound on the list size that allows MAP performance to be achieved.