
On Error-Correction Performance and Implementation of Polar Code List Decoders for 5G

Added by Furkan Ercan
Publication date: 2017
Language: English





Polar codes are a class of capacity-achieving error-correcting codes that has recently been selected for the next generation of wireless communication standards (5G). Polar code decoding algorithms have evolved in various directions, striking different balances between error-correction performance, speed, and complexity. Successive-cancellation list (SCL) decoding and its variants constitute a powerful, well-studied set of algorithms under constant improvement. At the same time, different implementation approaches provide a wide range of area occupations and latency results. 5G puts a focus on improved error-correction performance, high throughput, and low power consumption, yet a comprehensive study considering all these metrics is currently lacking in the literature. In this work, we evaluate SCL-based decoding algorithms in terms of error-correction performance and compare them to low-density parity-check (LDPC) codes. Moreover, we consider various decoder implementations, for both polar and LDPC codes, and compare their area occupation and power and energy consumption when targeting short code lengths and rates. Our work shows that among SCL-based decoders, the partitioned SCL (PSCL) decoder provides the lowest area occupation and power consumption, whereas the fast simplified SCL (Fast-SSCL) decoder yields the lowest energy consumption. Compared to LDPC decoder architectures, the different SCL implementations occupy up to 17.1x less area, dissipate up to 7.35x less power, and consume up to 26x less energy.
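For readers unfamiliar with SCL decoding, the following is a minimal sketch of its core list-management step, using the standard hardware-friendly LLR-based path metric. The LLR values are stand-ins: a real decoder recomputes them per path through the SC recursions.

```python
import numpy as np

# Hedged sketch of one SCL decision step: every surviving path forks
# into two candidates (bit = 0 or bit = 1). With the LLR-based path
# metric, following the LLR sign costs nothing, while deciding against
# it adds |LLR|. Only the L best (lowest-metric) paths survive.

def fork_and_prune(path_metrics, llrs, list_size):
    """One SCL decision step: fork each path, keep the L best."""
    agree = path_metrics                     # decision follows sign(llr)
    disagree = path_metrics + np.abs(llrs)   # decision flips the sign
    candidates = np.concatenate([agree, disagree])
    return np.sort(candidates)[:list_size]   # surviving path metrics

pm = np.zeros(4)                             # 4 surviving paths
llr = np.array([2.1, -0.3, 0.8, -1.7])       # per-path decision LLRs (stand-ins)
print(fork_and_prune(pm, llr, list_size=4))
```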



Related research

SC-Flip (SCF) is a low-complexity polar code decoding algorithm with improved performance, and is an alternative to high-complexity cyclic redundancy check (CRC)-aided SC-List (CA-SCL) decoding. However, the performance improvement of SCF is limited, since it can correct at most one channel error ($\omega=1$). The dynamic SCF (DSCF) algorithm addresses this problem by targeting multiple errors ($\omega \geq 1$), but it requires logarithmic and exponential computations, which make it infeasible for practical applications. In this work, we propose simplifications and approximations that make DSCF practically feasible. First, we reduce the transcendental computations of DSCF decoding to a constant approximation. Then, we show how to incorporate special-node decoding techniques into the DSCF algorithm, creating Fast-DSCF decoding. Next, we reduce the search span within the special nodes to further reduce the computational complexity. Following that, we describe a hardware architecture for the Fast-DSCF decoder, in which we introduce additional simplifications such as metric normalization and sorter length reduction. All the simplifications and approximations are shown to have minimal impact on the error-correction performance, and the reported Fast-DSCF decoder is the only SCF-based architecture that can correct multiple errors. The Fast-DSCF decoders synthesized using TSMC $65$ nm CMOS technology achieve throughputs of $1.25$, $1.06$, and $0.93$ Gbps for $\omega \in \{1,2,3\}$, respectively. Compared to state-of-the-art fast CA-SCL decoders with equivalent FER performance, the proposed decoders are up to $5.8\times$ more area-efficient. Finally, observations on energy dissipation indicate that Fast-DSCF is more energy-efficient than its CA-SCL-based counterparts.
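To illustrate what "reducing the transcendental computations to a constant approximation" can mean in practice, here is a minimal sketch comparing the exact log/exp term of the DSCF flip metric with a thresholded constant. The parameter values A, T, and BETA are illustrative placeholders, not the paper's tuned numbers.

```python
import numpy as np

# Hedged sketch: the exact DSCF metric adds f(x) = (1/a)*ln(1 + exp(-a*x))
# for each decoded bit with LLR magnitude x (a is the perturbation
# parameter). A hardware-friendly alternative replaces this transcendental
# term with a constant for small LLRs and zero otherwise. All constants
# below are assumed values for illustration only.

A = 0.3       # perturbation parameter alpha (assumed)
T = 10.0      # LLR-magnitude threshold (assumed)
BETA = 1.5    # constant replacing the transcendental term (assumed)

def f_exact(x: np.ndarray) -> np.ndarray:
    """Exact transcendental term of the DSCF path metric."""
    return np.log1p(np.exp(-A * x)) / A

def f_approx(x: np.ndarray) -> np.ndarray:
    """Constant approximation: BETA for small LLRs, 0 otherwise."""
    return np.where(x <= T, BETA, 0.0)

llr_mags = np.linspace(0.0, 30.0, 7)
print(np.c_[llr_mags, f_exact(llr_mags), f_approx(llr_mags)])
```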
This work identifies information-theoretic quantities that are closely related to the required list size for successive cancellation list (SCL) decoding to implement maximum-likelihood decoding. It also provides an approximation for these quantities that can be computed efficiently for very long codes. There is a concentration around the mean of the logarithm of the required list size for sufficiently large block lengths. We further provide a simple method to estimate the mean via density evolution for the binary erasure channel (BEC). Simulation results for the binary-input additive white Gaussian noise channel as well as the BEC demonstrate the accuracy of the mean estimate. A modified Reed-Muller code with dynamic frozen bits performs very close to the random coding union (RCU) bound down to the block error rate of $10^{-5}$ under SCL decoding with list size $L=128$ when the block length is $N=128$. The analysis shows how to modify the design to improve the performance when a more practical list size, e.g., $L=32$, is adopted while keeping the performance with $L=128$ unchanged. For the block length of $N=512$, a design performing within $0.4$ dB from the RCU bound down to the block error rate of $10^{-6}$ under an SCL decoder with list size $L=1024$ is provided. The design is modified using the new guidelines so that the performance improves with practical list sizes, e.g., $L \in \{8,32,128\}$, outperforming 5G designs.
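As context for the density-evolution estimate on the BEC: for polar codes over the BEC, density evolution is exact and reduces to a one-line erasure-probability recursion over the synthetic bit-channels. Below is a minimal sketch of that standard recursion, not the paper's list-size estimator itself.

```python
import numpy as np

# Minimal sketch of density evolution for polar codes on the BEC.
# For a BEC with erasure probability eps, the synthetic bit-channel
# erasure probabilities follow the exact recursion
#   z- = 2z - z^2   (degraded branch)
#   z+ = z^2        (upgraded branch)

def bec_density_evolution(eps: float, n: int) -> np.ndarray:
    """Erasure probabilities of all N = 2**n synthetic bit-channels."""
    z = np.array([eps])
    for _ in range(n):
        z = np.concatenate([2 * z - z**2, z**2])
    return z

z = bec_density_evolution(eps=0.5, n=7)   # N = 128, as in the abstract
print(f"reliable channels (z < 1e-3): {np.sum(z < 1e-3)}")
print(f"best: {z.min():.3e}  worst: {z.max():.3e}")
```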
This paper presents an efficient hardware design approach for list successive cancellation (LSC) decoding of polar codes. By applying a path-overlapping scheme, the $l$ ($l > 1$) instances of the successive cancellation (SC) decoder required for LSC decoding with list size $l$ can be cut down to only one. This results in a dramatic reduction of the hardware complexity without any decoding performance loss. We also develop novel approaches to reduce the latency associated with the pipeline scheme. Simulation results show that with the proposed design approach, hardware efficiency increases significantly over recently proposed LSC decoders.
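The saving comes from time-multiplexing a single SC processing core across the $l$ list paths instead of instantiating $l$ copies. A toy scheduling sketch of that idea follows; the SCCore class and its dummy arithmetic are hypothetical stand-ins, not the paper's architecture.

```python
from dataclasses import dataclass

# Toy sketch of the resource-sharing idea behind path overlapping:
# one SC core is reused sequentially over the l list paths rather
# than replicated l times.

@dataclass
class SCCore:
    ops: int = 0  # counts core invocations to make the sharing visible

    def compute_llr(self, path_state: list, bit_index: int) -> float:
        """Stand-in for one SC LLR computation on the shared core."""
        self.ops += 1
        return float(bit_index - len(path_state))  # dummy arithmetic

core = SCCore()
paths = [[], [0], [1], [0, 1]]                   # l = 4 list paths
llrs = [core.compute_llr(p, 5) for p in paths]   # one core, reused l times
print(core.ops, llrs)                            # 4 invocations, 4 LLRs
```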
The performance of short polar codes under successive cancellation (SC) and SC list (SCL) decoding is analyzed for the case where the decoder messages are coarsely quantized. This setting is of particular interest for applications requiring low-complexity energy-efficient transceivers (e.g., internet-of-things or wireless sensor networks). We focus on the extreme case where the decoder messages are quantized with 3 levels. We show how under SCL decoding quantized log-likelihood ratios lead to a large inaccuracy in the calculation of path metrics, resulting in considerable performance losses with respect to an unquantized SCL decoder. We then introduce two novel techniques which improve the performance of SCL decoding with coarse quantization. The first technique consists of a modification of the final decision step of SCL decoding, where the selected codeword is the one maximizing the maximum-likelihood decoding metric within the final list. The second technique relies on statistical knowledge about the reliability of the bit estimates, obtained through a suitably modified density evolution analysis, to improve the list construction phase, yielding a higher probability of having the transmitted codeword in the list. The effectiveness of the two techniques is demonstrated through simulations.
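As a concrete illustration of the first technique: for BPSK over an AWGN channel, maximizing the ML decoding metric over the final list amounts to picking the candidate whose $\pm 1$ image correlates best with the channel LLRs. A minimal sketch with illustrative inputs:

```python
import numpy as np

# Hedged sketch of an ML-based final decision over an SCL list:
# instead of returning the path with the best (quantized) path metric,
# pick the list codeword maximizing the ML metric. For BPSK over AWGN
# this is the correlation between the candidate's +/-1 image and the
# channel LLRs. Inputs below are illustrative.

def ml_pick(list_codewords: np.ndarray, channel_llrs: np.ndarray) -> np.ndarray:
    """Return the list entry maximizing the ML metric."""
    signs = 1.0 - 2.0 * list_codewords   # bit 0 -> +1, bit 1 -> -1
    scores = signs @ channel_llrs        # correlation per candidate
    return list_codewords[np.argmax(scores)]

cands = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 1, 0, 0]])
llrs = np.array([3.2, -0.4, -2.7, 1.1])  # unquantized channel LLRs
print(ml_pick(cands, llrs))              # -> [0 1 1 0]
```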
Polar codes represent one of the major recent breakthroughs in coding theory and, because of their attractive features, they have been selected for the upcoming 5G standard. As such, a lot of attention has been devoted to the development of decoding algorithms with good error-correction performance and efficient hardware implementations. One of the leading candidates in this regard is successive-cancellation list (SCL) decoding. However, its hardware implementation requires a large amount of memory. Recently, a partitioned SCL (PSCL) decoder has been proposed to significantly reduce the memory consumption. In this paper, we examine the paradigm of PSCL decoding from both theoretical and practical standpoints: (i) by changing the construction of the code, we are able to improve the performance at no additional computational, latency, or memory cost; (ii) we present an optimal scheme to allocate cyclic redundancy checks (CRCs); and (iii) we provide an upper bound on the list size that allows MAP performance.
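A back-of-the-envelope sketch of where the PSCL memory saving comes from: list memory is needed only inside the length-$N/P$ partitions, while the top levels keep a single copy. The cost model below is deliberately simplified (real decoders store per-stage LLRs and partial sums with varying bit-widths).

```python
# Simplified memory model: a standard SCL decoder keeps L copies of
# the length-N path memory, while a PSCL decoder with P partitions
# needs list memory only inside a length-N/P sub-block plus one shared
# copy at the top levels.

def scl_memory(N: int, L: int) -> int:
    return L * N                  # L full-length path memories

def pscl_memory(N: int, L: int, P: int) -> int:
    return L * (N // P) + N       # list memory confined to one partition

N, L = 1024, 8
for P in (2, 4, 8):
    print(f"P={P}: SCL={scl_memory(N, L)}  PSCL={pscl_memory(N, L, P)}")
```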