
Approaching the Finite Blocklength Capacity within 0.025dB by Short Polar Codes and CRC-Aided Hybrid Decoding

Published by: Jinnan Piao
Publication date: 2019
Research field: Information Engineering
Language: English





In this letter, we explore the performance limits of short polar codes and find that the maximum likelihood (ML) performance of a simple CRC-polar concatenated scheme can approach the finite blocklength capacity. Then, in order to approach the ML performance with a low average complexity, a CRC-aided hybrid decoding (CA-HD) algorithm is proposed, whose decoding process is divided into two steps. In the first step, the received sequence is decoded by adaptive successive cancellation list (ADSCL) decoding. In the second step, CRC-aided sphere decoding with a reasonable initial radius is used to decode the received sequence. To obtain this radius, the CRC bits of the surviving paths in ADSCL are recalculated and the minimum Euclidean distance between the surviving paths and the received sequence is chosen as the initial radius. The simulation results show that CA-HD can achieve within about $0.025$ dB of the finite blocklength capacity at a block error rate of $10^{-3}$ with code length $128$ and code rate $1/2$.
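To make the second decoding step more concrete, the sketch below shows one way the initial sphere-decoding radius can be obtained from the ADSCL surviving paths: each path's CRC bits are recalculated, the path is re-encoded and BPSK-modulated, and the minimum Euclidean distance to the received sequence is kept. It is a minimal illustration rather than the authors' implementation: the ADSCL and sphere decoders themselves are omitted, the toy CRC polynomial, information-bit positions and example parameters are assumptions, and the polar transform skips the bit-reversal permutation.

    import numpy as np

    def crc_bits(info, poly):
        # CRC remainder of `info` for generator `poly`, both MSB-first bit lists
        # (`poly` includes the leading coefficient).
        reg = list(info) + [0] * (len(poly) - 1)
        for i in range(len(info)):
            if reg[i]:
                for j, p in enumerate(poly):
                    reg[i + j] ^= p
        return reg[len(info):]

    def polar_transform(u):
        # x = u * F^{\otimes n} with the Arikan kernel F = [[1, 0], [1, 1]];
        # the bit-reversal permutation is omitted for brevity.
        x = np.array(u, dtype=int)
        step = 1
        while step < len(x):
            for i in range(0, len(x), 2 * step):
                x[i:i + step] ^= x[i + step:i + 2 * step]
            step *= 2
        return x

    def initial_radius(rx, survivors, poly, info_positions, N):
        # Recalculate the CRC of every surviving ADSCL path, re-encode and
        # BPSK-modulate it, and return the minimum Euclidean distance to the
        # received sequence: the initial radius for CRC-aided sphere decoding.
        radius = np.inf
        for info in survivors:
            u = np.zeros(N, dtype=int)
            u[info_positions] = np.concatenate([info, crc_bits(info, poly)])
            s = 1.0 - 2.0 * polar_transform(u)      # BPSK mapping: 0 -> +1, 1 -> -1
            radius = min(radius, float(np.linalg.norm(rx - s)))
        return radius

    # Toy example: N = 8, two 2-bit surviving paths, 3 CRC bits from x^3 + x + 1.
    rx = np.random.default_rng(0).normal(size=8)
    print(initial_radius(rx, [[0, 1], [1, 0]], [1, 0, 1, 1],
                         np.array([3, 4, 5, 6, 7]), 8))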




Read also

This paper identifies convolutional codes (CCs) used in conjunction with a CC-specific cyclic redundancy check (CRC) code as a promising paradigm for short blocklength codes. The resulting CRC-CC concatenated code naturally permits the use of serial list Viterbi decoding (SLVD) to achieve maximum-likelihood decoding. The CC of interest is of rate-$1/\omega$ and is either zero-terminated (ZT) or tail-biting (TB). For CRC-CC concatenated code designs, we show how to find the optimal CRC polynomial for a given ZTCC or TBCC. Our complexity analysis reveals that SLVD decoding complexity is a function of the terminating list rank, which converges to one at high SNR. This behavior allows the performance gains of SLVD to be achieved with a small increase in average complexity at the SNR operating point of interest. With a sufficiently large CC constraint length, the performance of the CRC-CC concatenated code under SLVD approaches the random-coding union (RCU) bound as the CRC size is increased, while average decoding complexity does not increase significantly. TB encoding further reduces the backoff from the RCU bound by avoiding the termination overhead. As a result, several CRC-TBCC codes outperform the RCU bound at moderate SNR values while permitting decoding with relatively low complexity.
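As a rough illustration of the serial aspect of SLVD, the sketch below checks likelihood-ranked trellis paths against the CRC one at a time and reports the terminating list rank. The list Viterbi search that produces the ranked candidates is not implemented here, and the toy CRC polynomial and candidate paths in the example are assumptions, not values from the paper.

    def crc_check(bits, poly):
        # True when `bits` (message followed by its CRC) leaves a zero remainder
        # under the MSB-first generator polynomial `poly`.
        reg = list(bits)
        for i in range(len(bits) - len(poly) + 1):
            if reg[i]:
                for j, p in enumerate(poly):
                    reg[i + j] ^= p
        return not any(reg[-(len(poly) - 1):])

    def serial_list_decode(ranked_candidates, poly):
        # Examine candidate paths in decreasing-likelihood order and stop at the
        # first CRC-valid one; the returned index is the terminating list rank.
        for rank, cand in enumerate(ranked_candidates, start=1):
            if crc_check(cand, poly):
                return cand, rank
        return None, len(ranked_candidates)          # no CRC-valid path found

    # Toy example with CRC polynomial x^3 + x + 1: the most likely path fails
    # the CRC, so decoding terminates at list rank 2.
    poly = [1, 0, 1, 1]
    candidates = [[1, 1, 1, 1, 0, 0, 0], [1, 0, 1, 1, 0, 0, 0]]
    print(serial_list_decode(candidates, poly))      # ([1, 0, 1, 1, 0, 0, 0], 2)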
Yuejun Wei, Ming Jiang, Wen Chen, 2020
Turbo codes and CRC codes are usually decoded separately, as the serially concatenated inner codes and outer codes respectively. In this letter, we propose a hybrid decoding algorithm for turbo-CRC codes, where the outer CRC codes are not used for error detection but as an assistance to improve the error-correction performance. Two independent decoding processes, iterative decoding and reliability-based decoding, are carried out in a hybrid schedule, which can efficiently decode the two different codes as an entire codeword. By introducing an efficient error-detecting method based on the normalized Euclidean distance without a CRC check, a significant gain can be obtained by the hybrid decoding method without loss of the error-detection ability.
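The error-detection idea based on the normalized Euclidean distance can be sketched as follows; the abstract does not give the exact normalization or threshold, so the BPSK re-modulation and the threshold parameter below are assumptions used only for illustration.

    import numpy as np

    def distance_detect(rx, hard_bits, threshold):
        # Re-modulate the current hard-decision estimate (BPSK assumed) and accept
        # it when the per-symbol squared Euclidean distance to the received
        # sequence falls below an empirically chosen threshold, with no CRC check.
        s = 1.0 - 2.0 * np.asarray(hard_bits, dtype=float)
        d = float(np.sum((np.asarray(rx, dtype=float) - s) ** 2)) / len(rx)
        return d < threshold

Such a test lets the hybrid schedule decide when to accept an estimate without consuming the CRC for detection, which is what frees the CRC to assist error correction.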
A deep-learning-aided successive-cancellation list (DL-SCL) decoding algorithm for polar codes is introduced, with deep-learning-aided successive-cancellation (DL-SC) decoding being a specific case of it. The DL-SCL decoder works by allowing additional rounds of SCL decoding when the first SCL decoding attempt fails, using a novel bit-flipping metric. The proposed bit-flipping metric exploits the inherent relations between the information bits in polar codes that are represented by a correlation matrix. The correlation matrix is then optimized using emerging deep-learning techniques. Performance results on a polar code of length 128 with 64 information bits concatenated with a 24-bit cyclic redundancy check show that the proposed bit-flipping metric in the proposed DL-SCL decoder requires up to 66% fewer multiplications and up to 36% fewer additions, without any need to perform transcendental functions, while providing almost the same error-correction performance in comparison with the state of the art.
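The exact bit-flipping metric of the DL-SCL decoder is not reproduced in the abstract; the sketch below only illustrates the general idea of ranking flip candidates by combining each information bit's own reliability with correlation-weighted reliabilities of the other bits, with the correlation matrix assumed to come from an offline, deep-learning-based optimization.

    import numpy as np

    def flip_order(abs_llr, corr):
        # Combine each bit's own reliability |LLR| with the correlation-weighted
        # reliabilities of the other information bits, and try the least reliable
        # positions first in the additional SCL rounds.
        abs_llr = np.asarray(abs_llr, dtype=float)
        score = abs_llr + corr @ abs_llr
        return np.argsort(score)                     # ascending: flip these first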
Polar codes represent one of the major recent breakthroughs in coding theory and, because of their attractive features, they have been selected for the incoming 5G standard. As such, a lot of attention has been devoted to the development of decoding algorithms with good error performance and efficient hardware implementation. One of the leading candidates in this regard is represented by successive-cancellation list (SCL) decoding. However, its hardware implementation requires a large amount of memory. Recently, a partitioned SCL (PSCL) decoder has been proposed to significantly reduce the memory consumption. In this paper, we examine the paradigm of PSCL decoding from both theoretical and practical standpoints: (i) by changing the construction of the code, we are able to improve the performance at no additional computational, latency or memory cost, (ii) we present an optimal scheme to allocate cyclic redundancy checks (CRCs), and (iii) we provide an upper bound on the list size that allows MAP performance.
Polar codes are a class of channel capacity achieving codes that has been selected for the next generation of wireless communication standards. Successive-cancellation (SC) is the first proposed decoding algorithm, suffering from mediocre error-correction performance at moderate code length. In order to improve the error-correction performance of SC, two approaches are available: (i) SC-List decoding, which keeps a list of candidates by running a number of SC decoders in parallel, thus increasing the implementation complexity, and (ii) SC-Flip decoding, which relies on a single SC module and keeps the computational complexity close to SC. In this work, we propose the partitioned SC-Flip (PSCF) decoding algorithm, which outperforms SC-Flip in terms of error-correction performance and average computational complexity, leading to higher throughput and reduced energy consumption per codeword. We also introduce a partitioning scheme that best suits our PSCF decoder. Simulation results show that at an equivalent frame error rate, PSCF has up to $5\times$ less computational complexity than the SC-Flip decoder. At an equivalent average number of iterations, the error-correction performance of PSCF outperforms SC-Flip by up to $0.15$ dB at a frame error rate of $10^{-3}$.