
Successive Cancellation Decoding of Single Parity-Check Product Codes

Published by Mustafa Cemil Coskun
Publication date: 2017
Research field: Information engineering
Paper language: English





We introduce successive cancellation (SC) decoding of product codes (PCs) with single parity-check (SPC) component codes. Recursive formulas are derived, which resemble the SC decoding algorithm of polar codes. We analyze the error probability of SPC-PCs over the binary erasure channel under SC decoding. A bridge with the analysis of PCs introduced by Elias in 1954 is also established. Furthermore, bounds on the block error probability under SC decoding are provided, and compared to the bounds under the original decoding algorithm proposed by Elias. It is shown that SC decoding of SPC-PCs achieves a lower block error probability than Elias decoding.
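To make the recursive structure concrete, the sketch below computes polar-style bit-channel erasure probabilities over the BEC and a union bound on the block error probability under SC decoding. It is only an illustration under the assumption of the standard $2\times 2$ polar-kernel recursions $\epsilon \mapsto 2\epsilon - \epsilon^2$ and $\epsilon \mapsto \epsilon^2$, which the paper's formulas are said to resemble; it is not the exact SPC-PC recursion derived in the paper, and the information set in the example is hypothetical.

```python
# Minimal sketch, assuming the standard 2x2 polar-kernel erasure recursions over
# the BEC (check: e -> 2e - e^2, variable: e -> e^2); not the exact SPC-PC
# recursion derived in the paper.

def bit_channel_erasure_probs(eps, levels):
    """Erasure probabilities of the synthetic bit-channels obtained by applying
    the 2x2 kernel recursively `levels` times, starting from a BEC(eps)."""
    probs = [eps]
    for _ in range(levels):
        nxt = []
        for p in probs:
            nxt.append(2 * p - p * p)  # degraded ("check") channel
            nxt.append(p * p)          # upgraded ("variable") channel
        probs = nxt
    return probs

def sc_union_bound(eps, levels, info_positions):
    """Union bound on the SC block error probability over the BEC:
    sum of the erasure probabilities of the channels carrying information."""
    probs = bit_channel_erasure_probs(eps, levels)
    return sum(probs[i] for i in info_positions)

if __name__ == "__main__":
    eps, levels = 0.3, 3                                # toy length-8 construction
    probs = bit_channel_erasure_probs(eps, levels)
    info = [i for i, p in enumerate(probs) if p < eps]  # hypothetical information set
    print([round(p, 4) for p in probs])
    print("union bound on block error probability:",
          round(sc_union_bound(eps, levels, info), 4))
```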




Read also

A product code with single parity-check component codes can be described via the tools of a multi-kernel polar code, where the rows of the generator matrix are chosen according to the constraints imposed by the product code construction. Following this observation, successive cancellation decoding of such codes is introduced. In particular, the error probability of single parity-check product codes over binary memoryless symmetric channels under successive cancellation decoding is characterized. A bridge with the analysis of product codes introduced by Elias is also established for the binary erasure channel. Successive cancellation list decoding of single parity-check product codes is then described. For the provided example, simulations over the binary-input additive white Gaussian noise channel show that successive cancellation list decoding outperforms belief propagation decoding applied to the code graph. Finally, the performance of the concatenation of a product code with a high-rate outer code is investigated via distance spectrum analysis. Examples of concatenations performing within $0.7$ dB of the random coding union bound are provided.
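The generator-matrix view in the abstract above can be illustrated with the standard fact that the generator matrix of a product code is the Kronecker product of its component generator matrices. The sketch below builds the generator of a $(9,4)$ product code from two $(3,2)$ SPC component codes and checks the row/column parity constraints; the multi-kernel row-selection argument itself is not reproduced here, and the systematic component generator is just one convenient choice.

```python
# Sketch using the standard fact that a product code's generator matrix is the
# Kronecker product of its component generator matrices; the mapping to a
# multi-kernel polar code described in the abstract is not reproduced here.
from itertools import product
import numpy as np

# Systematic generator of the (3, 2) single parity-check code.
G_spc = np.array([[1, 0, 1],
                  [0, 1, 1]], dtype=int)

# Generator of the (9, 4) SPC product code (row-major codeword ordering).
G_pc = np.kron(G_spc, G_spc) % 2

# Every codeword, arranged as a 3x3 array, must have even parity in each row
# and each column.
for msg in product([0, 1], repeat=G_pc.shape[0]):
    cw = (np.array(msg) @ G_pc) % 2
    square = cw.reshape(3, 3)
    assert not (square.sum(axis=0) % 2).any()  # column SPC constraints
    assert not (square.sum(axis=1) % 2).any()  # row SPC constraints

print(G_pc)
print("all 16 codewords satisfy the row/column SPC constraints")
```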
Polar codes are a class of channel-capacity-achieving codes that has been selected for the next generation of wireless communication standards. Successive cancellation (SC) was the first proposed decoding algorithm, but it suffers from mediocre error-correction performance at moderate code lengths. In order to improve the error-correction performance of SC, two approaches are available: (i) SC-List decoding, which keeps a list of candidates by running a number of SC decoders in parallel, thus increasing the implementation complexity, and (ii) SC-Flip decoding, which relies on a single SC module and keeps the computational complexity close to SC. In this work, we propose the partitioned SC-Flip (PSCF) decoding algorithm, which outperforms SC-Flip in terms of error-correction performance and average computational complexity, leading to higher throughput and reduced energy consumption per codeword. We also introduce a partitioning scheme that best suits our PSCF decoder. Simulation results show that, at equivalent frame error rate, PSCF has up to $5\times$ less computational complexity than the SC-Flip decoder. At an equivalent average number of iterations, the error-correction performance of PSCF outperforms SC-Flip by up to $0.15$ dB at a frame error rate of $10^{-3}$.
A deep-learning-aided successive-cancellation list (DL-SCL) decoding algorithm for polar codes is introduced, with deep-learning-aided successive-cancellation (DL-SC) decoding being a specific case of it. The DL-SCL decoder works by allowing additional rounds of SCL decoding when the first SCL decoding attempt fails, using a novel bit-flipping metric. The proposed bit-flipping metric exploits the inherent relations between the information bits in polar codes, which are represented by a correlation matrix. The correlation matrix is then optimized using emerging deep-learning techniques. Performance results on a polar code of length 128 with 64 information bits, concatenated with a 24-bit cyclic redundancy check, show that the proposed bit-flipping metric in the proposed DL-SCL decoder requires up to 66% fewer multiplications and up to 36% fewer additions, without any need to perform transcendental functions, while providing almost the same error-correction performance as the state of the art.
Fast SC decoding overcomes the latency caused by the serial nature of SC decoding by identifying new nodes in the upper levels of the SC decoding tree and implementing their fast parallel decoders. In this work, we first present a novel sequence repetition node corresponding to a particular class of bit sequences. Most existing special node types are special cases of the proposed sequence repetition node. Then, a fast parallel decoder is proposed for this class of node. To further speed up the decoding process of general nodes outside this class, a threshold-based hard-decision-aided scheme is introduced. The threshold value that guarantees a given error-correction performance in the proposed scheme is derived theoretically. Analysis and hardware implementation results on a polar code of length $1024$ with code rates $1/4$, $1/2$, and $3/4$ show that our proposed algorithm reduces the required clock cycles by up to $8\%$ and leads to a $10\%$ improvement in the maximum operating frequency compared to state-of-the-art decoders, without tangibly altering the error-correction performance. In addition, using the proposed threshold-based hard-decision-aided scheme, the decoding latency can be further reduced by $57\%$ at $\mathrm{E_b}/\mathrm{N_0} = 5.0$ dB.
This work analyzes the latency of the simplified successive cancellation (SSC) decoding scheme for polar codes proposed by Alamdar-Yazdi and Kschischang. It is shown that, unlike conventional successive cancellation decoding, where latency is linear in the block length, the latency of SSC decoding is sublinear. More specifically, the latency of SSC decoding is $O(N^{1-1/\mu})$, where $N$ is the block length and $\mu$ is the scaling exponent of the channel, which captures the speed of convergence of the rate to capacity. Numerical results demonstrate the tightness of the bound and show that most of the latency reduction arises from the parallel decoding of subcodes of rate $0$ or $1$.