
Sphere Constraint based Enumeration Methods to Analyze the Minimum Weight Distribution of Polar Codes

Added by Jinnan Piao
Publication date: 2020
Language: English





In this paper, the minimum weight distributions (MWDs) of polar codes and concatenated polar codes are exactly enumerated according to the distance property of codewords. We first propose a sphere constraint based enumeration method (SCEM) to analyze the MWD of polar codes with moderate complexity. The SCEM exploits the distance property that all codewords with the same Hamming weight lie on a spherical shell. Then, based on the SCEM and the Plotkin construction of polar codes, a sphere constraint based recursive enumeration method (SCREM) is proposed to recursively calculate the MWD with lower complexity. Finally, we propose a parity-check SCEM (PC-SCEM) to analyze the MWD of concatenated polar codes by introducing the parity-check equations of the outer codes. Owing to the distance property of codewords, all three proposed methods exactly enumerate every codeword belonging to the MWD. The enumeration results show that the SCREM can enumerate the MWD of polar codes with code lengths up to $2^{14}$, and that the PC-SCEM can be used to optimize CRC-polar concatenated codes.
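As a concrete illustration of the quantity being enumerated, here is a minimal brute-force sketch (not the paper's SCEM) that lists every codeword of a toy polar code and tallies those at the minimum weight; the information set and code parameters below are hypothetical examples, not taken from the paper.

```python
# Brute-force MWD of a toy polar code; parameters are illustrative only.
import itertools
import numpy as np

def polar_transform(n):
    """Return G_N = F^{otimes n} for the Arikan kernel F = [[1, 0], [1, 1]]."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    return G

N, n = 16, 4
info_set = [7, 11, 13, 14, 15]          # hypothetical (N=16, K=5) information set
G = polar_transform(n)

weights = {}
for bits in itertools.product([0, 1], repeat=len(info_set)):
    u = np.zeros(N, dtype=np.uint8)
    u[info_set] = bits
    c = u @ G % 2                        # encode: c = u G_N over GF(2)
    w = int(c.sum())
    if w > 0:
        weights[w] = weights.get(w, 0) + 1

d_min = min(weights)
# Under BPSK, every codeword of Hamming weight w lies on the spherical shell
# of squared Euclidean distance 4w from the all-zero codeword -- the distance
# property that the sphere constraint based methods exploit.
print(f"d_min = {d_min}, number of minimum-weight codewords = {weights[d_min]}")
```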



Related research

Polar codes, with their explicit construction and recursive structure, are the latest breakthrough in the channel coding field owing to their low complexity and theoretically capacity-achieving property. Since polar codes can approach maximum-likelihood performance under successive cancellation list decoding (SCLD), their decoding performance can be evaluated by Bonferroni-type bounds (e.g., the union bound), in which the Hamming weight spectrum is used. In particular, the polar codewords with minimum Hamming weight (PC-MHW) are the most important term in that bound, because they make the major contribution to the decoding error pattern, particularly at high signal-to-noise ratio. In this work, we propose an efficient strategy for enumerating the PC-MHW and their number. By examining the inherent reason that PC-MHW can be generated by SCLD, we obtain some common features of PC-MHW captured by SCLD. Using these features, we introduce the concept of a zero-capacity bit-channel to obtain a tight upper bound on the number of PC-MHW, whose computational complexity is sublinear in the code length. Furthermore, we prove that the proposed upper bound is in fact the exact number of PC-MHW in most cases. Guided by the bound and its theoretical analysis, we devise an efficient SCLD-based method to enumerate PC-MHW which requires less than half the list size of existing methods.
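One well-known structural fact underlying this line of work is that row $i$ of the polar transform $F^{\otimes n}$ has Hamming weight $2^{\mathrm{wt}(i)}$, where $\mathrm{wt}(i)$ counts the ones in the binary expansion of $i$; for standard polar constructions the minimum distance equals the smallest such row weight over the information set, and each information row attaining it is itself a minimum-weight codeword, giving a cheap lower bound on their number. The sketch below (hypothetical information set, not the paper's bound) illustrates this.

```python
# Row-weight shortcut for the minimum distance of a polar code (illustrative).
def row_weight(i: int) -> int:
    """Hamming weight of row i of F^{otimes n}: 2 to the power popcount(i)."""
    return 1 << bin(i).count("1")

info_set = [7, 11, 13, 14, 15]           # hypothetical (N=16, K=5) information set
d_min = min(row_weight(i) for i in info_set)
rows_at_dmin = sum(1 for i in info_set if row_weight(i) == d_min)

print(f"d_min = {d_min}")                              # 8 for this example
print(f"at least {rows_at_dmin} minimum-weight codewords")
```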
Polar codes are a class of linear block codes that provably achieve channel capacity, and they have been selected as a coding scheme for $5^{\rm th}$ generation wireless communication standards. Successive-cancellation (SC) decoding of polar codes has mediocre error-correction performance at short to moderate codeword lengths; the SC-Flip decoding algorithm is one of the solutions proposed to overcome this issue. On the other hand, SC-Flip has a higher implementation complexity than SC due to the required log-likelihood ratio (LLR) selection and sorting process. Moreover, it requires a large number of iterations to reach good error-correction performance. In this work, we propose two techniques to improve the SC-Flip decoding algorithm for low-rate codes, based on observations of channel-induced error distributions. The first is a fixed index selection (FIS) scheme that avoids the substantial implementation cost of LLR selection and sorting with no loss in error-correction performance. The second is an enhanced index selection (EIS) criterion that improves the error-correction performance of SC-Flip decoding. A reduction of $24.6\%$ in the implementation cost of logic elements is estimated with the FIS approach, while simulation results show that EIS yields an error-correction performance improvement of up to $0.42$ dB at a target FER of $10^{-4}$.
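The following is a minimal sketch of the index-selection step only (not a full SC-Flip decoder): standard SC-Flip sorts the information-bit |LLR|s after a failed SC pass to pick flip candidates, whereas the FIS idea replaces that sort with a precomputed, channel-independent list. All names and values here are illustrative assumptions.

```python
# Candidate-index selection for SC-Flip: LLR sorting vs. a fixed table.
import numpy as np

def llr_based_candidates(llrs, info_set, t):
    """Pick the t least reliable information positions by sorting |LLR|."""
    info_llrs = [(abs(llrs[i]), i) for i in info_set]
    info_llrs.sort()                     # the sorter that FIS avoids in hardware
    return [i for _, i in info_llrs[:t]]

def fixed_index_candidates(fis_table, t):
    """FIS: reuse an offline-computed ranking of likely first-error positions."""
    return fis_table[:t]

rng = np.random.default_rng(0)
info_set = [7, 11, 13, 14, 15]           # hypothetical (N=16, K=5) information set
llrs = rng.normal(0, 4, size=16)         # stand-in for SC decision LLRs
fis_table = [7, 13, 11, 14, 15]          # hypothetical offline ranking

print(llr_based_candidates(llrs, info_set, t=3))
print(fixed_index_candidates(fis_table, t=3))
```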
In 1997, Shor and Laflamme defined the weight enumerators for quantum error-correcting codes and derived a MacWilliams identity. We extend their work by introducing our double weight enumerators and complete weight enumerators. The MacWilliams identities for these enumerators can be obtained similarly. With the help of MacWilliams identities, we obtain various bounds for asymmetric quantum codes.
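For background, the classical binary MacWilliams identity that such enumerator identities generalize reads as follows (this is the classical analogue, not the quantum version used in that work): for a binary $[n,k]$ code $C$ with dual $C^\perp$ and weight enumerator $W_C(x,y)=\sum_{c\in C} x^{n-\mathrm{wt}(c)}\,y^{\mathrm{wt}(c)}$,
$$W_{C^\perp}(x,y)=\frac{1}{|C|}\,W_C(x+y,\;x-y).$$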
Kai Niu, Jincheng Dai, Kai Chen (2016)
Polar codes are the first class of constructive channel codes achieving the symmetric capacity of binary-input discrete memoryless channels, but the corresponding code length is limited to powers of two. In this paper, we establish a systematic framework to design rate-compatible punctured polar (RCPP) codes with arbitrary code length. A new theoretical tool, called polar spectra, is proposed to count the number of paths on the code tree with the same number of zeros or ones, respectively. Furthermore, a spectrum distance SD0 (SD1) and a joint spectrum distance (JSD) are presented as performance criteria to optimize the puncturing tables. For the capacity-zero puncturing mode (punctured bits are unknown to the decoder), we propose a quasi-uniform puncturing algorithm, analyze the number of equivalent puncturings, and prove that this scheme maximizes SD1 and JSD. Similarly, for the capacity-one mode (punctured bits are known to the decoder), we devise a reversal quasi-uniform puncturing scheme and prove that it maximizes SD0 and JSD. Both schemes have a universal puncturing table and require no exhaustive search. These optimal RCPP codes outperform turbo codes in LTE wireless communication systems.
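Below is a hedged sketch of one common bit-reversal-based realization of quasi-uniform puncturing for the capacity-zero mode; the exact table construction in the paper may differ in details, and the parameters are illustrative.

```python
# Quasi-uniform puncturing table via bit reversal (illustrative sketch).
def bit_reverse(i: int, n_bits: int) -> int:
    """Reverse the n_bits-bit binary representation of i."""
    return int(bin(i)[2:].zfill(n_bits)[::-1], 2)

def quasi_uniform_puncture_table(N: int, M: int):
    """Mark the N-M positions with the smallest bit-reversed index as punctured."""
    n_bits = N.bit_length() - 1          # N is assumed to be a power of two
    order = sorted(range(N), key=lambda i: bit_reverse(i, n_bits))
    punctured = set(order[: N - M])
    return [0 if i in punctured else 1 for i in range(N)]   # 0 = punctured bit

print(quasi_uniform_puncture_table(N=8, M=6))   # e.g. [0, 1, 1, 1, 0, 1, 1, 1]
```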
In this paper, we show some applications of algebraic curves to the construction of kernels of polar codes over a discrete memoryless channel which is symmetric w.r.t. the field operations. We also study the minimum distance of the proposed polar codes, their duals, and the exponents of the matrices used to define them. All the restrictions that we impose on our curves are satisfied by the so-called Castle curves.
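As a small illustration of the exponent mentioned above, the sketch below computes it from the partial distances of a kernel, following the standard definition: for an $l\times l$ kernel $G$ with partial distances $D_1,\dots,D_l$, where $D_i$ is the Hamming distance from row $i$ to the span of rows $i+1,\dots,l$, the exponent is $E(G)=\frac{1}{l}\sum_i \log_l D_i$. The $2\times 2$ Arikan kernel is used purely as an example; the kernels in the paper are different.

```python
# Exponent of a small polarization kernel from its partial distances.
import itertools
import math
import numpy as np

def partial_distances(G):
    G = np.array(G, dtype=np.uint8) % 2
    l = G.shape[0]
    dists = []
    for i in range(l):
        later = G[i + 1:]
        if later.size == 0:
            dists.append(int(G[i].sum()))    # last row: distance to {0}
            continue
        best = G.shape[1] + 1
        # brute force over the span of the later rows (fine for small kernels)
        for coeffs in itertools.product([0, 1], repeat=later.shape[0]):
            v = (G[i] + np.array(coeffs, dtype=np.uint8) @ later) % 2
            best = min(best, int(v.sum()))
        dists.append(best)
    return dists

def exponent(G):
    D = partial_distances(G)
    l = len(D)
    return sum(math.log(d, l) for d in D) / l

F = [[1, 0], [1, 1]]
print(partial_distances(F))   # [1, 2]
print(exponent(F))            # 0.5 for the Arikan kernel
```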