Neural decoders were introduced as a generalization of the classic Belief Propagation (BP) decoding algorithms, where the trellis graph in the BP algorithm is viewed as a neural network and the weights in the trellis graph are optimized by training the neural network. In this work, we propose a novel neural decoder for cyclic codes by exploiting their cyclically invariant property. More precisely, we impose a shift-invariant structure on the weights of our neural decoder so that any cyclic shift of the inputs results in the same cyclic shift of the outputs. Extensive simulations with BCH codes and punctured Reed-Muller (RM) codes show that our new decoder consistently outperforms previous neural decoders when decoding cyclic codes. Finally, we propose a list decoding procedure that can significantly reduce the decoding error probability for BCH codes and punctured RM codes. For certain high-rate codes, the gap between our list decoder and the Maximum Likelihood decoder is less than $0.1$dB. Code available at https://github.com/cyclicallyneuraldecoder/CyclicallyEquivariantNeuralDecoders
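To make the equivariance constraint concrete, the following minimal numerical sketch (not the authors' architecture; the layer, weight length, and toy dimensions are illustrative assumptions) shows that a layer whose weights are shared across cyclic shifts, implemented as a circular convolution, maps a cyclic shift of the input to the same cyclic shift of the output.

```python
# Minimal sketch (illustrative, not the paper's architecture): a layer with
# shift-invariant (shared) weights, realized as a circular convolution, is
# cyclically equivariant: shifting the input shifts the output identically.
import numpy as np

def circ_layer(y, w):
    """Circular convolution of the length-n input vector y with shared weights w."""
    n = len(y)
    return np.array([sum(w[j] * y[(i + j) % n] for j in range(len(w))) for i in range(n)])

rng = np.random.default_rng(0)
y = rng.normal(size=7)   # toy LLR vector for a length-7 cyclic code
w = rng.normal(size=3)   # shared (shift-invariant) weights
s = 2                    # cyclic shift amount

out_of_shifted = circ_layer(np.roll(y, s), w)
shifted_output = np.roll(circ_layer(y, w), s)
assert np.allclose(out_of_shifted, shifted_output)  # equivariance holds
```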
The cyclically equivariant neural decoder was recently proposed in [Chen-Ye, International Conference on Machine Learning, 2021] to decode cyclic codes. In the same paper, a list decoding procedure was also introduced for two widely used classes of cyclic codes -- BCH codes and punctured Reed-Muller (RM) codes. While the list decoding procedure significantly improves the Frame Error Rate (FER) of the cyclically equivariant neural decoder, the Bit Error Rate (BER) of the list decoding procedure is even worse than that of the unique decoding algorithm when the list size is small. In this paper, we propose an improved version of the list decoding algorithm for BCH codes and punctured RM codes. Our new proposal significantly reduces the BER while maintaining the same (in some cases even smaller) FER. More specifically, our new decoder provides up to $2$dB gain over the previous list decoder when measured by BER, and the running time of our new decoder is $15\%$ smaller. Code available at https://github.com/improvedlistdecoder/code
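As background on the final step common to such list decoders, the sketch below shows a generic candidate-selection rule; the function name and the rule itself (closest candidate to the received vector, i.e. the maximum-likelihood choice over an AWGN channel) are assumptions for illustration, not necessarily the selection used in the paper.

```python
# Generic list-selection step (illustrative assumption): among the candidate
# codewords produced by a list decoder, return the one closest in Euclidean
# distance to the received vector, i.e. the most likely candidate under AWGN.
import numpy as np

def select_from_list(received, candidates):
    """received: length-n real vector; candidates: list of length-n BPSK (+-1) codewords."""
    dists = [np.sum((np.asarray(received) - np.asarray(c)) ** 2) for c in candidates]
    return candidates[int(np.argmin(dists))]
```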
Tail-biting convolutional codes extend the classical zero-termination convolutional codes: both encoding schemes force the start and end states to be equal, but under tail-biting any state is a valid termination. This paper proposes a machine-learning approach to improve the state-of-the-art decoding of tail-biting codes, focusing on the widely employed short-length regime as in the LTE standard, which also includes a CRC code. First, we parameterize the circular Viterbi algorithm (CVA), a baseline decoder that exploits the circular nature of the underlying trellis. An ensemble then combines multiple such weighted decoders, each specializing in decoding words from a specific region of the channel-word distribution; a region corresponds to a subset of termination states, and the ensemble covers the entire state space. A non-learnable gating serves two goals: it filters easily decoded words and mitigates the overhead of executing multiple weighted decoders. The CRC criterion is used to select only a subset of experts for decoding. Our method achieves an FER improvement of up to 0.75dB over the CVA in the waterfall region for multiple code lengths, while adding negligible computational complexity compared to the CVA at high SNRs.
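A high-level sketch of the gated ensemble flow described above is given below; the function names, the gating rule, and the expert-selection loop are illustrative assumptions rather than the paper's exact implementation.

```python
# Sketch of a gated ensemble of weighted circular Viterbi decoders
# (names and gating/selection rules are illustrative assumptions).

def decode_with_ensemble(llrs, cva_baseline, experts, crc_check):
    """llrs: channel LLRs; cva_baseline: plain circular Viterbi decoder;
    experts: list of weighted CVA decoders, each tuned to a subset of
    termination states; crc_check: returns True if a word passes the CRC."""
    # Non-learnable gating: if the baseline decoder already yields a
    # CRC-valid word, accept it and skip the more expensive ensemble.
    word = cva_baseline(llrs)
    if crc_check(word):
        return word
    # Otherwise run the experts and return the first CRC-valid output,
    # falling back to the baseline word if none passes the CRC.
    for expert in experts:
        candidate = expert(llrs)
        if crc_check(candidate):
            return candidate
    return word
```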
We study orbit codes in the field extension $\mathbb{F}_{q^n}$. First we show that the automorphism group of a cyclic orbit code is contained in the normalizer of the Singer subgroup if the orbit is generated by a subspace that is not contained in a proper subfield of $\mathbb{F}_{q^n}$. We then generalize to orbits under the normalizer of the Singer subgroup. In that situation some exceptional cases arise and some open cases remain. Finally we characterize linear isometries between such codes.
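For concreteness, the cyclic orbit code generated by an $\mathbb{F}_q$-subspace $U \subseteq \mathbb{F}_{q^n}$ is the orbit of $U$ under the Singer subgroup $\mathbb{F}_{q^n}^{*}$, acting by multiplication:
\[
\mathrm{Orb}(U) \;=\; \{\, \alpha U \;:\; \alpha \in \mathbb{F}_{q^n}^{*} \,\}, \qquad \alpha U = \{\alpha u : u \in U\}.
\]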
The distance distribution of a code is the vector whose $i^\text{th}$ entry is the number of pairs of codewords with distance $i$. We investigate the structure of the distance distribution for cyclic orbit codes, which are subspace codes generated by the action of $\mathbb{F}_{q^n}^*$ on an $\mathbb{F}_q$-subspace $U$ of $\mathbb{F}_{q^n}$. We show that for optimal full-length orbit codes the distance distribution depends only on $q$, $n$, and the dimension of $U$. For full-length orbit codes with lower minimum distance, we provide partial results towards a characterization of the distance distribution, especially in the case that any two codewords intersect in a space of dimension at most 2. Finally, we briefly address the distance distribution of a union of optimal full-length orbit codes.
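In this setting the relevant metric is the subspace distance, and the distance distribution counts unordered pairs of distinct codewords at each distance (the notation $D_i$ below is ours):
\[
d_S(V,W) = \dim V + \dim W - 2\dim(V \cap W), \qquad
D_i(\mathcal{C}) = \#\bigl\{\{V,W\} \subseteq \mathcal{C} \,:\, V \neq W,\ d_S(V,W) = i\bigr\}.
\]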
In this paper, ensembles of quasi-cyclic moderate-density parity-check (MDPC) codes based on protographs are introduced and analyzed in the context of a McEliece-like cryptosystem. The proposed ensembles significantly improve the error correction capability of the regular MDPC code ensembles that are currently considered for post-quantum cryptosystems without increasing the public key size. The proposed ensembles are analyzed in the asymptotic setting via density evolution, both under the sum-product algorithm and a low-complexity (error-and-erasure) message passing algorithm. The asymptotic analysis is complemented at finite block lengths by Monte Carlo simulations. The enhanced error correction capability remarkably improves the robustness of the scheme against (known) decoding attacks.
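As a generic illustration of what a density-evolution analysis computes, the sketch below implements the classical Gallager-A recursion for a regular $(d_v, d_c)$ ensemble over a binary symmetric channel; it is a stand-in example only, not the paper's protograph-based or error-and-erasure analysis.

```python
# Gallager-A density evolution for a regular (dv, dc) ensemble over a BSC
# with crossover probability p0 (generic stand-in, not the paper's analysis).

def gallager_a_de(p0, dv, dc, max_iter=200, tol=1e-12):
    """Return the sequence of variable-to-check message error probabilities."""
    p = p0
    history = [p]
    for _ in range(max_iter):
        # Probability that a check-to-variable message is in error.
        q = (1.0 - (1.0 - 2.0 * p) ** (dc - 1)) / 2.0
        # Variable-node update: flip the channel value only if all dv-1
        # incoming check messages agree on the opposite value.
        p_next = (1.0 - p0) * q ** (dv - 1) + p0 * (1.0 - (1.0 - q) ** (dv - 1))
        history.append(p_next)
        if abs(p_next - p) < tol:
            break
        p = p_next
    return history

# Example: a (3,6)-regular ensemble; for p0 below the ensemble threshold the
# message error probability is driven toward zero.
print(gallager_a_de(0.03, dv=3, dc=6)[-1])
```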