In this paper, ensembles of quasi-cyclic moderate-density parity-check (MDPC) codes based on protographs are introduced and analyzed in the context of a McEliece-like cryptosystem. The proposed ensembles significantly improve the error-correction capability of the regular MDPC code ensembles currently considered for post-quantum cryptosystems without increasing the public key size. The proposed ensembles are analyzed in the asymptotic setting via density evolution, both under the sum-product algorithm and under a low-complexity (error-and-erasure) message-passing algorithm. The asymptotic analysis is complemented at finite block lengths by Monte Carlo simulations. The enhanced error-correction capability markedly improves the robustness of the scheme against known decoding attacks.
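As a concrete illustration of the quasi-cyclic construction underlying such ensembles, the sketch below lifts a toy protograph base matrix into a parity-check matrix built from circulant blocks. The base matrix, lift size, and random shift selection are illustrative assumptions, not parameters from the paper.

```python
# Illustrative sketch only: lifting a toy protograph base matrix into a
# quasi-cyclic parity-check matrix with circulant blocks. The base matrix,
# lift size p, and random shift selection are assumptions for demonstration.
import numpy as np

def circulant(first_row):
    """Binary circulant matrix whose i-th row is first_row cyclically shifted by i."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))], dtype=np.uint8)

def lift_protograph(base, p, rng):
    """Replace each base-matrix entry b by a p x p circulant block of row weight b."""
    m, n = base.shape
    H = np.zeros((m * p, n * p), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            first_row = np.zeros(p, dtype=np.uint8)
            first_row[rng.choice(p, size=base[i, j], replace=False)] = 1  # b distinct shifts
            H[i * p:(i + 1) * p, j * p:(j + 1) * p] = circulant(first_row)
    return H

rng = np.random.default_rng(0)
base = np.array([[2, 3]])          # toy base matrix (one check type, two variable types)
H = lift_protograph(base, p=16, rng=rng)
print(H.shape, H.sum(axis=1))      # (16, 32), constant row weight 2 + 3 = 5
```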
Two variations of the McEliece cryptosystem are presented. The first one is based on a relaxation of the column permutation in the classical McEliece scrambling process. This is done in such a way that the Hamming weight of the error, added in the encryption process, can be controlled so that efficient decryption remains possible. The second variation is based on the use of spatially coupled moderate-density parity-check codes as secret codes. These codes are known for their excellent error-correction performance and allow for a relatively low key size in the cryptosystem. For both variants, the security with respect to known attacks is discussed.
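For orientation, the snippet below sketches the classical McEliece encryption step whose column permutation the first variant relaxes: the secret generator matrix is scrambled as S·G·P and a weight-t error is added. The matrices here are generic GF(2) placeholders, not the MDPC or spatially coupled codes discussed above, and no decryption step is shown.

```python
# Minimal sketch of classical McEliece-style encryption over GF(2).
# S, G, P are generic 0/1 placeholder matrices, not the secret codes
# proposed in the paper; only the scrambling and error-addition step is shown.
import numpy as np

def encrypt(msg, S, G, P, t, rng):
    """Ciphertext c = m * (S G P) + e, with a random error e of Hamming weight t."""
    G_pub = (S @ G @ P) % 2                               # scrambled public generator
    e = np.zeros(G.shape[1], dtype=np.uint8)
    e[rng.choice(G.shape[1], size=t, replace=False)] = 1  # exactly t error positions
    return (msg @ G_pub + e) % 2
```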
Recently, it has been shown how McEliece public-key cryptosystems based on moderate-density parity-check (MDPC) codes allow for very compact keys compared to variants based on other code families. In this paper, classical (iterative) decoding schemes for MDPC codes are considered. The algorithms are analyzed with respect to their error-correction capability as well as their resilience against a recently proposed reaction-based key-recovery attack on a variant of the MDPC-McEliece cryptosystem by Guo, Johansson and Stankovski (GJS). New message-passing decoding algorithms are presented and analyzed. Two of the proposed decoding algorithms have an improved error-correction performance compared to existing hard-decision decoding schemes and are resilient against the GJS reaction-based attack for an appropriate choice of the algorithms' parameters. Finally, a modified belief propagation decoding algorithm that is resilient against the GJS reaction-based attack is presented.
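To make the baseline concrete, here is a hedged sketch of a plain hard-decision bit-flipping decoder of the kind the proposed algorithms are compared against; the majority-style flipping threshold used here is a common choice and an assumption, not the rule from the paper.

```python
# Hedged sketch of a plain hard-decision bit-flipping decoder (the kind of
# baseline the paper improves upon, not one of its new algorithms).
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Iteratively flip the bits involved in the most unsatisfied checks.

    H : 0/1 parity-check matrix, y : 0/1 received hard-decision vector.
    Returns the decoded word and a flag indicating whether decoding succeeded.
    """
    y = y.copy()
    for _ in range(max_iter):
        syndrome = (H @ y) % 2
        if not syndrome.any():          # all parity checks satisfied
            break
        upc = H.T @ syndrome            # unsatisfied-check count per bit
        y[upc >= upc.max()] ^= 1        # flip the bits with the highest count
    return y, not ((H @ y) % 2).any()
```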
Neural decoders were introduced as a generalization of the classic belief propagation (BP) decoding algorithm, where the trellis graph in the BP algorithm is viewed as a neural network and the weights in the trellis graph are optimized by training the neural network. In this work, we propose a novel neural decoder for cyclic codes by exploiting their cyclically invariant property. More precisely, we impose a shift-invariant structure on the weights of our neural decoder so that any cyclic shift of the inputs results in the same cyclic shift of the outputs. Extensive simulations with BCH codes and punctured Reed-Muller (RM) codes show that our new decoder consistently outperforms previous neural decoders when decoding cyclic codes. Finally, we propose a list decoding procedure that can significantly reduce the decoding error probability for BCH codes and punctured RM codes. For certain high-rate codes, the gap between our list decoder and the maximum-likelihood decoder is less than $0.1$ dB. Code available at https://github.com/cyclicallyneuraldecoder/CyclicallyEquivariantNeuralDecoders
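The weight-sharing idea can be illustrated with a toy linear layer: if the weight matrix is circulant (every row a cyclic shift of a single kernel), the layer is automatically shift-equivariant. This is only a schematic of the constraint, not the paper's decoder architecture; in the decoder itself the analogous constraint is imposed on the trainable weights, which is what yields the stated input/output shift property.

```python
# Toy illustration (not the paper's architecture) of the shift-equivariance
# constraint: a linear layer with a circulant weight matrix maps a cyclic
# shift of the input to the same cyclic shift of the output.
import numpy as np

def circulant_layer(kernel, x):
    """Linear layer y = W x where every row of W is a cyclic shift of kernel."""
    n = len(kernel)
    W = np.array([np.roll(kernel, i) for i in range(n)])
    return W @ x

rng = np.random.default_rng(1)
kernel, x = rng.normal(size=8), rng.normal(size=8)
shift = 3
lhs = circulant_layer(kernel, np.roll(x, shift))   # shift input, then apply layer
rhs = np.roll(circulant_layer(kernel, x), shift)   # apply layer, then shift output
print(np.allclose(lhs, rhs))                       # True: equivariance holds
```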
We study orbit codes in the field extension $\mathbb{F}_{q^n}$. First we show that the automorphism group of a cyclic orbit code is contained in the normalizer of the Singer subgroup if the orbit is generated by a subspace that is not contained in a proper subfield of $\mathbb{F}_{q^n}$. We then generalize to orbits under the normalizer of the Singer subgroup. In that situation some exceptional cases arise and some open cases remain. Finally we characterize linear isometries between such codes.
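For readers unfamiliar with the objects involved, the display below recalls the standard definition of a cyclic orbit code from the general literature (a notational assumption, not an excerpt from the paper): the code is the orbit of a subspace under the Singer subgroup.

```latex
% Standard definition recalled from the general literature (notation is an
% assumption, not taken from the paper). Let \alpha be a primitive element
% of \mathbb{F}_{q^n}; multiplication by \alpha generates the Singer subgroup
% of order q^n - 1.
\[
  \mathrm{Orb}(U) \;=\; \bigl\{\, \alpha^{i} U \;:\; 0 \le i < q^{n}-1 \,\bigr\},
  \qquad U \ \text{an $\mathbb{F}_q$-subspace of } \mathbb{F}_{q^n}.
\]
```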
The recent development of deep learning methods provides a new approach to optimizing the belief propagation (BP) decoding of linear codes. However, a limitation of existing works is that the scale of the neural networks grows rapidly with the code length, so they can only support short to moderate code lengths. From a practical point of view, we propose a high-performance neural min-sum (MS) decoding method that makes full use of the lifting structure of protograph low-density parity-check (LDPC) codes. By this means, the size of the parameter array of each layer in the neural decoder equals only the number of edge types, for arbitrary code lengths. In particular, for protograph LDPC codes, the proposed neural MS decoder is constructed in such a way that identical parameters are shared by the bundle of edges derived from the same edge type. To reduce complexity and overcome the vanishing-gradient problem in training the proposed neural MS decoder, an iteration-by-iteration (i.e., layer-by-layer in neural-network terms) greedy training method is proposed. With this, the proposed neural MS decoder tends to converge faster, which aligns with the early-termination mechanism widely used in practice. To further enhance the generalization ability of the proposed neural MS decoder, a code-length/rate compatible training method is proposed, which randomly selects samples from a set of codes lifted from the same base code. As a theoretical performance evaluation tool, a trajectory-based extrinsic information transfer (T-EXIT) chart is developed for various decoders. Both T-EXIT and simulation results show that the optimized MS decoding provides faster convergence and up to 1 dB of gain compared with plain MS decoding and its variants, at only slightly increased complexity. In addition, it can even outperform the sum-product algorithm for some short codes.
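The edge-type weight sharing can be sketched as follows: in a normalized min-sum check-node update, one scale factor per protograph edge type is reused by every lifted edge of that type, so the parameter count is independent of the lifting size. The parameterization below is an illustrative assumption, not the exact decoder from the paper.

```python
# Toy sketch of edge-type weight sharing (an assumed parameterization, not
# the paper's exact decoder): a normalized min-sum check-node update with one
# trainable scale factor per protograph edge type.
import numpy as np

def check_update(incoming, edge_types, alpha):
    """Normalized min-sum check-node update with shared scale factors.

    incoming   : LLR messages entering one check node (1-D array)
    edge_types : protograph edge-type index of each incoming edge
    alpha      : one scale factor per edge type, shared across the lifting
    """
    out = np.empty_like(incoming)
    for k in range(len(incoming)):
        others = np.delete(incoming, k)             # extrinsic rule: exclude edge k
        mag = np.abs(others).min()
        sign = np.prod(np.sign(others))
        out[k] = alpha[edge_types[k]] * sign * mag  # shared per-edge-type weight
    return out

# e.g. three lifted edges of two edge types sharing two scale parameters
msgs = np.array([1.2, -0.4, 2.5])
print(check_update(msgs, edge_types=[0, 1, 0], alpha=np.array([0.8, 0.9])))
```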