The cyclically equivariant neural decoder was recently proposed in [Chen-Ye, International Conference on Machine Learning, 2021] to decode cyclic codes. In the same paper, a list decoding procedure was also introduced for two widely used classes of cyclic codes -- BCH codes and punctured Reed-Muller (RM) codes. While the list decoding procedure significantly improves the Frame Error Rate (FER) of the cyclically equivariant neural decoder, its Bit Error Rate (BER) is even worse than that of the unique decoding algorithm when the list size is small. In this paper, we propose an improved version of the list decoding algorithm for BCH codes and punctured RM codes. Our new proposal significantly reduces the BER while maintaining the same (and in some cases even smaller) FER. More specifically, our new decoder provides up to $2$ dB of gain over the previous list decoder when measured by BER, and its running time is $15\%$ smaller. Code available at https://github.com/improvedlistdecoder/code
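The abstract does not describe how the improved decoder chooses among the list candidates, so the following is only a minimal sketch of why the selection rule affects BER and FER differently. The functions select_frame and select_bitwise, the softmax weighting, and the LLR sign convention (positive favours bit 0) are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def select_frame(candidates, llr):
    """Frame-level selection: pick the single list candidate (bits in {0,1})
    whose BPSK image (+1 for bit 0, -1 for bit 1) correlates best with the
    received LLRs; this rule targets FER."""
    C = np.asarray(candidates)
    scores = (1 - 2 * C) @ np.asarray(llr)
    return C[int(np.argmax(scores))]

def select_bitwise(candidates, llr):
    """Bit-level selection (illustrative): weight every candidate by a softmax
    of its correlation score and take a per-bit soft vote, which can lower BER
    even when the top-ranked candidate is wrong."""
    C = np.asarray(candidates, dtype=float)
    scores = (1 - 2 * C) @ np.asarray(llr)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return (w @ C > 0.5).astype(int)
```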
Neural decoders were introduced as a generalization of the classic Belief Propagation (BP) decoding algorithms, where the trellis graph in the BP algorithm is viewed as a neural network and the weights in the trellis graph are optimized by training the neural network. In this work, we propose a novel neural decoder for cyclic codes by exploiting their cyclically invariant property. More precisely, we impose a shift-invariant structure on the weights of our neural decoder so that any cyclic shift of the inputs results in the same cyclic shift of the outputs. Extensive simulations with BCH codes and punctured Reed-Muller (RM) codes show that our new decoder consistently outperforms previous neural decoders when decoding cyclic codes. Finally, we propose a list decoding procedure that can significantly reduce the decoding error probability for BCH codes and punctured RM codes. For certain high-rate codes, the gap between our list decoder and the Maximum Likelihood decoder is less than $0.1$ dB. Code available at https://github.com/cyclicallyneuraldecoder/CyclicallyEquivariantNeuralDecoders
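As a toy illustration of the shift-equivariance constraint described above (not the decoder's actual trellis weight sharing), the sketch below applies a single shared weight vector circularly, so that cyclically shifting the input cyclically shifts the output by the same amount. All names here are hypothetical.

```python
import numpy as np

def cyclic_equivariant_layer(x, w):
    """Toy shift-equivariant layer: output[i] = sum_j x[(i + j) mod n] * w[j].
    Because the same weight vector w is reused at every position, rolling the
    input rolls the output identically."""
    n = len(x)
    return np.array([np.dot(np.roll(x, -i)[:len(w)], w) for i in range(n)])

# Equivariance check: shifting the input by s shifts the output by s.
rng = np.random.default_rng(0)
x = rng.standard_normal(7)
w = np.array([0.5, -0.2, 0.1])
s = 3
assert np.allclose(cyclic_equivariant_layer(np.roll(x, s), w),
                   np.roll(cyclic_equivariant_layer(x, w), s))
```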
Recently, it was shown that if multiplicative weights are assigned to the edges of a Tanner graph used in belief propagation decoding, it is possible to use deep learning techniques to find values for the weights which improve the error-correction performance of the decoder. Unfortunately, this approach requires many multiplications, which are generally expensive operations. In this paper, we suggest a more hardware-friendly approach in which offset min-sum decoding is augmented with learnable offset parameters. Our method uses no multiplications and has a parameter count less than half that of the multiplicative algorithm. This both speeds up training and provides a feasible path to hardware architectures. After describing our method, we compare the performance of the two neural decoding algorithms and show that our method achieves error-correction performance within 0.1 dB of the multiplicative approach and as much as 1 dB better than traditional belief propagation for the codes under consideration.
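For concreteness, the sketch below shows the standard offset min-sum check-node update with a single additive offset; in the learnable variant described above the offsets would typically be trained (e.g., per edge or per iteration), but that parameterization and the function name are assumptions rather than the paper's specification.

```python
import numpy as np

def offset_min_sum_check(incoming_llrs, beta):
    """Offset min-sum check-node update: for each edge, the outgoing message is
    the product of the signs of the other incoming LLRs times
    max(min |other LLRs| - beta, 0). Only additions and comparisons are used,
    no multiplications by trained weights."""
    L = np.asarray(incoming_llrs, dtype=float)
    out = np.empty_like(L)
    for k in range(len(L)):
        others = np.delete(L, k)
        mag = max(np.min(np.abs(others)) - beta, 0.0)
        out[k] = np.prod(np.sign(others)) * mag
    return out

# Example: three incoming messages and an offset of 0.3.
print(offset_min_sum_check([1.2, -0.4, 2.0], beta=0.3))
```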
In this work, we propose extreme compression techniques such as binarization and ternarization for neural decoders such as TurboAE. These methods reduce memory and computation by a factor of 64 and perform better than 1-bit or 2-bit quantized neural decoders. However, because of the limited representation capability of binary and ternary networks, the performance is not as good as that of the real-valued decoder. To close this gap, we further propose to ensemble 4 such weak decoders for deployment at the edge. These ensemble decoders give 16 and 64 times savings in memory and computation, respectively, and achieve performance similar to the real-valued TurboAE.
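The abstract does not give the exact quantization or combining rules, so the following is a rough sketch under common assumptions: threshold-based ternarization of weights and an ensemble that averages the soft outputs of several weak decoders before the hard decision. The threshold value, the LLR sign convention, and the function names are hypothetical.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Map real-valued weights to {-1, 0, +1}: zero out small weights and keep
    only the sign of the rest (a common ternarization heuristic)."""
    t = np.zeros_like(w)
    t[w > threshold] = 1.0
    t[w < -threshold] = -1.0
    return t

def ensemble_decode(decoders, llr):
    """Average the soft outputs of several weak (e.g., ternary) decoders and
    take a single hard decision (negative combined LLR -> bit 1)."""
    soft = np.mean([d(llr) for d in decoders], axis=0)
    return (soft < 0).astype(int)

# e.g., an ensemble of four weak decoders combined into one decision:
# decisions = ensemble_decode([dec1, dec2, dec3, dec4], received_llrs)
```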
We consider families of codes obtained by lifting a base code $\mathcal{C}$ through operations such as $k$-XOR applied to local views of codewords of $\mathcal{C}$, according to a suitable $k$-uniform hypergraph. The $k$-XOR operation yields the direct sum encoding used in works of [Ta-Shma, STOC 2017] and [Dinur and Kaufman, FOCS 2017]. We give a general framework for list decoding such lifted codes, as long as the base code admits a unique decoding algorithm, and the hypergraph used for lifting satisfies certain expansion properties. We show that these properties are satisfied by the collection of length-$k$ walks on an expander graph, and by hypergraphs corresponding to high-dimensional expanders. Instantiating our framework, we obtain list decoding algorithms for direct sum liftings on the above hypergraph families. Using known connections between direct sum and direct product, we also recover the recent results of Dinur et al. [SODA 2019] on list decoding for direct product liftings. Our framework relies on relaxations given by the Sum-of-Squares (SOS) SDP hierarchy for solving various constraint satisfaction problems (CSPs). We view the problem of recovering the closest codeword to a given word as finding the optimal solution of a CSP. Constraints in the instance correspond to edges of the lifting hypergraph, and the solutions are restricted to lie in the base code $\mathcal{C}$. We show that recent algorithms for (approximately) solving CSPs on certain expanding hypergraphs also yield a decoding algorithm for such lifted codes. We extend the framework to list decoding, by requiring the SOS solution to minimize a convex proxy for negative entropy. We show that this ensures a covering property for the SOS solution, and the condition-and-round approach used in several SOS algorithms can then be used to recover the required list of codewords.
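To make the lifting operation concrete, the sketch below computes the $k$-XOR direct sum encoding of a base codeword with respect to a $k$-uniform hypergraph; it illustrates only the encoding described above, not the SOS-based list decoding algorithm, and the toy hypergraph is made up for the example.

```python
import numpy as np

def direct_sum_lift(codeword, hyperedges):
    """k-XOR (direct sum) lifting: each symbol of the lifted word is the XOR
    of the base-codeword bits indexed by one hyperedge."""
    c = np.asarray(codeword, dtype=int)
    return np.array([np.bitwise_xor.reduce(c[list(e)]) for e in hyperedges])

# Toy example: a length-6 base word and three 3-uniform hyperedges.
c = [1, 0, 1, 1, 0, 0]
H = [(0, 1, 2), (1, 3, 5), (2, 4, 5)]
print(direct_sum_lift(c, H))  # -> [0 1 1]
```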
This paper identifies convolutional codes (CCs) used in conjunction with a CC-specific cyclic redundancy check (CRC) code as a promising paradigm for short blocklength codes. The resulting CRC-CC concatenated code naturally permits the use of serial list Viterbi decoding (SLVD) to achieve maximum-likelihood decoding. The CC of interest is of rate $1/\omega$ and is either zero-terminated (ZT) or tail-biting (TB). For CRC-CC concatenated code designs, we show how to find the optimal CRC polynomial for a given ZTCC or TBCC. Our complexity analysis reveals that SLVD decoding complexity is a function of the terminating list rank, which converges to one at high SNR. This behavior allows the performance gains of SLVD to be achieved with a small increase in average complexity at the SNR operating point of interest. With a sufficiently large CC constraint length, the performance of the CRC-CC concatenated code under SLVD approaches the random-coding union (RCU) bound as the CRC size is increased, while average decoding complexity does not increase significantly. TB encoding further reduces the backoff from the RCU bound by avoiding the termination overhead. As a result, several CRC-TBCC codes outperform the RCU bound at moderate SNR values while permitting decoding with relatively low complexity.
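As an illustration of the serial list decoding idea (walking the list in likelihood order until the CRC checks), here is a small sketch. The way candidate paths are produced by the Viterbi trellis is omitted, and the function names and bit-list CRC representation are assumptions rather than the paper's implementation.

```python
def crc_remainder(bits, poly):
    """Binary long division: divide the message (with its CRC bits appended)
    by the generator polynomial `poly` (coefficient list, MSB first).
    An all-zero remainder means the CRC check passes."""
    reg = list(bits)
    for i in range(len(reg) - len(poly) + 1):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[-(len(poly) - 1):]

def serial_list_decode(candidates_in_likelihood_order, poly):
    """Serial list decoding: take candidates from most to least likely and
    stop at the first one whose CRC checks; return it together with the
    terminating list rank (which converges to 1 at high SNR)."""
    rank = 0
    for rank, cand in enumerate(candidates_in_likelihood_order, start=1):
        if not any(crc_remainder(cand, poly)):
            return cand, rank
    return None, rank
```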