
A PDD Decoder for Binary Linear Codes With Neural Check Polytope Projection

Published by: Yi Wei
Publication date: 2020
Paper language: English





Linear programming (LP) is an important decoding technique for binary linear codes. However, the advantages of LP decoding, such as a low error floor and strong theoretical guarantees, come at the cost of high computational complexity and poor performance in the low signal-to-noise ratio (SNR) region. In this letter, we adopt the penalty dual decomposition (PDD) framework and propose a PDD algorithm to address the fundamental-polytope-based maximum likelihood (ML) decoding problem. Furthermore, we propose to integrate machine learning techniques into the most time-consuming part of the PDD decoding algorithm, i.e., the check polytope projection (CPP). Inspired by the fact that a multi-layer perceptron (MLP) can theoretically approximate any nonlinear mapping function, we present a specially designed neural CPP (NCPP) algorithm to decrease the decoding latency. Simulation results demonstrate the effectiveness of the proposed algorithms.
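To make the NCPP idea concrete, here is a minimal sketch, assuming a small fully connected network with illustrative layer sizes and untrained placeholder weights; the paper's actual architecture, training data, and loss are not reproduced. The network maps the length-d input of a degree-d check node to an approximate projection onto the check (parity) polytope.

```python
# Hypothetical NCPP sketch (not the authors' implementation): layer sizes,
# activations, and weights are illustrative assumptions.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class NeuralCheckPolytopeProjection:
    def __init__(self, d, hidden=64, seed=None):
        rng = np.random.default_rng(seed)
        # In practice these weights would be learned from (input, exact projection) pairs.
        self.W1 = rng.normal(0.0, 0.1, (hidden, d))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (d, hidden))
        self.b2 = np.zeros(d)

    def project(self, v):
        h = relu(self.W1 @ v + self.b1)
        # A sigmoid keeps the output in the unit hypercube [0, 1]^d, which contains
        # the check polytope; it does not by itself enforce the parity constraints.
        return 1.0 / (1.0 + np.exp(-(self.W2 @ h + self.b2)))

# Example: approximate projection for a degree-6 check node.
ncpp = NeuralCheckPolytopeProjection(d=6, seed=0)
print(ncpp.project(np.array([0.9, 0.8, 0.1, 0.4, 0.6, 0.2])))
```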




Read also

Weihong Xu, 2018
Recently, deep learning methods have shown significant improvements in communication systems. In this paper, we study the equalization problem over a nonlinear channel using neural networks. A joint equalizer and decoder based on neural networks is proposed to realize blind equalization and decoding without knowledge of the channel state information (CSI). Different from previous methods, we use two neural networks instead of one. First, a convolutional neural network (CNN) is used to adaptively recover the transmitted signal from channel impairments and nonlinear distortions. Then a deep neural network decoder (NND) decodes the signal detected by the CNN equalizer. Under various channel conditions, the experimental results demonstrate that the proposed CNN equalizer achieves better performance than other solutions based on machine learning methods. The proposed model reduces the number of parameters by about $2/3$ compared to state-of-the-art counterparts. Besides, our model can be easily applied to long sequences with $\mathcal{O}(n)$ complexity.
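A minimal sketch of the two-stage pipeline described above, with illustrative filter lengths, layer sizes, and untrained placeholder weights (the paper's actual architecture is not reproduced): a 1-D convolutional "equalizer" followed by a small dense "decoder", both as plain forward passes.

```python
# Hypothetical CNN-equalizer + dense-decoder sketch; all sizes and weights are
# illustrative stand-ins, not the trained networks from the paper.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def cnn_equalizer(rx, kernels):
    """Stack of 1-D convolutions intended to undo channel distortion."""
    x = rx
    for k in kernels:
        x = relu(np.convolve(x, k, mode="same"))
    return x

def dense_decoder(equalized, W, b):
    """Single dense layer mapping the equalized block to hard bit estimates."""
    logits = W @ equalized + b
    return (logits > 0).astype(int)

rng = np.random.default_rng(0)
n = 16                                            # received block length (illustrative)
rx = rng.normal(size=n)                           # stand-in for a distorted received signal
kernels = [rng.normal(size=5) for _ in range(2)]  # untrained placeholder filters
W, b = rng.normal(size=(8, n)), np.zeros(8)       # 16 samples -> 8 info-bit estimates
print(dense_decoder(cnn_equalizer(rx, kernels), W, b))
```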
We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
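For reference, a minimal sketch of the building block only: one Gibbs-sampling step of a generic restricted Boltzmann machine (RBM). How syndromes and error configurations are encoded into the units, and how the network is trained, follow the paper and are not reproduced here; the sizes and weights below are placeholders.

```python
# Generic RBM Gibbs step (illustrative; not the paper's trained decoder).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, rng):
    """Sample hidden units given visible units, then resample the visible units."""
    p_h = sigmoid(c + v @ W)             # P(h_j = 1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(b + h @ W.T)           # P(v_i = 1 | h)
    return (rng.random(p_v.shape) < p_v).astype(float)

rng = np.random.default_rng(0)
n_vis, n_hid = 8, 16                     # illustrative sizes
W = rng.normal(0.0, 0.1, (n_vis, n_hid)) # untrained placeholder weights
b, c = np.zeros(n_vis), np.zeros(n_hid)
v = rng.integers(0, 2, n_vis).astype(float)
print(gibbs_step(v, W, b, c, rng))
```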
Inspired by the recent advances in deep learning (DL), this work presents a deep neural network-aided decoding algorithm for binary linear codes. Based on the concept of deep unfolding, we design a decoding network by unfolding the alternating direction method of multipliers (ADMM)-penalized decoder. In addition, we propose two improv…
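A minimal sketch of the deep-unfolding idea only: a fixed number of iterations of an iterative update are unrolled into layers, each with its own trainable parameter. The actual ADMM-penalized update rules are in the paper and are not reproduced here; `toy_update` is a placeholder.

```python
# Deep-unfolding skeleton (illustrative); the per-layer update is a toy stand-in.
import numpy as np

def toy_update(x, y, step):
    """Placeholder per-layer update (here: a damped move toward the observation y)."""
    return x + step * (y - x)

def unfolded_decoder(y, per_layer_steps):
    """Apply T unrolled layers; in deep unfolding, per_layer_steps would be learned."""
    x = np.zeros_like(y)
    for step in per_layer_steps:
        x = toy_update(x, y, step)
    return x

y = np.array([0.8, -1.2, 0.3, -0.5])   # illustrative channel observations
steps = [0.9, 0.7, 0.5]                 # stand-ins for learned layer parameters
print(unfolded_decoder(y, steps))
```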
Quantum LDPC codes are a promising direction for low-overhead quantum computing. In this paper, we propose a generalization of the Union-Find decoder as a decoder for quantum LDPC codes. We prove that this decoder corrects all errors of weight up to $A n^{\alpha}$ for some $A, \alpha > 0$ for different classes of quantum LDPC codes, such as toric codes and hyperbolic codes in any dimension $D \geq 3$, as well as quantum expander codes. To prove this result, we introduce a notion of covering radius, which measures the spread of an error from its syndrome. We believe this notion could find applications beyond the decoding problem. We also perform numerical simulations, which show that our Union-Find decoder outperforms the belief propagation decoder in the low error rate regime for a quantum LDPC code of length 3600.
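A minimal sketch of the underlying data structure only: union-find with path compression and union by rank, as used to grow and merge clusters around syndrome bits. The cluster-growth schedule and the peeling/correction step of the full decoder are not reproduced here.

```python
# Generic union-find structure (illustrative; not the full Union-Find decoder).
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression (halving): each visited node skips to its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

# Example: merge clusters touching sites 0-1 and 2-3, then join them.
uf = UnionFind(6)
uf.union(0, 1); uf.union(2, 3); uf.union(1, 2)
print(uf.find(0) == uf.find(3))   # True: one merged cluster
```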
Reed-Muller (RM) codes are one of the oldest families of codes. Recently, a recursive projection aggregation (RPA) decoder has been proposed, which achieves performance close to that of the maximum likelihood decoder for short-length RM codes. One of its main drawbacks, however, is the large amount of computation required. In this paper, we devise a new algorithm that lowers the computational budget while keeping performance close to that of the RPA decoder. The proposed approach consists of multiple sparse RPAs that are generated by performing only a selection of projections in each sparsified decoder. In the end, a cyclic redundancy check (CRC) is used to decide between the output codewords. Simulation results show that our proposed approach reduces the RPA decoder's computations by up to $80\%$ with negligible performance loss.
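A minimal sketch of the final selection step only: a CRC check used to pick one codeword from the candidates produced by the sparsified RPA decoders. The CRC polynomial and block lengths here are illustrative toy choices, not the ones used in the paper.

```python
# Toy CRC-based candidate selection (illustrative polynomial x^3 + x + 1).
def crc_remainder_is_zero(bits, poly):
    """Binary polynomial division; returns True if the remainder is zero."""
    bits = list(bits)
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return not any(bits)

def select_by_crc(candidates, poly=(1, 0, 1, 1)):
    """Return the first candidate whose appended CRC checks out, if any."""
    for cand in candidates:
        if crc_remainder_is_zero(cand, poly):
            return cand
    return None

# Example: only the second candidate carries a valid CRC for the toy polynomial.
cands = [(1, 0, 1, 1, 0, 0, 1), (1, 0, 1, 0, 0, 1, 1)]
print(select_by_crc(cands))
```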
