
On decoding hyperbolic codes

Posted by: Hiram H. López
Publication date: 2021
Research field: Informatics Engineering
Language: English





Few decoding algorithms for hyperbolic codes are known in the literature; this article aims to fill that gap. The first part of this work compares hyperbolic codes and Reed-Muller codes. In particular, we determine when a Reed-Muller code is a hyperbolic code. As a byproduct, we state when a hyperbolic code has a greater dimension than a Reed-Muller code with the same minimum distance. We use these ideas to describe how to decode a hyperbolic code using the largest Reed-Muller code contained in it, or alternatively the smallest Reed-Muller code that contains it. A combination of these two algorithms is proposed for the case when hyperbolic codes are defined by polynomials in two variables. We then compare hyperbolic codes with Cube codes (tensor products of Reed-Solomon codes) and propose decoding algorithms for hyperbolic codes based on their closest Cube codes. Finally, we adapt to hyperbolic codes Geil and Matsumoto's generalization of Sudan's list-decoding algorithm.
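To make the comparison concrete, both families can be viewed as evaluation codes on F_q^m that differ only in which monomials are evaluated: a q-ary Reed-Muller code of order r keeps the monomials x1^a1...xm^am with a1+...+am <= r, while a hyperbolic code of designed distance d (in the usual footprint-bound description) keeps those with (q-a1)(q-a2)...(q-am) >= d. The short Python sketch below is our own toy illustration of this relationship, not code from the article; the definitions used, the field size q = 8, the number of variables m = 2, and the designed distance d = 16 are all assumptions made here for the sake of an example.

# Toy sketch (ours, not from the article): compare the monomial sets that
# define a q-ary Reed-Muller code and a hyperbolic code on F_q^m.
# Assumed definitions: the hyperbolic code of designed distance d is spanned
# by the monomials x^a with prod_i (q - a_i) >= d, and the Reed-Muller code
# of order r by those with sum_i a_i <= r; the dimension of either code is
# the number of such monomials.
from itertools import product
from math import prod

q, m = 8, 2          # hypothetical field size and number of variables
d = 16               # hypothetical designed minimum distance

exponents = list(product(range(q), repeat=m))

hyperbolic = {a for a in exponents if prod(q - ai for ai in a) >= d}

def reed_muller(r):
    # Monomial set of the q-ary Reed-Muller code of order r (exponents < q).
    return {a for a in exponents if sum(a) <= r}

# Largest Reed-Muller code contained in the hyperbolic code and smallest
# Reed-Muller code containing it, found by scanning the order r.
orders = range(m * (q - 1) + 1)
r_inner = max(r for r in orders if reed_muller(r) <= hyperbolic)
r_outer = min(r for r in orders if hyperbolic <= reed_muller(r))

print("dim of the hyperbolic code      :", len(hyperbolic))
print("largest RM code inside, order   :", r_inner, "dim", len(reed_muller(r_inner)))
print("smallest RM code outside, order :", r_outer, "dim", len(reed_muller(r_outer)))

In this toy run the hyperbolic monomial set strictly contains that of the largest Reed-Muller code inside it, which is the dimension advantage that the decode-via-a-contained-Reed-Muller-code approach sketched in the abstract trades against.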




Read also

We address the problem of decoding Gabidulin codes beyond their unique error-correction radius. The complexity of this problem is of importance for assessing the security of some rank-metric code-based cryptosystems. We propose an approach that introduces row or column erasures to decrease the rank of the error, so that any proper polynomial-time Gabidulin code error-erasure decoding algorithm can be used. This approach improves on generic rank-metric decoders by an exponential factor.
Cicero Carvalho (2013)
We study affine cartesian codes, which are Reed-Muller-type evaluation codes in which polynomials are evaluated at the cartesian product of n subsets of a finite field F_q. These codes appeared recently in a work by H. Lopez, C. Renteria-Marquez and R. Villareal and, in a generalized form, in a work by O. Geil and C. Thomsen. Using methods from Gröbner basis theory, we determine the second Hamming weight (also called the next-to-minimal weight) for particular cases of affine cartesian codes, as well as some higher Hamming weights of this type of code.
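As a rough illustration of the construction described above (the field, the subsets A_1 and A_2, and the degree bound below are toy choices of ours, not parameters from that paper), a generator matrix for an affine cartesian code can be obtained by evaluating all monomials of total degree at most d at the points of the cartesian product A_1 x A_2:

# Minimal sketch (toy parameters of our choosing): generator matrix of an
# affine cartesian code over the prime field F_13, obtained by evaluating
# the monomials of total degree <= d at the points of A_1 x A_2.
from itertools import product

q = 13                          # a prime, so F_q arithmetic is just mod q
A = [[0, 1, 2, 5], [0, 3, 7]]   # hypothetical subsets A_1, A_2 of F_q
d = 2                           # degree bound for the evaluated polynomials

points = list(product(*A))      # the evaluation set A_1 x A_2
monomials = [e for e in product(range(d + 1), repeat=len(A)) if sum(e) <= d]

def evaluate(exponents, point):
    # Evaluate the monomial x^exponents at a point, modulo q.
    value = 1
    for e, x in zip(exponents, point):
        value = (value * pow(x, e, q)) % q
    return value

# Rows are indexed by monomials, columns by evaluation points.
G = [[evaluate(e, p) for p in points] for e in monomials]

print("code length :", len(points))      # |A_1| * |A_2| = 12
print("dimension  <=", len(monomials))   # rows of G (equal if rows independent)
for row in G:
    print(row)

Here the code length is |A_1|*|A_2| = 12, and because the degree bound is smaller than each |A_i|, the evaluated monomials give linearly independent rows in this toy case, so the printed upper bound on the dimension is attained.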
High-quality data is essential in deep learning for training a robust model. While in other fields data is sparse and costly to collect, in error decoding it is free to query and label, thus allowing potential data exploitation. Utilizing this fact, and inspired by active learning, two novel methods are introduced to improve Weighted Belief Propagation (WBP) decoding. These methods incorporate machine-learning concepts with error-decoding measures. For the BCH(63,36), (63,45) and (127,64) codes with cycle-reduced parity-check matrices, an improvement of up to 0.4 dB in the waterfall region, and of up to 1.5 dB in the error-floor region in FER over the original WBP, is demonstrated by smartly sampling the data, without increasing inference (decoding) complexity. The proposed methods constitute example guidelines for model enhancement by incorporating domain knowledge from the error-correcting field into a deep learning model. These guidelines can be adapted to any other deep-learning-based communication block.
An efficient decoding algorithm for horizontally u-interleaved LRPC codes is proposed and analyzed. Upper bounds on the decoding failure rate and the computational complexity of the algorithm are derived. It is shown that interleaving reduces the decoding failure rate exponentially in the interleaving order u, whereas the computational complexity grows linearly.
Dynamic successive cancellation flip (DSCF) decoding of polar codes is a powerful algorithm that can achieve the error-correction performance of successive cancellation list (SCL) decoding, with a complexity close to that of successive cancellation (SC) decoding at practical signal-to-noise ratio (SNR) regimes. However, DSCF decoding requires costly transcendental computations which adversely affect its implementation complexity. In this paper, we first show that a direct application of common approximation schemes to conventional DSCF decoding results in significant error-correction performance loss. We then introduce a training parameter and propose an approximation scheme which completely removes the need to perform transcendental computations in DSCF decoding, with almost no error-correction performance degradation.
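For context on what "transcendental computations" refers to here: in published descriptions of DSCF decoding, the bit-flipping metric involves a term of the form ln(1 + exp(-x)). The snippet below is purely our own toy comparison of that term against the familiar max-log-style replacement max(0, -x); it is not the training-parameter-based scheme proposed in the paper, and only illustrates that the naive approximation is loosest for small-magnitude (unreliable) inputs, which are typically the flip candidates.

# Toy illustration (ours, not the paper's scheme): compare the transcendental
# term ln(1 + exp(-x)) that appears in DSCF-style flip metrics against the
# common max-log approximation max(0, -x).
import math

def f_exact(x):
    return math.log1p(math.exp(-x))

def f_approx(x):
    return max(0.0, -x)

for x in [-4.0, -1.0, -0.5, 0.0, 0.5, 1.0, 4.0]:
    print(f"x={x:+.1f}  exact={f_exact(x):.4f}  approx={f_approx(x):.4f}")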