
Exact separation of forbidden-set cuts associated with redundant parity checks of binary linear codes

Added by Andreas Tillmann
Publication date: 2020
Language: English





In recent years, several integer programming (IP) approaches have been developed for maximum-likelihood decoding and minimum distance computation for binary linear codes. Two aspects in particular have been demonstrated to improve the performance of IP solvers as well as adaptive linear programming decoders: the dynamic generation of forbidden-set (FS) inequalities, a family of valid cutting planes, and the utilization of so-called redundant parity checks (RPCs). However, to date, it has remained unclear how to solve the exact RPC separation problem, i.e., to determine whether or not there exists any violated FS inequality with respect to any known or unknown parity check. In this note, we prove NP-hardness of this problem. Moreover, we formulate an IP model that combines the search for most violated FS cuts with the generation of RPCs, and report on computational experiments. Empirically, across various instances of the minimum distance problem, it turns out that while the exact separation IP does not appear to offer a computational advantage, it can apparently be avoided altogether by combining heuristics for generating RPC-based cuts.
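As a concrete aside (not part of the paper's contribution), the FS inequalities in question have a well-known closed form: for a parity check with support $N$ and any $S \subseteq N$ of odd cardinality, the inequality $\sum_{i\in S} x_i - \sum_{i\in N\setminus S} x_i \le |S|-1$ is valid. For a single known check, the most violated such cut can be found greedily; the hardness result above concerns searching over all known and unknown RPCs at once. Below is a minimal Python sketch of the standard per-check routine; the function name and data layout are our own.

```python
def separate_fs_cut(support, x):
    """Most violated forbidden-set (FS) cut for a single known parity check.

    For a check with support N, every odd-cardinality S within N gives the
    valid inequality
        sum_{i in S} x_i - sum_{i in N \ S} x_i <= |S| - 1,
    equivalently  sum_{i in S} (1 - x[i]) + sum_{i in N \ S} x[i] >= 1.
    We minimize that left-hand side over odd |S|; a value below 1 certifies
    a violated FS cut for the fractional LP point x.
    """
    N = list(support)
    # Unconstrained minimum: put i into S exactly when 1 - x[i] < x[i].
    S = {i for i in N if x[i] > 0.5}
    if len(S) % 2 == 0:
        # Restore odd parity by flipping the element whose membership
        # change increases the left-hand side least, at cost |1 - 2 x[i]|.
        S ^= {min(N, key=lambda i: abs(1 - 2 * x[i]))}
    lhs = sum(1 - x[i] for i in S) + sum(x[i] for i in set(N) - S)
    return S, lhs

# Hypothetical fractional LP point on a weight-3 check with support {0, 1, 2}:
x = {0: 0.9, 1: 0.8, 2: 0.7}
S, lhs = separate_fs_cut({0, 1, 2}, x)
# S = {0, 1, 2}, lhs = 0.6 < 1: the FS cut x_0 + x_1 + x_2 <= 2 is
# violated, since 0.9 + 0.8 + 0.7 = 2.4 > 2.
```

This routine runs in linear time per check; the NP-hardness statement above applies to the joint problem of deciding whether a violated FS cut exists with respect to any parity check, not to this fixed-check subproblem.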



Related research

Sascha Kurz (2020)
We classify 8-divisible binary linear codes with minimum distance 24 and small length. As an application we consider the codes associated to nodal sextics with 65 ordinary double points.
We prove that, for the binary erasure channel (BEC), the polar-coding paradigm gives rise to codes that not only approach the Shannon limit but do so under the best possible scaling of their block length as a function of the gap to capacity. This result exhibits the first known family of binary codes that attain both optimal scaling and quasi-linear complexity of encoding and decoding. Our proof is based on the construction and analysis of binary polar codes with large kernels. When communicating reliably at rates within $\varepsilon > 0$ of capacity, the code length $n$ often scales as $O(1/\varepsilon^{\mu})$, where the constant $\mu$ is called the scaling exponent. It is known that the optimal scaling exponent is $\mu=2$, and it is achieved by random linear codes. The scaling exponent of conventional polar codes (based on the $2\times 2$ kernel) on the BEC is $\mu=3.63$. This falls far short of the optimal scaling guaranteed by random codes. Our main contribution is a rigorous proof of the following result: for the BEC, there exist $\ell\times\ell$ binary kernels, such that polar codes constructed from these kernels achieve scaling exponent $\mu(\ell)$ that tends to the optimal value of $2$ as $\ell$ grows. We furthermore characterize precisely how large $\ell$ needs to be as a function of the gap between $\mu(\ell)$ and $2$. The resulting binary codes maintain the recursive structure of conventional polar codes, and thereby achieve construction complexity $O(n)$ and encoding/decoding complexity $O(n\log n)$.
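To put the exponents in perspective, here is a back-of-the-envelope comparison (our own illustration; the constants hidden in the $O(\cdot)$ scaling are suppressed):

```python
eps = 0.01  # gap to capacity
for mu, label in [(2.00, "optimal scaling (random linear codes)"),
                  (3.63, "conventional polar codes, 2x2 kernel on the BEC")]:
    # Block length scales like n ~ (1/eps)**mu, up to constants.
    print(f"{label}: n ~ {(1 / eps) ** mu:.1e}")
# ~1.0e4 versus ~1.8e7: at this gap, the 2x2 kernel needs blocks roughly
# (1/eps)**1.63, i.e. about 1800 times, longer; large kernels with
# mu(ell) -> 2 close exactly this factor.
```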
An efficient decoding algorithm for horizontally $u$-interleaved LRPC codes is proposed and analyzed. Upper bounds on the decoding failure rate and the computational complexity of the algorithm are derived. It is shown that interleaving reduces the decoding failure rate exponentially in the interleaving order $u$, whereas the computational complexity grows only linearly.
High-quality data is essential for training robust deep-learning models. While in other fields data is sparse and costly to collect, in error decoding it is free to query and label, which allows the data to be exploited. Utilizing this fact, and inspired by active learning, two novel methods are introduced to improve Weighted Belief Propagation (WBP) decoding. These methods combine machine-learning concepts with error-decoding measures. For the BCH(63,36), (63,45), and (127,64) codes with cycle-reduced parity-check matrices, improvements of up to 0.4 dB in the waterfall region and up to 1.5 dB in the error-floor region of the FER over the original WBP are demonstrated by sampling the data smartly, without increasing inference (decoding) complexity. The proposed methods constitute example guidelines for enhancing a deep-learning model by incorporating domain knowledge from the error-correction field. These guidelines can be adapted to any other deep-learning-based communication block.
We consider the problem of computing a binary linear transformation when all circuit components are unreliable. Two noise models of unreliable components are considered: probabilistic errors and permanent errors. We introduce the ENCODED technique, which ensures that the error probability of computing the linear transformation remains bounded by a small constant, independent of the size of the transformation, even when all logic gates in the computation are noisy. Further, we show that the scheme requires fewer operations (in an order sense) than its uncoded counterpart. By deriving a lower bound, we show that in some cases the scheme is order-optimal. Using these results, we examine the gain in energy efficiency from a voltage-scaling scheme in which gate energy is reduced by lowering the supply voltage. We use a gate energy-reliability model to show that tuning gate energy appropriately at different stages of the computation (dynamic voltage scaling), in conjunction with ENCODED, can lead to order-sense energy savings over the classical uncoded approach. Finally, we also examine the problem of computing a linear transformation when noiseless decoders can be used, providing upper and lower bounds for the problem.