
Deep Extended Feedback Codes

Added by Alberto Perotti
Publication date: 2021
Language: English





This paper presents Deep Extended Feedback (DEF), a new deep-neural-network (DNN) based error-correction encoder architecture for channels with feedback. The DEF encoder transmits an information message followed by a sequence of parity symbols, each generated from the message together with observations of past forward-channel outputs returned to the transmitter over a feedback channel. DEF codes generalize Deepcode [1] in two ways: parity symbols are generated from forward-channel output observations over longer time intervals, improving error-correction capability, and the encoder employs high-order modulation formats, increasing spectral efficiency. Performance evaluations show that DEF codes outperform other DNN-based codes for channels with feedback.
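
The core idea, that each parity symbol depends on the message plus a window of past forward-channel outputs, can be sketched in a few lines. The following is a minimal illustration, not the paper's exact model: the module name `DEFEncoder`, the GRU width, the window length, and the noiseless feedback link are all assumptions made for this example.

```python
# Minimal sketch of a DEF-style parity generator. NOT the paper's exact
# architecture: module name, GRU width, window length, and the noiseless
# feedback assumption are illustrative choices.
import torch
import torch.nn as nn

class DEFEncoder(nn.Module):
    def __init__(self, msg_len: int, num_parity: int, window: int = 3, hidden: int = 64):
        super().__init__()
        self.num_parity = num_parity
        self.window = window
        # Each step sees the full message plus a sliding window of past
        # forward-channel outputs observed through the feedback link.
        self.rnn = nn.GRU(msg_len + window, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)  # one real-valued parity symbol per step

    def forward(self, msg: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
        # msg: (B, msg_len) float bits in {0,1}; noise: (B, num_parity) AWGN.
        fb = msg.new_zeros(msg.size(0), self.window)   # feedback buffer
        h, parities = None, []
        for t in range(self.num_parity):
            x = torch.cat([msg, fb], dim=1).unsqueeze(1)  # (B, 1, msg_len+window)
            y, h = self.rnn(x, h)
            p = self.out(y[:, 0])                         # (B, 1) parity symbol
            parities.append(p)
            chan_out = p + noise[:, t : t + 1]            # forward AWGN channel
            fb = torch.cat([fb[:, 1:], chan_out], dim=1)  # slide the window
        return torch.cat(parities, dim=1)                 # (B, num_parity)
```

A window of one past observation would correspond to the Deepcode-style setting; DEF's stated generalization is precisely letting the parity symbols depend on observations over longer intervals.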



Related research

High-quality data is essential for training a robust deep-learning model. While in other fields data is sparse and costly to collect, in error decoding it is free to query and label, which allows the data itself to be exploited. Building on this fact and inspired by active learning, two novel methods are introduced to improve Weighted Belief Propagation (WBP) decoding. These methods combine machine-learning concepts with error-decoding measures. For the BCH(63,36), (63,45) and (127,64) codes with cycle-reduced parity-check matrices, gains over the original WBP of up to 0.4 dB in FER at the waterfall region and up to 1.5 dB at the error-floor region are demonstrated by sampling the data smartly, without increasing inference (decoding) complexity. The proposed methods constitute example guidelines for enhancing a model by incorporating domain knowledge from the error-correcting field into a deep-learning model; these guidelines can be adapted to any other deep-learning-based communication block.
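
One way to exploit the "free to query and label" property is to generate noisy codewords on the fly and keep only the most informative ones for training. The sketch below illustrates that loop; the selection criterion (decoder confidence via the minimum absolute output LLR) and all names are assumptions for this example, not necessarily the paper's measures.

```python
# Illustrative "free data" sampling loop for training a learned decoder.
# The confidence-based selection rule is an assumption, not the paper's method.
import numpy as np

def sample_training_batch(encode, soft_decode, k, n, snr_db, batch=1024,
                          keep_frac=0.25, rng=None):
    """encode: k info bits -> n-bit codeword; soft_decode: channel LLRs -> bit LLRs."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = np.sqrt(0.5 / ((k / n) * 10 ** (snr_db / 10)))  # BPSK, Eb/N0 in dB
    msgs = rng.integers(0, 2, size=(batch, k))
    x = 1.0 - 2.0 * np.array([encode(m) for m in msgs])     # 0/1 -> +1/-1
    y = x + sigma * rng.normal(size=x.shape)                 # AWGN channel
    llr = 2.0 * y / sigma**2                                 # channel LLRs
    # Keep the samples the current decoder finds hardest: score each by its
    # least-reliable output bit (minimum absolute LLR).
    conf = np.array([np.abs(soft_decode(l)).min() for l in llr])
    hardest = np.argsort(conf)[: int(keep_frac * batch)]
    return llr[hardest], msgs[hardest]
```

Because the transmitted codeword is known, every generated sample comes pre-labeled, so discarding the easy ones costs nothing but compute.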
Zhiwen He, Jiejing Wen (2020)
This paper is concerned with the affine-invariant ternary codes defined by Hermitian functions. We compute the incidence matrices of the 2-designs supported by the minimum-weight codewords of these ternary codes. The linear codes generated by the rows of these incidence matrices are subcodes of the extended codes of the 4th-order generalized Reed-Muller codes, and they also hold 2-designs. Finally, we give the dimensions and a lower bound on the minimum weights of these linear codes.
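
The notion of an incidence matrix supported by minimum-weight codewords can be made concrete with a generic brute-force sketch on a toy code. Here the binary [7,4] Hamming code stands in for the paper's ternary Hermitian-function codes (which are not reconstructed): its minimum-weight codewords support the Fano plane, a 2-(7,3,1) design.

```python
# Build the incidence matrix whose blocks are the supports of the
# minimum-weight codewords of a linear code. Toy code: binary [7,4] Hamming.
import itertools
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

codewords = [(np.array(u) @ G) % 2 for u in itertools.product([0, 1], repeat=4)]
weights = [int(c.sum()) for c in codewords]
d = min(w for w in weights if w > 0)                        # minimum distance
blocks = np.array([c for c, w in zip(codewords, weights) if w == d])

# Rows = blocks (supports of min-weight codewords), columns = points.
# The 7 weight-3 codewords give the Fano plane, a 2-(7,3,1) design.
print(f"d = {d}, incidence matrix shape {blocks.shape}")
print(blocks)
```
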
Projective Reed-Solomon (PRS) codes are Reed-Solomon codes of the maximum possible length q+1. The classification of deep holes (received words at the maximum possible error distance from the code) for PRS codes is an important and difficult problem. In this paper, we use algebraic methods to explicitly construct three classes of deep holes for PRS codes, and we show that these three classes completely classify all deep holes of PRS codes with redundancy at most four. Previously, the deep-hole classification was known only for PRS codes with redundancy at most three (arXiv:1612.05447).
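
The deep-hole definition can be checked by exhaustive search at toy scale. The sketch below enumerates a tiny PRS code over GF(5) (length q+1 = 6, dimension k = 2, so redundancy 4, the boundary case classified in the paper) and finds all words attaining the covering radius. This is purely illustrative: the paper's classification is algebraic, and brute force is infeasible at realistic parameters.

```python
# Brute-force deep holes of a tiny projective Reed-Solomon code over GF(5).
import itertools

q, k = 5, 2
pts = list(range(q))  # affine evaluation points; one extra coordinate "at infinity"

def encode(coeffs):
    # Evaluate f(x) = coeffs[0] + coeffs[1]*x + ... at all affine points; the
    # coordinate at infinity is the coefficient of x^(k-1) (standard PRS form).
    return tuple(sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q
                 for a in pts) + (coeffs[-1] % q,)

code = {encode(c) for c in itertools.product(range(q), repeat=k)}
n = q + 1

def dist_to_code(w):
    return min(sum(a != b for a, b in zip(w, c)) for c in code)

cov_radius, deep_holes = 0, []
for w in itertools.product(range(q), repeat=n):
    d = dist_to_code(w)
    if d > cov_radius:
        cov_radius, deep_holes = d, [w]
    elif d == cov_radius:
        deep_holes.append(w)
print(f"covering radius = {cov_radius}, number of deep holes = {len(deep_holes)}")
```
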
We discuss algorithms for combining sequential prediction strategies, a task that can be viewed as a natural generalisation of universal coding. We describe a graphical language based on Hidden Markov Models for defining prediction strategies, and we provide both existing and new models as examples. These include efficient, parameterless models for switching between the input strategies over time, a model for the case where switches tend to occur in clusters, and a new model for the scenario where the prediction strategies have a known relationship and jumps are typically between strongly related ones; this last model is relevant for coding time-series data where parameter drift is expected. As theoretical contributions, we introduce an interpolation construction that is useful in the development and analysis of new algorithms, and we establish a new lemma for analysing the individual-sequence regret of parameterised models.
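
As one classical instance of the kind of HMM-defined switching strategy described above, here is a minimal fixed-share mixture in the style of Herbster and Warmuth: an HMM with one hidden state per input strategy and a constant switch probability. The paper's richer models (clustered switches, jumps between related strategies) are not reproduced; the function name and defaults are assumptions.

```python
# Fixed-share mixing of expert prediction strategies under log loss: an HMM
# with one state per expert and constant switch probability alpha.
import numpy as np

def fixed_share(expert_probs: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """expert_probs[t, i]: probability expert i assigned to the symbol actually
    observed at time t. Returns the mixture's per-step probabilities."""
    T, N = expert_probs.shape
    w = np.full(N, 1.0 / N)            # posterior over the HMM's expert states
    out = np.empty(T)
    for t in range(T):
        out[t] = w @ expert_probs[t]   # mixture prediction for this step
        w = w * expert_probs[t]        # Bayesian update on the observation
        w /= w.sum()
        # HMM transition: stay with probability 1 - alpha, otherwise move to
        # a uniformly chosen expert.
        w = (1 - alpha) * w + alpha / N
    return out

# Code length of the switching mixture: -np.log(fixed_share(P)).sum() bits/nats,
# which stays close to the best sequence of experts with few switches.
```
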
The famous Barnes-Wall lattices can be obtained by applying Construction D to a chain of Reed-Muller codes. By applying Construction $D^{(\mathrm{cyc})}$ to a chain of extended cyclic codes sandwiched between Reed-Muller codes, Hu and Nebe (J. London Math. Soc. (2) 101 (2020) 1068-1089) constructed new series of universally strongly perfect lattices sandwiched between Barnes-Wall lattices. In this paper, we explicitly determine the minimum-weight codewords of those codes in some special cases.
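
Determining minimum-weight codewords, which the paper does analytically for codes sandwiched in a Reed-Muller chain, can be illustrated by exhaustive enumeration on a small member of that chain. The sketch below enumerates RM(1,3); Construction D, Construction $D^{(\mathrm{cyc})}$, and the lattices themselves are not implemented here.

```python
# Enumerate the first-order Reed-Muller code RM(1, m) for small m and list
# its minimum-weight codewords (weight 2^(m-1)).
import itertools
import numpy as np

m = 3
n = 2 ** m
# Generator rows of RM(1, m): the all-ones vector plus the m coordinate functions.
pts = np.array(list(itertools.product([0, 1], repeat=m)))
G = np.vstack([np.ones(n, dtype=int), pts.T])          # (m+1) x n

codewords = [(np.array(u) @ G) % 2 for u in itertools.product([0, 1], repeat=m + 1)]
weights = [int(c.sum()) for c in codewords]
d = min(w for w in weights if w > 0)
min_wt = [c for c, w in zip(codewords, weights) if w == d]
print(f"RM(1,{m}): length {n}, minimum weight {d}, {len(min_wt)} min-weight codewords")
# For RM(1, m), the minimum-weight codewords are exactly the indicators of
# affine hyperplanes in F_2^m (here: 14 of them for m = 3).
```
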
