
Universal Codes from Switching Strategies

Added by Wouter Koolen
Publication date: 2013
Language: English





We discuss algorithms for combining sequential prediction strategies, a task which can be viewed as a natural generalisation of the concept of universal coding. We describe a graphical language based on Hidden Markov Models for defining prediction strategies, and we provide both existing and new models as examples. The models include efficient, parameterless models for switching between the input strategies over time, including a model for the case where switches tend to occur in clusters, and finally a new model for the scenario where the prediction strategies have a known relationship, and where jumps are typically between strongly related ones. This last model is relevant for coding time series data where parameter drift is expected. As theoretical contributions we introduce an interpolation construction that is useful in the development and analysis of new algorithms, and we establish a new sophisticated lemma for analysing the individual sequence regret of parameterised models.
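As a rough illustration of the switching idea (not the paper's parameterless HMM models, and with a hand-picked switching rate alpha), the sketch below mixes the predictions of K input strategies with Fixed-Share-style weight sharing, so the combined predictor can follow different strategies in different segments of the sequence:

```python
# Minimal sketch of switching between prediction strategies; the switching
# rate `alpha` and the uniform-sharing step are assumptions for illustration,
# not the parameterless models described in the abstract.
import numpy as np

def switching_mixture(expert_probs, alpha=0.05):
    """expert_probs[t, k]: probability strategy k assigned to the symbol
    actually observed at time t. Returns the probabilities the switching
    mixture assigns to those same symbols."""
    T, K = expert_probs.shape
    w = np.full(K, 1.0 / K)                   # posterior weight per strategy
    mixture = np.empty(T)
    for t in range(T):
        mixture[t] = w @ expert_probs[t]      # predict by mixing strategies
        w = w * expert_probs[t] / mixture[t]  # Bayesian (loss) update
        w = (1 - alpha) * w + alpha / K       # allow a switch to any strategy
    return mixture

# Two strategies on a binary sequence: the first is good early, the second late.
probs = np.array([[0.9, 0.4], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]])
print(switching_mixture(probs))
```

The code length of the combination is then `-np.sum(np.log2(switching_mixture(probs)))` bits, which stays within a modest regret of the best sequence of strategies with few switches.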



Related research


A new deep-neural-network (DNN) based error correction encoder architecture for channels with feedback, called Deep Extended Feedback (DEF), is presented in this paper. The encoder in the DEF architecture transmits an information message followed by a sequence of parity symbols which are generated based on the message as well as the observations of the past forward channel outputs sent to the transmitter through a feedback channel. DEF codes generalize Deepcode [1] in several ways: parity symbols are generated based on forward-channel output observations over longer time intervals in order to provide better error correction capability; and high-order modulation formats are deployed in the encoder so as to achieve increased spectral efficiency. Performance evaluations show that DEF codes have better performance compared to other DNN-based codes for channels with feedback.
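A toy sketch of the general mechanism, with the architecture, dimensions, and names chosen here for illustration rather than taken from the DEF paper: an RNN emits each parity symbol from the message bits together with a noisy feedback observation of the symbol it transmitted at the previous step.

```python
# Toy feedback-code encoder sketch (our assumptions, not the DEF architecture):
# parity symbols depend on the message and on noisy feedback of past outputs.
import torch
import torch.nn as nn

class ToyFeedbackEncoder(nn.Module):
    def __init__(self, msg_len=8, hidden=32, n_parity=16):
        super().__init__()
        self.n_parity = n_parity
        self.rnn = nn.GRUCell(msg_len + 1, hidden)  # input: message bits + last feedback symbol
        self.out = nn.Linear(hidden, 1)             # one parity symbol per step

    def forward(self, msg_bits, noise_std=0.5):
        # msg_bits: (batch, msg_len) in {0, 1}
        batch = msg_bits.size(0)
        h = torch.zeros(batch, self.rnn.hidden_size)
        feedback = torch.zeros(batch, 1)            # receiver's view of the previous symbol
        parity = []
        for _ in range(self.n_parity):
            h = self.rnn(torch.cat([msg_bits.float(), feedback], dim=1), h)
            x = torch.tanh(self.out(h))             # transmitted parity symbol
            parity.append(x)
            feedback = x + noise_std * torch.randn_like(x)  # noisy feedback channel
        return torch.cat(parity, dim=1)

enc = ToyFeedbackEncoder()
print(enc(torch.randint(0, 2, (4, 8))).shape)       # torch.Size([4, 16])
```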
We consider an agent interacting with an unmodeled environment. At each time, the agent makes an observation, takes an action, and incurs a cost. Its actions can influence future observations and costs. The goal is to minimize the long-term average cost. We propose a novel algorithm, known as the active LZ algorithm, for optimal control based on ideas from the Lempel-Ziv scheme for universal data compression and prediction. We establish that, under the active LZ algorithm, if there exists an integer $K$ such that the future is conditionally independent of the past given a window of $K$ consecutive actions and observations, then the average cost converges to the optimum. Experimental results involving the game of Rock-Paper-Scissors illustrate merits of the algorithm.
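A much-simplified sketch of the windowing idea (the actual active LZ algorithm grows contexts via Lempel-Ziv parsing and uses dynamic-programming value estimates; the class and parameter names below are ours): condition cost estimates on the last K action-observation pairs and act greedily with occasional exploration.

```python
# Simplified illustration of acting on a length-K window of past
# actions/observations; not the active LZ algorithm itself.
import random
from collections import defaultdict

class WindowedController:
    def __init__(self, actions, K=2, epsilon=0.1):
        self.actions, self.K, self.epsilon = actions, K, epsilon
        self.history = []                   # alternating observations and actions
        self.cost_sum = defaultdict(float)  # (context, action) -> summed cost
        self.count = defaultdict(int)       # (context, action) -> visits

    def act(self, observation):
        self.history.append(observation)
        ctx = tuple(self.history[-2 * self.K:])   # window of recent history
        if random.random() < self.epsilon:        # explore occasionally
            action = random.choice(self.actions)
        else:                                     # otherwise pick the cheapest action so far
            action = min(self.actions,
                         key=lambda a: self.cost_sum[ctx, a] / max(self.count[ctx, a], 1))
        self.history.append(action)
        self._last = (ctx, action)
        return action

    def observe_cost(self, cost):
        ctx, action = self._last
        self.cost_sum[ctx, action] += cost
        self.count[ctx, action] += 1

# Usage against Rock-Paper-Scissors-style feedback:
ctrl = WindowedController(actions=["rock", "paper", "scissors"])
a = ctrl.act(observation="opponent_played_rock")
ctrl.observe_cost(0.0 if a == "paper" else 1.0)
```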
High-quality data is essential for training a robust deep learning model. While in other fields data is sparse and costly to collect, in error decoding it is free to query and label, which allows the data to be exploited. Utilizing this fact and inspired by active learning, two novel methods are introduced to improve Weighted Belief Propagation (WBP) decoding. These methods combine machine-learning concepts with error-decoding measures. For BCH(63,36), (63,45) and (127,64) codes with cycle-reduced parity-check matrices, improvements of up to 0.4 dB in the waterfall region and up to 1.5 dB in the error-floor region in FER over the original WBP are demonstrated by smartly sampling the data, without increasing inference (decoding) complexity. The proposed methods constitute example guidelines for enhancing a deep learning model by incorporating domain knowledge from the error-correction field. These guidelines can be adapted to any other deep-learning-based communication block.
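A minimal sketch of the "data is free, so sample it smartly" idea; the syndrome-based filter and the (7,4) Hamming code below are our stand-ins, not the sampling criteria or BCH codes used in the paper:

```python
# Sketch: generate labeled decoding data on the fly and keep only "informative"
# received words (nonzero syndrome after hard decision) for training a learned
# (weighted) BP decoder. Filter rule and code choice are illustrative assumptions.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],     # parity-check matrix of the (7,4) Hamming code
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def sample_training_batch(batch, sigma=0.8, rng=np.random.default_rng(0)):
    """Transmit the all-zero codeword over an AWGN channel (BPSK) and keep only
    received words that are not already trivially correct."""
    kept = []
    while len(kept) < batch:
        y = 1.0 + sigma * rng.standard_normal(7)   # BPSK: bit 0 -> +1
        hard = (y < 0).astype(int)                 # hard decision
        if np.any(H @ hard % 2):                   # nonzero syndrome: informative example
            kept.append(y)
    return np.stack(kept)

print(sample_training_batch(4).shape)              # (4, 7)
```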
We study Algebraic Geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.
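For context, the CSS construction turns a pair of nested classical codes into a quantum code. As a textbook-sized sketch (the [7,4,3] Hamming / [[7,1,3]] Steane example, not the Castle codes studied in the abstract), the snippet below verifies the orthogonality condition H_X H_Z^T = 0 (mod 2) that makes the X- and Z-type stabilizers commute:

```python
# Standard CSS example (Steane code from the Hamming code); not the
# Algebraic Geometry / Castle-code constructions discussed above.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],   # parity-check matrix of the [7,4,3] Hamming code
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

HX, HZ = H, H                          # dual-containing code: use H for both stabilizer types
assert not np.any(HX @ HZ.T % 2)       # CSS condition: rows orthogonal mod 2

n = H.shape[1]
k = n - HX.shape[0] - HZ.shape[0]      # number of logical qubits
print(f"[[{n},{k}]] CSS code")         # [[7,1]] (distance 3)
```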
Xiangyu Chen, Min Ye (2021)
Neural decoders were introduced as a generalization of the classic Belief Propagation (BP) decoding algorithms, where the Trellis graph in the BP algorithm is viewed as a neural network, and the weights in the Trellis graph are optimized by training the neural network. In this work, we propose a novel neural decoder for cyclic codes by exploiting their cyclically invariant property. More precisely, we impose a shift invariant structure on the weights of our neural decoder so that any cyclic shift of inputs results in the same cyclic shift of outputs. Extensive simulations with BCH codes and punctured Reed-Muller (RM) codes show that our new decoder consistently outperforms previous neural decoders when decoding cyclic codes. Finally, we propose a list decoding procedure that can significantly reduce the decoding error probability for BCH codes and punctured RM codes. For certain high-rate codes, the gap between our list decoder and the Maximum Likelihood decoder is less than $0.1$dB. Code available at https://github.com/cyclicallyneuraldecoder/CyclicallyEquivariantNeuralDecoders
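A toy sketch of the shift-invariance property itself (not the paper's decoder architecture): a linear layer with a circulant weight matrix commutes with cyclic shifts, so a cyclic shift of the input produces the same cyclic shift of the output.

```python
# Illustration of cyclic equivariance via weight sharing: with a circulant
# weight matrix, layer(roll(x)) == roll(layer(x)).
import numpy as np

def circulant(first_row):
    n = len(first_row)
    return np.stack([np.roll(first_row, i) for i in range(n)])

rng = np.random.default_rng(0)
n = 7
W = circulant(rng.standard_normal(n))    # weights shared across cyclic shifts
x = rng.standard_normal(n)

lhs = W @ np.roll(x, 1)                  # shift the input, then apply the layer
rhs = np.roll(W @ x, 1)                  # apply the layer, then shift the output
print(np.allclose(lhs, rhs))             # True
```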
