We present a new method for nonlinear prediction of discrete random sequences under minimal structural assumptions. We give a mathematical construction for optimal predictors of such processes, in the form of hidden Markov models. We then describe an algorithm, CSSR (Causal-State Splitting Reconstruction), which approximates the ideal predictor from data. We discuss the reliability of CSSR, its data requirements, and its performance in simulations. Finally, we compare our approach to existing methods using variable-length Markov models and cross-validated hidden Markov models, and show theoretically and experimentally that our method delivers results superior to the former and at least comparable to the latter.
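As a rough illustration of the state-splitting idea (a minimal sketch, not the authors' CSSR implementation), the code below groups suffixes of a discrete sequence into candidate predictive states by comparing their empirical next-symbol distributions. The `max_len` and `tol` parameters and the crude closeness test are stand-ins for the history length and the statistical significance test used by the real algorithm.

```python
from collections import Counter, defaultdict

def next_symbol_dists(seq, max_len):
    """Empirical next-symbol distribution conditioned on each suffix up to max_len."""
    counts = defaultdict(Counter)
    for L in range(max_len + 1):
        for t in range(L, len(seq)):
            counts[seq[t - L:t]][seq[t]] += 1
    return {s: {a: c / sum(cnt.values()) for a, c in cnt.items()}
            for s, cnt in counts.items()}

def close(p, q, alphabet, tol=0.05):
    """Crude distribution-equality check (stand-in for CSSR's significance test)."""
    return all(abs(p.get(a, 0.0) - q.get(a, 0.0)) <= tol for a in alphabet)

def split_states(seq, max_len=3, tol=0.05):
    """Group suffixes into candidate predictive states by next-symbol equivalence."""
    alphabet = sorted(set(seq))
    dists = next_symbol_dists(seq, max_len)
    states = []  # each state is a list of suffixes sharing one next-symbol distribution
    for suffix, dist in sorted(dists.items(), key=lambda kv: len(kv[0])):
        for state in states:
            if close(dist, dists[state[0]], alphabet, tol):
                state.append(suffix)
                break
        else:
            states.append([suffix])
    return states

# Example: a period-2 sequence collapses to a small number of predictive states.
print(split_states("01" * 200, max_len=2))
```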
Sequential learning systems are used in a wide variety of problems, from decision making to optimization, where they provide a belief (opinion) to nature and then update this belief based on the feedback (result) to minimize (or maximize) some cost o
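One classical instance of this belief-and-feedback loop is the exponential-weights (Hedge) update, sketched below as a generic illustration rather than the specific scheme the snippet studies; the learning rate `eta` and the synthetic loss matrix are arbitrary choices for the example.

```python
import numpy as np

def hedge(loss_matrix, eta=0.5):
    """Exponential-weights update: maintain a belief over experts, pay the cost of
    the current mixture each round, then downweight experts that did poorly."""
    n_rounds, n_experts = loss_matrix.shape
    weights = np.ones(n_experts)
    total_loss = 0.0
    for t in range(n_rounds):
        belief = weights / weights.sum()            # belief handed to "nature"
        total_loss += belief @ loss_matrix[t]       # feedback: cost of that belief
        weights *= np.exp(-eta * loss_matrix[t])    # multiplicative update
    return belief, total_loss

rng = np.random.default_rng(0)
losses = rng.random((100, 5))
losses[:, 2] *= 0.2   # expert 2 is consistently cheaper
final_belief, cum_loss = hedge(losses)
print(final_belief.round(3), round(cum_loss, 2))
```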
Efficiency criteria for conformal prediction, such as observed fuzziness (i.e., the sum of p-values associated with false labels), are commonly used to evaluate the performance of given conformal predictors. Here, we investigate whether i
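Observed fuzziness is straightforward to compute once conformal p-values are available. The sketch below sums the p-values of the false labels for each test example and then averages over examples; averaging (rather than reporting the raw sum over the test set) is an assumption made here for readability.

```python
import numpy as np

def observed_fuzziness(p_values, true_labels):
    """Average per-example sum of p-values assigned to the false labels.
    p_values: (n_examples, n_labels) array of conformal p-values.
    true_labels: (n_examples,) array of integer label indices."""
    n, _ = p_values.shape
    mask = np.ones_like(p_values, dtype=bool)
    mask[np.arange(n), true_labels] = False      # drop the true label's p-value
    return (p_values * mask).sum(axis=1).mean()

p = np.array([[0.9, 0.05, 0.1],
              [0.2, 0.8, 0.03]])
y = np.array([0, 1])
print(observed_fuzziness(p, y))  # (0.05 + 0.1 + 0.2 + 0.03) / 2 = 0.19
```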
A modification of Koetter-Kschischang codes for random networks is presented (these codes were also studied by Wang et al. in the context of authentication problems). The new codes have higher information rate, while maintaining the same error-correc
Neural networks are surprisingly good at interpolating and perform remarkably well when the training set examples resemble those in the test set. However, they are often unable to extrapolate patterns beyond the seen data, even when the abstractions
Graph-based clustering is one of the major clustering methods. Most such methods work in three separate steps: similarity graph construction, cluster-label relaxation, and label discretization with k-means. Such common practice has three disadvantages: 1) t
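For concreteness, the sketch below walks through that three-step pipeline in its standard spectral-clustering form (RBF affinity graph, normalized-Laplacian eigenvectors as the relaxed labels, k-means discretization). The dataset, `gamma`, and cluster count are illustrative choices, and this is the common practice the snippet criticizes rather than the paper's proposed method.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Step 1: similarity graph construction (dense RBF affinity, for brevity).
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
W = rbf_kernel(X, gamma=0.5)
np.fill_diagonal(W, 0.0)

# Step 2: relaxed cluster labels = eigenvectors of the normalized Laplacian.
d = W.sum(axis=1)
L_sym = np.eye(len(W)) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
eigvals, eigvecs = np.linalg.eigh(L_sym)
H = eigvecs[:, :3]                       # continuous (relaxed) indicators

# Step 3: discretization of the relaxed labels with k-means.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(H)
print(np.bincount(labels))
```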