
Blind Construction of Optimal Nonlinear Recursive Predictors for Discrete Sequences

Published by: Cosma Shalizi
Publication date: 2014
Language: English





We present a new method for nonlinear prediction of discrete random sequences under minimal structural assumptions. We give a mathematical construction for optimal predictors of such processes, in the form of hidden Markov models. We then describe an algorithm, CSSR (Causal-State Splitting Reconstruction), which approximates the ideal predictor from data. We discuss the reliability of CSSR, its data requirements, and its performance in simulations. Finally, we compare our approach to existing methods using variable-length Markov models and cross-validated hidden Markov models, and show theoretically and experimentally that our method delivers results superior to the former and at least comparable to the latter.
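To make the state-splitting idea concrete, here is a toy sketch of the core CSSR loop, not the authors' reference implementation: it grows suffix histories up to a hypothetical length max_len and groups them by their empirical next-symbol distributions, with a fixed total-variation threshold thresh standing in for the statistical significance test the paper actually uses; the final determinization and reliability analysis are omitted.

```python
from collections import Counter

def next_dist(seq, hist, alphabet):
    """Empirical next-symbol distribution following occurrences of `hist`."""
    n = len(hist)
    counts = Counter(seq[i + n] for i in range(len(seq) - n)
                     if seq[i:i + n] == hist)
    total = sum(counts.values())
    return None if total == 0 else {a: counts[a] / total for a in alphabet}

def tv(p, q):
    """Total-variation distance between two distributions on the same alphabet."""
    return 0.5 * sum(abs(p[a] - q[a]) for a in p)

def cssr_sketch(seq, alphabet, max_len=3, thresh=0.1):
    states = [[""]]                       # start with one state: the empty history
    for length in range(max_len):
        for state in list(states):
            for hist in list(state):
                if len(hist) != length:
                    continue
                for a in alphabet:        # extend each history one symbol into the past
                    child = a + hist
                    d = next_dist(seq, child, alphabet)
                    if d is None:
                        continue          # this longer history never occurs
                    # place the child with the first state that predicts alike,
                    # or split off a new state if none does
                    home = next((s for s in states
                                 if tv(d, next_dist(seq, s[0], alphabet)) < thresh),
                                None)
                    if home is None:
                        states.append([child])
                    elif child not in home:
                        home.append(child)
    return states

# usage on a simple binary sequence
states = cssr_sketch("0110100110010110" * 50, alphabet="01", max_len=3)
print(len(states), "estimated causal states")
```

Each resulting state is a set of histories that share (approximately) one next-symbol distribution, which is exactly the grouping the causal-state construction formalizes.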




Read also

Sequential learning systems are used in a wide variety of problems, from decision making to optimization, where they provide a belief (opinion) to nature and then update this belief based on the feedback (result) to minimize (or maximize) some cost or loss (conversely, utility or gain). The goal is to reach an objective by exploiting the temporal relation inherent to nature's feedback (state). By exploiting this relation, specific learning systems can be designed that perform asymptotically optimally for various applications. However, if the framework of the problem is not stationary, i.e., nature's state sometimes changes arbitrarily, the past cumulative belief revision done by the system may become useless, and the system may fail if it lacks adaptivity. While this adaptivity can be directly implemented in specific cases (e.g., convex optimization), it is mostly not straightforward for general learning tasks. To this end, we propose an efficient optimal mixture framework for general sequential learning systems, which we call recursive experts for dynamic environments. For this purpose, we design hyper-experts that incorporate the learning systems at our disposal and recursively merge them in a specific way to achieve minimax optimal regret bounds up to constant factors. The multiplicative increase in computational complexity from the initial system to our adaptive system is only a logarithmic-in-time factor.
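The abstract does not spell out the merging rule, so the sketch below illustrates the general idea with a standard fixed-share mixture (Herbster and Warmuth), the classic way to keep an exponential-weights combination of experts adaptive when the environment switches; eta and alpha are illustrative tuning parameters, not values from the paper.

```python
import numpy as np

def fixed_share(losses, eta=1.0, alpha=0.05):
    """losses: (T, K) array of per-round losses of K experts.
    Returns the mixture's expected loss in each round under evolving weights."""
    T, K = losses.shape
    w = np.full(K, 1.0 / K)                    # uniform prior over experts
    mix_loss = np.empty(T)
    for t in range(T):
        mix_loss[t] = w @ losses[t]            # predict with current weights
        w = w * np.exp(-eta * losses[t])       # exponential-weights update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / K        # share mass to allow switching
    return mix_loss
```

Each round the mixture predicts with the current weights, reweights experts exponentially by their losses, then redistributes a small fraction alpha of the mass uniformly; that sharing step is what lets the mixture recover after an arbitrary change in nature's state.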
Efficiency criteria for conformal prediction, such as observed fuzziness (i.e., the sum of p-values associated with false labels), are commonly used to evaluate the performance of given conformal predictors. Here, we investigate whether it is possible to exploit efficiency criteria to learn classifiers, both conformal predictors and point classifiers, by using such criteria as training objective functions. The proposed idea is implemented for the problem of binary classification of hand-written digits. By choosing a 1-dimensional model class (with one real-valued free parameter), we can solve the optimization problems through an (approximate) exhaustive search over (a discrete version of) the parameter space. Our empirical results suggest that conformal predictors trained by minimizing their observed fuzziness perform better than conformal predictors trained in the traditional way by minimizing the prediction error of the corresponding point classifier. They also have a reasonable performance in terms of their prediction error on the test set.
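As a toy rendition of that training idea (illustrative assumptions throughout, not the paper's hand-written-digits experiment): take a 1-dimensional threshold classifier with one free parameter theta, compute split-conformal p-values against a calibration set, and pick theta by exhaustive grid search on the observed fuzziness of a validation set.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic binary data: labels -1 and +1 with shifted 1-D features
x_cal = np.concatenate([rng.normal(-1, 1, 100), rng.normal(1, 1, 100)])
y_cal = np.concatenate([-np.ones(100), np.ones(100)])
x_val = np.concatenate([rng.normal(-1, 1, 50), rng.normal(1, 1, 50)])
y_val = np.concatenate([-np.ones(50), np.ones(50)])

def score(x, y, theta):
    """Nonconformity: large when x sits on the wrong side of the threshold."""
    return -y * (x - theta)

def p_value(x, y, theta):
    """Split-conformal p-value of the postulated label y for test point x."""
    cal = score(x_cal, y_cal, theta)
    return (np.sum(cal >= score(x, y, theta)) + 1) / (len(cal) + 1)

def observed_fuzziness(theta):
    """Sum of p-values of the *false* labels over the validation set."""
    return sum(p_value(x, -y, theta) for x, y in zip(x_val, y_val))

grid = np.linspace(-2, 2, 81)                 # discretized parameter space
best = min(grid, key=observed_fuzziness)      # exhaustive 1-D search
print(f"selected threshold: {best:.2f}")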
Vitaly Skachek (2009)
A modification of Koetter-Kschischang codes for random networks is presented (these codes were also studied by Wang et al. in the context of authentication problems). The new codes have a higher information rate while maintaining the same error-correcting capabilities. An efficient error-correcting algorithm is proposed for these codes.
Neural networks are surprisingly good at interpolating and perform remarkably well when the training set examples resemble those in the test set. However, they are often unable to extrapolate patterns beyond the seen data, even when the abstractions required for such patterns are simple. In this paper, we first review the notion of extrapolation, why it is important, and how one could hope to tackle it. We then focus on a specific type of extrapolation which is especially useful for natural language processing: generalization to sequences that are longer than the training ones. We hypothesize that models with separate content- and location-based attention are more likely to extrapolate than those with common attention mechanisms. We empirically support our claim for recurrent seq2seq models with our proposed attention on variants of the Lookup Table task. This sheds light on some striking failures of neural models for sequences and on possible methods for approaching such issues.
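The sketch below is one way to realize that hypothesis, not the paper's exact architecture: content scores come from query-key similarity alone, location scores from relative position alone, and the two attention distributions are mixed by a gate g (a fixed constant here, standing in for whatever learned mechanism a real model would use).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def split_attention(q, K, V, step, g=0.5, sigma=1.0):
    """q: (d,) decoder query; K, V: (T, d) encoder keys/values;
    step: current decoder position used for the location component."""
    content = K @ q / np.sqrt(K.shape[1])               # depends on content only
    pos = np.arange(K.shape[0])
    location = -((pos - step) ** 2) / (2 * sigma ** 2)  # depends on position only
    weights = g * softmax(content) + (1 - g) * softmax(location)
    return weights @ V
```

Because the location component never looks at the content, it can keep stepping along positions beyond any length seen in training, which is the intuition behind expecting better length extrapolation.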
Graph-based clustering is one of the major clustering methods. Most such methods work in three separate steps: similarity graph construction, cluster-label relaxation, and label discretization with k-means. This common practice has three disadvantages: 1) the predefined similarity graph is often fixed and may not be optimal for the subsequent clustering; 2) the relaxation of cluster labels may cause significant information loss; 3) label discretization may deviate from the real clustering result, since k-means is sensitive to the initialization of cluster centroids. To tackle these problems, in this paper we propose an effective discrete optimal graph clustering (DOGC) framework. A structured similarity graph that is theoretically optimal for clustering performance is adaptively learned under the guidance of a reasonable rank constraint. Besides, to avoid information loss, we explicitly enforce a discrete transformation on the intermediate continuous labels, which yields a tractable optimization problem with a discrete solution. Further, to compensate for the unreliability of the learned labels and enhance the clustering accuracy, we design an adaptive robust module that learns a prediction function for unseen data based on the learned discrete cluster labels. Finally, an iterative optimization strategy with guaranteed convergence is developed to solve directly for the clustering results. Extensive experiments conducted on both real and synthetic datasets demonstrate the superiority of our proposed methods compared with several state-of-the-art clustering approaches.
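For contrast with the proposed DOGC framework (which is not reproduced here), the sketch below spells out the three-step baseline the abstract criticizes: a Gaussian similarity graph fixed up front, a continuous spectral relaxation, and k-means discretization; sigma is an arbitrary bandwidth choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def three_step_clustering(X, k, sigma=1.0):
    # step 1: fixed Gaussian similarity graph (chosen once, never updated)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)
    # step 2: continuous relaxation -- bottom eigenvectors of the graph Laplacian
    L = np.diag(W.sum(1)) - W
    _, vecs = np.linalg.eigh(L)
    F = vecs[:, :k]                       # relaxed, real-valued label matrix
    # step 3: discretize with k-means (sensitive to centroid initialization)
    return KMeans(n_clusters=k, n_init=10).fit_predict(F)
```

Each of the three disadvantages listed above maps onto one step of this pipeline, which is what motivates learning the graph and the discrete labels jointly instead.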
