
Modelling Competitive Sports: Bradley-Terry-Élő Models for Supervised and On-Line Learning of Paired Competition Outcomes

Submitted by Franz J. Király
Publication date: 2017
Paper language: English





Prediction and modelling of competitive sports outcomes has received much recent attention, especially from the Bayesian statistics and machine learning communities. In the real world setting of outcome prediction, the seminal Élő update still remains, after more than 50 years, a valuable baseline which is difficult to improve upon, though in its original form it is a heuristic and not a proper statistical model. Mathematically, the Élő rating system is very closely related to the Bradley-Terry models, which are usually used in an explanatory fashion rather than in a predictive supervised or on-line learning setting. Exploiting this close link between these two model classes and some newly observed similarities, we propose a new supervised learning framework with close similarities to logistic regression, low-rank matrix completion and neural networks. Building on it, we formulate a class of structured log-odds models, unifying the desirable properties found in the above: supervised probabilistic prediction of scores and wins/draws/losses, batch/epoch and on-line learning, as well as the possibility to incorporate features in the prediction, without having to sacrifice simplicity, parsimony of the Bradley-Terry models, or computational efficiency of Élő's original approach. We validate the structured log-odds modelling approach in synthetic experiments and English Premier League outcomes, where the added expressivity yields the best predictions reported in the state of the art, close to the quality of contemporary betting odds.
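
To make the link between the two model classes concrete, the following is a minimal, illustrative Python sketch (not code from the paper): the Bradley-Terry win probability written as a logistic function of a rating difference, together with the classical on-line Élő update. The scale of 400 and K-factor of 32 are the conventional chess defaults, chosen here purely for illustration.

def win_probability(rating_a: float, rating_b: float, scale: float = 400.0) -> float:
    """Bradley-Terry / Élő expected score for player A against player B.

    Equivalent to a logistic function of the rating difference,
    which is the structural link between the two model classes.
    """
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / scale))

def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """One on-line Élő update after a single game.

    score_a is 1.0 for a win of A, 0.5 for a draw, 0.0 for a loss.
    Returns the updated ratings of A and B.
    """
    expected_a = win_probability(rating_a, rating_b)
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# Example: two equally rated teams, A wins the first match.
r_a, r_b = 1500.0, 1500.0
print(win_probability(r_a, r_b))     # 0.5 before any evidence
r_a, r_b = elo_update(r_a, r_b, 1.0)
print(r_a, r_b)                      # 1516.0, 1484.0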


Read also

John T. Whelan (2017)
The Bradley-Terry model assigns probabilities for the outcome of paired comparison experiments based on strength parameters associated with the objects being compared. We consider different proposed choices of prior parameter distributions for Bayesian inference of the strength parameters based on the paired comparison results. We evaluate them according to four desiderata motivated by the use of inferred Bradley-Terry parameters to rate teams on the basis of outcomes of a set of games: invariance under interchange of teams, invariance under interchange of winning and losing, normalizability and invariance under elimination of teams. We consider various proposals which fail to satisfy one or more of these desiderata, and illustrate two proposals which satisfy them. Both are one-parameter independent distributions for the logarithms of the team strengths: 1) Gaussian and 2) Type III generalized logistic.
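
For reference, a minimal statement of the model these priors attach to, written in the log-strength parametrisation the abstract refers to (the symbols \lambda_i and p_\theta are our own illustrative notation):

P(i \text{ beats } j) \;=\; \frac{\pi_i}{\pi_i + \pi_j}
  \;=\; \frac{1}{1 + \exp\{-(\lambda_i - \lambda_j)\}},
\qquad \lambda_i = \log \pi_i,
\qquad \lambda_i \overset{\text{iid}}{\sim} p_\theta,

with p_\theta taken to be either a Gaussian or a Type III generalized logistic distribution, the two one-parameter choices described above.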
Recently proposed consistency-based Semi-Supervised Learning (SSL) methods such as the $\Pi$-model, temporal ensembling, the mean teacher, or the virtual adversarial training, have advanced the state of the art in several SSL tasks. These methods can typically reach performances that are comparable to their fully supervised counterparts while using only a fraction of labelled examples. Despite these methodological advances, the understanding of these methods is still relatively limited. In this text, we analyse (variations of) the $\Pi$-model in settings where analytically tractable results can be obtained. We establish links with Manifold Tangent Classifiers and demonstrate that the quality of the perturbations is key to obtaining reasonable SSL performances. Importantly, we propose a simple extension of the Hidden Manifold Model that naturally incorporates data-augmentation schemes and offers a framework for understanding and experimenting with SSL methods.
We propose a time-varying generalization of the Bradley-Terry model that allows for nonparametric modeling of dynamic global rankings of distinct teams. We develop a novel estimator that relies on kernel smoothing to pre-process the pairwise comparisons over time and is applicable in sparse settings where the Bradley-Terry model may not be fit. We obtain necessary and sufficient conditions for the existence and uniqueness of our estimator. We also derive time-varying oracle bounds for both the estimation error and the excess risk in the model-agnostic setting where the Bradley-Terry model is not necessarily the true data generating process. We thoroughly test the practical effectiveness of our model using both simulated and real world data and suggest an efficient data-driven approach for bandwidth tuning.
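
As an illustration of the kernel-smoothing idea, one natural way to write such a time-varying estimator is as the maximiser of a kernel-weighted Bradley-Terry log-likelihood; the following is a sketch in our own notation and is not claimed to be the authors' exact estimator:

\hat{\lambda}(t) \;\in\; \arg\max_{\lambda \in \mathbb{R}^M}
  \sum_{s} K_h(t - t_s)\,
  \Bigl[\, y_s (\lambda_{i_s} - \lambda_{j_s})
        \;-\; \log\bigl(1 + e^{\,\lambda_{i_s} - \lambda_{j_s}}\bigr) \Bigr],

where the sum runs over comparisons s played at times t_s between teams i_s and j_s, y_s \in \{0,1\} records whether i_s won, K_h is a kernel with bandwidth h, and an identifiability constraint (e.g. the log-strengths summing to zero) is imposed.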
A common problem faced in statistical inference is drawing conclusions from paired comparisons, in which two objects compete and one is declared the victor. A probabilistic approach to such a problem is the Bradley-Terry model, first studied by Zermelo in 1929 and rediscovered by Bradley and Terry in 1952. One obvious area of application for such a model is sporting events, and in particular Major League Baseball. With this in mind, we describe a hierarchical Bayesian version of Bradley-Terry suitable for use in ranking and prediction problems, and compare results from these application domains to standard maximum likelihood approaches. Our Bayesian methods outperform the MLE-based analogues, while being simple to construct, implement, and interpret.
In many application settings, the data have missing entries which make analysis challenging. An abundant literature addresses missing values in an inferential framework: estimating parameters and their variance from incomplete tables. Here, we consider supervised-learning settings: predicting a target when missing values appear in both training and testing data. We show the consistency of two approaches in prediction. A striking result is that the widely-used method of imputing with a constant, such as the mean, prior to learning is consistent when missing values are not informative. This contrasts with inferential settings, where mean imputation is criticised for distorting the distribution of the data. That such a simple approach can be consistent is important in practice. We also show that a predictor suited for complete observations can predict optimally on incomplete data, through multiple imputation. Finally, to compare imputation with learning directly with a model that accounts for missing values, we further analyze decision trees. These can naturally tackle empirical risk minimization with missing values, due to their ability to handle the half-discrete nature of incomplete variables. After comparing theoretically and empirically different missing values strategies in trees, we recommend using the "missing incorporated in attribute" method, as it can handle both non-informative and informative missing values.
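
A minimal sketch of the two strategies discussed above, using scikit-learn on synthetic data; the data-generating process is purely illustrative, and HistGradientBoostingRegressor's native missing-value routing is used here as a readily available stand-in for the "missing incorporated in attribute" idea:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=n)

# Make 20% of the entries missing completely at random (non-informative).
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

# Strategy 1: constant (mean) imputation before a standard learner.
mean_then_ridge = make_pipeline(SimpleImputer(strategy="mean"), Ridge())

# Strategy 2: trees that route missing values natively (MIA-like behaviour).
native_trees = HistGradientBoostingRegressor()

for name, model in [("mean imputation + ridge", mean_then_ridge),
                    ("native NaN handling in trees", native_trees)]:
    score = cross_val_score(model, X_missing, y, cv=5, scoring="r2").mean()
    print(f"{name}: R^2 = {score:.3f}")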
