
A truncated mean-parameterised Conway-Maxwell-Poisson model for the analysis of Test match bowlers

Submitted by Pete Philipson
Publication date: 2021
Research field: Mathematical Statistics
Language: English
Author: Pete Philipson





Assessing the relative merits of sportsmen and women whose careers took place far apart in time via a suitable statistical model is a complex task, as any comparison is compromised by fundamental changes to the sport and society and is often handicapped by the popularity of inappropriate traditional metrics. In this work we focus on cricket and the ranking of Test match bowlers using bowling data from the first Test in 1877 onwards. A truncated, mean-parameterised Conway-Maxwell-Poisson model is developed to handle the under- and overdispersed nature of the data, which are in the form of small counts, and to extract the innate ability of individual bowlers. Inferences are made using a Bayesian approach by deploying a Markov Chain Monte Carlo algorithm to obtain parameter estimates and credible intervals. The model offers a good fit and indicates that the commonly used bowling average is a flawed measure.
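
As a concrete illustration of the distribution named above, here is a minimal sketch (not the paper's code) of a truncated, mean-parameterised Conway-Maxwell-Poisson pmf in Python. The dispersion parameter nu gives underdispersion for nu > 1 and overdispersion for nu < 1 relative to the Poisson case (nu = 1), and the rate lambda is solved for numerically so that the truncated distribution has a target mean mu. The truncation point max_y and the example values below are illustrative assumptions, not figures taken from the paper.

```python
# Minimal sketch of a truncated, mean-parameterised COM-Poisson pmf.
# Assumptions: support truncated to {0, ..., max_y}; mu and nu chosen for illustration.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln

def truncated_cmp_pmf(mu, nu, max_y):
    y = np.arange(max_y + 1)

    def pmf_for(lam):
        log_w = y * np.log(lam) - nu * gammaln(y + 1)   # log of lambda^y / (y!)^nu
        w = np.exp(log_w - log_w.max())                 # stabilised unnormalised weights
        return w / w.sum()                              # normalise over the truncated support

    # Mean parameterisation: choose lambda so the truncated mean equals mu.
    lam = brentq(lambda l: pmf_for(l) @ y - mu, 1e-10, 1e10)
    return pmf_for(lam)

# Overdispersed example (nu < 1) on a small-count support.
print(truncated_cmp_pmf(mu=2.0, nu=0.7, max_y=10))
```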


Read also

Ricardo S Ehlers (2019)
We propose a flexible model for count time series which has potential uses for both underdispersed and overdispersed data. The model is based on the Conway-Maxwell-Poisson (COM-Poisson) distribution with parameters varying along time to take serial correlation into account. Model estimation is challenging, however, and requires the application of recently proposed methods to deal with the intractable normalising constant as well as efficient sampling from the COM-Poisson distribution.
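
For reference, the COM-Poisson probability mass function underlying this model, written here with a time index to reflect the time-varying parameters; the infinite sum $Z$ is the intractable normalising constant referred to above:

\[
P(Y_t = y \mid \lambda_t, \nu) \;=\; \frac{\lambda_t^{\,y}}{(y!)^{\nu}\, Z(\lambda_t, \nu)},
\qquad
Z(\lambda_t, \nu) \;=\; \sum_{j=0}^{\infty} \frac{\lambda_t^{\,j}}{(j!)^{\nu}}.
\]
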
Sequential, multiple assignment, randomized trial (SMART) designs have become increasingly popular in the field of precision medicine by providing a means for comparing sequences of treatments tailored to the individual patient, i.e., dynamic treatment regimes (DTRs). The construction of evidence-based DTRs promises a replacement for ad hoc, one-size-fits-all decisions pervasive in patient care. However, there are substantial statistical challenges in sizing SMART designs due to the complex correlation structure between the DTRs embedded in the design. Since the primary goal of SMARTs is the construction of an optimal DTR, investigators are interested in sizing SMARTs based on the ability to screen out DTRs inferior to the optimal DTR by a given amount, which cannot be done using existing methods. In this paper, we fill this gap by developing a rigorous power analysis framework that leverages the multiple comparisons with the best methodology. Our method employs Monte Carlo simulation to compute the minimum number of individuals to enroll in an arbitrary SMART. We evaluate our method through extensive simulation studies and illustrate it by retrospectively computing the power in the Extending Treatment Effectiveness of Naltrexone SMART study.
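
The following is a generic sketch of the Monte Carlo sizing idea described above, reduced to a simple two-arm comparison; the SMART-specific machinery (embedded DTRs, multiple comparisons with the best) is not reproduced, and the effect size, significance level and test are illustrative assumptions.

```python
# Generic Monte Carlo power calculation: simulate many trials of size n under an
# assumed effect, apply the test, and report the fraction of rejections.
import numpy as np
from scipy import stats

def monte_carlo_power(n, effect=0.5, alpha=0.05, n_sims=2000, seed=1):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect, 1.0, n)
        _, p = stats.ttest_ind(treated, control)
        rejections += p < alpha
    return rejections / n_sims

# Smallest per-arm sample size (on a coarse grid) reaching 80% power in this toy setting.
for n in range(20, 201, 10):
    if monte_carlo_power(n) >= 0.80:
        print("approximate per-arm sample size:", n)
        break
```
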
Exploratory spatial data analysis (ESDA) plays a key role in research that includes geographic data. In ESDA, analysts often want to be able to visualize observations and local relationships on a map. However, software dedicated to visualizing local spatial relations between multiple variables in high-dimensional datasets remains undeveloped. This paper introduces gwpcorMapper, a newly developed software application for mapping geographically weighted correlation and partial correlation in large multivariate datasets. gwpcorMapper facilitates ESDA by giving researchers the ability to interact with map components that describe local correlative relationships. We built gwpcorMapper using the R Shiny framework. The software inherits its core algorithm from GWpcor, an R library for calculating the geographically weighted correlation and partial correlation statistics. We demonstrate the application of gwpcorMapper by using it to explore census data in order to find meaningful relationships that describe the work-life environment in the 23 special wards of Tokyo, Japan. We show that gwpcorMapper is useful in both variable selection and parameter tuning for geographically weighted statistics. gwpcorMapper highlights that there are strong, statistically clear local variations in the relationship between the number of commuters and the total number of hours worked when considering the total population in each district across the 23 special wards of Tokyo. Our application demonstrates that the ESDA process with high-dimensional geospatial data using gwpcorMapper has applications across multiple fields.
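
For context, one standard definition of the geographically weighted correlation that tools of this kind compute (the exact formulation used by GWpcor may differ) is, at a target location $u$ with kernel weights $w_j(u)$ that decay with the distance from $u$ to observation $j$,

\[
\rho_{xy}(u) \;=\; \frac{c_{xy}(u)}{\sqrt{c_{xx}(u)\, c_{yy}(u)}},
\qquad
c_{xy}(u) \;=\; \frac{\sum_j w_j(u)\,\big(x_j - \bar{x}_w(u)\big)\big(y_j - \bar{y}_w(u)\big)}{\sum_j w_j(u)},
\]

where $\bar{x}_w(u)$ and $\bar{y}_w(u)$ are the corresponding locally weighted means.
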
Infectious diseases on farms pose both public and animal health risks, so understanding how they spread between farms is crucial for developing disease control strategies to prevent future outbreaks. We develop novel Bayesian nonparametric methodology to fit spatial stochastic transmission models in which the infection rate between any two farms is a function that depends on the distance between them, but without assuming a specified parametric form. Making nonparametric inference in this context is challenging since the likelihood function of the observed data is intractable because the underlying transmission process is unobserved. We adopt a fully Bayesian approach by assigning a transformed Gaussian Process prior distribution to the infection rate function, and then develop an efficient data augmentation Markov Chain Monte Carlo algorithm to perform Bayesian inference. We use the posterior predictive distribution to simulate the effect of different disease control methods and their economic impact. We analyse a large outbreak of Avian Influenza in the Netherlands and infer the between-farm infection rate, as well as the unknown infection status of farms which were pre-emptively culled. We use our results to analyse ring-culling strategies, and conclude that although effective, ring-culling has limited impact in high-density areas.
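
One plausible formalisation of the prior sketched above (the log link and the GP notation are illustrative assumptions, not taken from the abstract) is to let the infection rate between farms $i$ and $j$ depend on their distance $d_{ij}$ through

\[
\beta(d_{ij}) \;=\; \exp\{ f(d_{ij}) \},
\qquad
f \sim \mathcal{GP}\big(m(\cdot),\, k(\cdot,\cdot)\big),
\]

which keeps the rate positive while leaving the shape of the distance effect unspecified.
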
The analysis of rank ordered data has a long history in the statistical literature across a diverse range of applications. In this paper we consider the Extended Plackett-Luce model that induces a flexible (discrete) distribution over permutations. The parameter space of this distribution is a combination of potentially high-dimensional discrete and continuous components and this presents challenges for parameter interpretability and also posterior computation. Particular emphasis is placed on the interpretation of the parameters in terms of observable quantities and we propose a general framework for preserving the mode of the prior predictive distribution. Posterior sampling is achieved using an effective simulation-based approach that does not require imposing restrictions on the parameter space. Working in the Bayesian framework permits a natural representation of the posterior predictive distribution and we draw on this distribution to address the rank aggregation problem and also to identify potential lack of model fit. The flexibility of the Extended Plackett-Luce model along with the effectiveness of the proposed sampling scheme are demonstrated using several simulation studies and real data examples.
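
For context, the standard Plackett-Luce model, which the Extended version generalises, assigns to a ranking $\pi$ of $K$ items with positive worth parameters $w_1, \dots, w_K$ the probability

\[
P(\pi \mid w) \;=\; \prod_{i=1}^{K} \frac{w_{\pi(i)}}{\sum_{j=i}^{K} w_{\pi(j)}},
\]

so that items are chosen for successive ranks in proportion to their remaining worths; the additional discrete components of the Extended model are not reproduced here.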
