
Calibrating generalized predictive distributions

Posted by Ryan Martin
Publication date: 2021
Research field: Mathematical Statistics
Language: English





In prediction problems, it is common to model the data-generating process and then use a model-based procedure, such as a Bayesian predictive distribution, to quantify uncertainty about the next observation. However, if the posited model is misspecified, then its predictions may not be calibrated -- that is, the predictive distribution's quantiles may not be nominal frequentist prediction upper limits, even asymptotically. Rather than abandoning the comfort of a model-based formulation for a more complicated non-model-based approach, here we propose a strategy in which the data itself helps determine whether the assumed model-based solution should be adjusted to account for model misspecification. This is achieved through a generalized Bayes formulation in which a learning rate parameter is tuned, via the proposed generalized predictive calibration (GPrC) algorithm, to make the predictive distribution calibrated, even under model misspecification. Extensive numerical experiments are presented, under a variety of settings, demonstrating the proposed GPrC algorithm's validity, efficiency, and robustness.
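To make the idea concrete, here is a minimal sketch (not the authors' implementation) of tuning a learning rate so that a generalized Bayes predictive upper limit attains roughly nominal coverage. It assumes a normal-location working model with a flat prior and a known working scale `sigma0`; the bootstrap coverage check and the grid of candidate rates are illustrative choices of ours.

```python
import numpy as np
from scipy import stats

def gpb_predictive_upper(x, eta, alpha, sigma0=1.0):
    """Upper (1 - alpha) predictive limit from a generalized Bayes
    normal-location model with flat prior and learning rate eta:
    the predictive is N(xbar, sigma0**2 * (1 + 1/(n*eta)))."""
    n = len(x)
    sd = sigma0 * np.sqrt(1.0 + 1.0 / (n * eta))
    return x.mean() + stats.norm.ppf(1 - alpha) * sd

def calibrate_eta(x, alpha=0.1, etas=np.linspace(0.05, 3.0, 60), B=500, seed=0):
    """Pick the learning rate whose bootstrap-estimated coverage of the
    predictive upper limit is closest to the nominal 1 - alpha."""
    rng = np.random.default_rng(seed)
    n = len(x)
    best_eta, best_gap = None, np.inf
    for eta in etas:
        cover = 0
        for _ in range(B):
            xb = x[rng.integers(0, n, size=n)]   # bootstrap copy of the data
            xnew = x[rng.integers(0, n)]         # bootstrap draw for the "next" observation
            cover += (xnew <= gpb_predictive_upper(xb, eta, alpha))
        gap = abs(cover / B - (1 - alpha))
        if gap < best_gap:
            best_eta, best_gap = eta, gap
    return best_eta
```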


Read also

Vaidehi Dixit, Ryan Martin (2020)
Mixture models are commonly used when data show signs of heterogeneity and, often, it is important to estimate the distribution of the latent variable responsible for that heterogeneity. This is a common problem for data taking values in a Euclidean space, but work on mixing distribution estimation based on directional data taking values on the unit sphere is limited. In this paper, we propose using the predictive recursion (PR) algorithm to solve for a mixture on a sphere. One key feature of PR is its computational efficiency. Moreover, compared to likelihood-based methods that only support finite mixing distribution estimates, PR is able to estimate a smooth mixing density. PR's asymptotic consistency in spherical mixture models is established, and simulation results showcase its benefits compared to existing likelihood-based methods. We also show two real-data examples to illustrate how PR can be used for goodness-of-fit testing and clustering.
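As a rough illustration of the recursion itself (in the simpler Euclidean case rather than on the sphere, and not the paper's code), the sketch below runs one pass of predictive recursion with a normal kernel over a grid of mixing locations; the weight sequence and kernel scale are illustrative.

```python
import numpy as np
from scipy import stats

def predictive_recursion(x, grid, kernel_sd=1.0, gamma=0.67):
    """One-pass predictive recursion estimate of a mixing density on a grid.
    Each observation reweights the current estimate by its kernel likelihood
    and mixes the result back in with a decaying weight."""
    f = np.full(grid.shape, 1.0 / (grid[-1] - grid[0]))   # uniform starting guess
    dx = grid[1] - grid[0]
    for i, xi in enumerate(x, start=1):
        w = (i + 1.0) ** (-gamma)                          # weights decay but sum to infinity
        lik = stats.norm.pdf(xi, loc=grid, scale=kernel_sd)
        denom = np.sum(lik * f) * dx                       # current marginal density at x_i
        f = (1.0 - w) * f + w * lik * f / denom
    return f

# Example: data from a two-component normal mixture
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
grid = np.linspace(-8, 8, 400)
f_hat = predictive_recursion(rng.permutation(x), grid)
```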
We propose and analyze a generalized splitting method to sample approximately from a distribution conditional on the occurrence of a rare event. This has important applications in a variety of contexts in operations research, engineering, and computational statistics. The method uses independent trials starting from a single particle. We exploit this independence to obtain asymptotic and non-asymptotic bounds on the total variation error of the sampler. Our main finding is that the approximation error depends crucially on the relative variability of the number of points produced by the splitting algorithm in one run, and that this relative variability can be readily estimated via simulation. We illustrate the relevance of the proposed method on an application in which one needs to sample (approximately) from an intractable posterior density in Bayesian inference.
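For intuition, here is a toy sketch under our own simplifying assumptions (not the paper's algorithm verbatim): a level-based splitting sampler for the rare event {X > 3} with X ~ N(0, 1), run independently many times from a single starting particle, followed by a simulation estimate of the relative variability of the number of points each run produces.

```python
import numpy as np

def splitting_run(levels, n_split=10, n_mcmc=10, step=0.5, rng=None):
    """One independent splitting run targeting {X > levels[-1]}, X ~ N(0, 1).
    Survivors of each level are split and moved by Metropolis steps that stay
    above that level; the run returns the points reaching the final level."""
    rng = rng or np.random.default_rng()
    particles = [rng.normal()]                        # single starting particle
    for level in levels:
        survivors = [x for x in particles if x > level]
        particles = []
        for x in survivors:
            for _ in range(n_split):
                y = x
                for _ in range(n_mcmc):               # random-walk Metropolis within {X > level}
                    prop = y + step * rng.normal()
                    if prop > level and rng.random() < np.exp(0.5 * (y**2 - prop**2)):
                        y = prop
                particles.append(y)
    return particles

# The error analysis hinges on the run-to-run variability of the number of
# points, which is easy to estimate by repeating independent runs.
rng = np.random.default_rng(1)
sizes = [len(splitting_run([1.0, 2.0, 3.0], rng=rng)) for _ in range(500)]
relative_variability = np.std(sizes) / np.mean(sizes)
```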
Under measurement constraints, responses are expensive to measure and initially unavailable on most of the records in the dataset, but the covariates are available for the entire dataset. Our goal is to sample a relatively small portion of the dataset where the expensive responses will be measured and the resultant sampling estimator is statistically efficient. Measurement constraints require that the sampling probabilities depend on only a very small set of the responses. A sampling procedure that uses responses on at most a small pilot sample will be called response-free. We propose a response-free sampling procedure (OSUMC) for generalized linear models (GLMs). Using the A-optimality criterion, i.e., the trace of the asymptotic variance, the resultant estimator is statistically efficient within a class of sampling estimators. We establish the unconditional asymptotic distribution of a general class of response-free sampling estimators. This result is novel compared with the existing conditional results obtained by conditioning on both covariates and responses. Under our unconditional framework, the subsamples are no longer independent, and new martingale techniques are developed for our asymptotic theory. We further derive the A-optimal response-free sampling distribution. Since this distribution depends on population-level quantities, we propose the Optimal Sampling Under Measurement Constraints (OSUMC) algorithm to approximate the theoretical optimal sampling. Finally, we conduct an intensive empirical study to demonstrate the advantages of the OSUMC algorithm over existing methods from both statistical and computational perspectives.
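Below is a simplified pilot-then-subsample sketch in the spirit of response-free sampling for a logistic GLM. The weighting rule (pilot-predicted information times covariate norm) is a stand-in of ours, not the paper's exact A-optimal OSUMC weights, and `measure_y` is a hypothetical callable standing in for the expensive response measurement.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def response_free_weights(X, beta_pilot, intercept_pilot):
    """Covariate-only sampling weights built from a pilot logistic fit.
    A simplified stand-in for A-optimal weights, not the OSUMC rule."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta_pilot + intercept_pilot)))
    w = np.sqrt(p * (1 - p)) * np.linalg.norm(X, axis=1)
    return w / w.sum()

def osumc_style_fit(X, measure_y, pilot_size=200, sub_size=1000, seed=0):
    """Pilot-then-subsample: responses are measured only on the pilot set and
    on the weighted subsample; the final fit uses inverse-probability weights."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    pilot_idx = rng.choice(n, pilot_size, replace=False)
    pilot = LogisticRegression().fit(X[pilot_idx], measure_y(pilot_idx))
    probs = response_free_weights(X, pilot.coef_.ravel(), pilot.intercept_[0])
    idx = rng.choice(n, sub_size, replace=True, p=probs)
    return LogisticRegression().fit(X[idx], measure_y(idx),
                                    sample_weight=1.0 / probs[idx])
```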
Environmental variability often has substantial impacts on natural populations and communities through its effects on the performance of individuals. Because organisms' responses to environmental conditions are often nonlinear (e.g., decreasing performance on both sides of an optimal temperature), the mean response is often different from the response in the mean environment. Ye et al. (2020) proposed testing for the presence of such variance effects on individual or population growth rates by estimating the Jensen Effect, the difference in average growth rates under varying versus fixed environments, in functional single index models for environmental effects on growth. In this paper, we extend this analysis to the effects of environmental variance on reproduction and survival, which have count and binary outcomes. In the standard generalized linear models used to analyze such data, the direction of the Jensen Effect is tacitly assumed a priori by the model's link function. Here we extend the methods of Ye et al. (2020), using a generalized single index model, to test whether this assumed direction is contradicted by the data. We show that our test has reasonable power under mild alternatives, but requires sample sizes that are larger than are often available. We demonstrate our methods on a long-term time series of plant ground cover on the Idaho steppe.
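The quantity being tested, the Jensen Effect, is simply the gap between the average response in a varying environment and the response at the mean environment. A minimal numerical illustration follows, using a toy performance curve of our own rather than the paper's data or its single index model.

```python
import numpy as np

def jensen_effect(g, env_samples):
    """Jensen effect: mean response across a varying environment minus the
    response evaluated at the mean environment."""
    env = np.asarray(env_samples)
    return g(env).mean() - g(env.mean())

# Example: a thermal performance curve peaked at 20 degrees; here variability
# around the optimum lowers average performance, so the effect is negative.
rng = np.random.default_rng(0)
temps = rng.normal(20.0, 4.0, size=10_000)
perf = lambda t: np.exp(-0.5 * ((t - 20.0) / 5.0) ** 2)
print(jensen_effect(perf, temps))
```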
Generalized Bayes posterior distributions are formed by putting a fractional power on the likelihood before combining with the prior via Bayes's formula. This fractional power, which is often viewed as a remedy for potential model misspecification bias, is called the learning rate, and a number of data-driven learning rate selection methods have been proposed in the recent literature. Each of these proposals has a different focus, a different target it aims to achieve, which makes them difficult to compare. In this paper, we provide a direct head-to-head comparison of these learning rate selection methods in various misspecified model scenarios, in terms of several relevant metrics, in particular the coverage probability of the generalized Bayes credible regions. In some examples all the methods perform well, while in others the misspecification is too severe to be overcome, but we find that the so-called generalized posterior calibration algorithm tends to outperform the others in terms of credible region coverage probability.
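For reference, a generalized Bayes posterior simply raises the likelihood to the learning rate before combining with the prior. Below is a minimal grid-based sketch; the normal-location working model and flat prior are our illustrative choices, and this shows only the object that the compared selection methods tune, not any of the methods themselves.

```python
import numpy as np
from scipy import stats

def generalized_posterior(loglik, logprior, theta_grid, eta):
    """Generalized Bayes posterior on a grid: likelihood**eta times prior,
    renormalized to integrate to one over the grid."""
    logpost = eta * loglik(theta_grid) + logprior(theta_grid)
    logpost -= logpost.max()                  # stabilize before exponentiating
    w = np.exp(logpost)
    return w / (w.sum() * (theta_grid[1] - theta_grid[0]))

# Example: normal-location working model with a flat prior; eta < 1 flattens
# the posterior and widens its credible intervals.
x = stats.norm.rvs(size=50, random_state=1)
grid = np.linspace(-1.0, 1.0, 2001)
loglik = lambda th: np.array([stats.norm.logpdf(x, loc=mu).sum() for mu in th])
logflat = lambda th: np.zeros_like(th)
post_full = generalized_posterior(loglik, logflat, grid, eta=1.0)
post_tempered = generalized_posterior(loglik, logflat, grid, eta=0.5)
```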