
Does Bayesian Model Averaging improve polynomial extrapolations? Two toy problems as tests

Submitted by: Daniel Phillips
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





We assess the accuracy of Bayesian polynomial extrapolations from small parameter values, x, to large values of x. We consider a set of polynomials of fixed order, intended as a proxy for a fixed-order effective field theory (EFT) description of data. We employ Bayesian Model Averaging (BMA) to combine results from different order polynomials (EFT orders). Our study considers two toy problems where the underlying function used to generate data sets is known. We use Bayesian parameter estimation to extract the polynomial coefficients that describe these data at low x. A naturalness prior is imposed on the coefficients, so that they are O(1). We Bayesian-Model-Average different polynomial degrees by weighting each according to its Bayesian evidence and compare the predictive performance of this Bayesian Model Average with that of the individual polynomials. The credibility intervals on the BMA forecast have the stated coverage properties more consistently than does the highest evidence polynomial, though BMA does not necessarily outperform every polynomial.
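As a concrete illustration of the evidence-weighted averaging described above, the following is a minimal sketch, not the authors' code: it fits fixed-order polynomials to toy low-x data with a naturalness prior c_i ~ N(0, 1), computes each order's Bayesian evidence in closed form for the linear-Gaussian model, and weights the extrapolated predictions by the normalized evidences. The toy truth function, noise level, data range, and extrapolation point are illustrative assumptions.

```python
# Minimal sketch of evidence-weighted Bayesian Model Averaging of polynomial fits.
# The "truth" function, noise level sigma, and x ranges are assumptions for illustration.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 0.3, 10)   # small-x region where data are generated
x_pred = np.array([0.6])              # extrapolation point
sigma = 0.01                          # assumed (known) noise level


def truth(x):
    # Toy underlying function used to generate the synthetic data
    return 1.0 + 0.5 * x - 0.3 * x**2 + 0.1 * x**3


def design(x, k):
    # Polynomial design matrix with columns 1, x, ..., x^k
    return np.vander(x, k + 1, increasing=True)


y = truth(x_train) + sigma * rng.normal(size=x_train.size)

orders = range(1, 6)
log_evidence, predictions = [], []
for k in orders:
    Phi = design(x_train, k)
    # Marginal likelihood for the linear-Gaussian model with prior c ~ N(0, I):
    # y ~ N(0, sigma^2 I + Phi Phi^T)
    cov = sigma**2 * np.eye(x_train.size) + Phi @ Phi.T
    log_evidence.append(
        multivariate_normal(mean=np.zeros(x_train.size), cov=cov).logpdf(y))
    # Posterior mean of the coefficients, then the extrapolated prediction
    A = Phi.T @ Phi / sigma**2 + np.eye(k + 1)
    c_post = np.linalg.solve(A, Phi.T @ y / sigma**2)
    predictions.append(float(design(x_pred, k) @ c_post))

log_evidence = np.array(log_evidence)
w = np.exp(log_evidence - log_evidence.max())
w /= w.sum()                           # evidence weights for each polynomial order
bma = float(np.dot(w, predictions))
print("evidence weights:", np.round(w, 3))
print("BMA extrapolation at x=0.6:", bma, " truth:", truth(0.6))
```

Because the model is linear in its coefficients and both the naturalness prior and the likelihood are Gaussian, the evidence is available analytically here; the full study also assesses credibility intervals and their coverage, which this sketch omits.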


Read also

Andrew Fowlie, 2020
We consider the Jeffreys-Lindley paradox from an objective Bayesian perspective by attempting to find priors representing complete indifference to sample size in the problem. This means that we ensure that the prior for the unknown mean and the prior predictive for the $t$-statistic are independent of the sample size. If successful, this would lead to Bayesian model comparison that was independent of sample size and ameliorate the paradox. Unfortunately, it leads to an improper scale-invariant prior for the unknown mean. We show, however, that a truncated scale-invariant prior delays the dependence on sample size, which could be practically significant. Lastly, we shed light on the paradox by relating it to the fact that the scale-invariant prior is improper.
We describe the Bayesian Analysis of Nuclear Dynamics (BAND) framework, a cyberinfrastructure that we are developing which will unify the treatment of nuclear models, experimental data, and associated uncertainties. We overview the statistical principles and nuclear-physics contexts underlying the BAND toolset, with an emphasis on Bayesian methodology's ability to leverage insight from multiple models. In order to facilitate understanding of these tools we provide a simple and accessible example of the BAND framework's application. Four case studies are presented to highlight how elements of the framework will enable progress on complex, far-ranging problems in nuclear physics. By collecting notation and terminology, providing illustrative examples, and giving an overview of the associated techniques, this paper aims to open paths through which the nuclear physics and statistics communities can contribute to and build upon the BAND framework.
The main goal of the LISA Pathfinder (LPF) mission is to fully characterize the acceleration noise models and to test key technologies for future space-based gravitational-wave observatories similar to the eLISA concept. The data analysis team has developed complex three-dimensional models of the LISA Technology Package (LTP) experiment on-board LPF. These models are used for simulations, but more importantly, they will be used for parameter estimation purposes during flight operations. One of the tasks of the data analysis team is to identify the physical effects that contribute significantly to the properties of the instrument noise. A way of approaching this problem is to recover the essential parameters of a LTP model fitting the data. Thus, we want to define the simplest model that efficiently explains the observations. To do so, adopting a Bayesian framework, one has to estimate the so-called Bayes Factor between two competing models. In our analysis, we use three main different methods to estimate it: the Reversible Jump Markov Chain Monte Carlo method, the Schwarz criterion, and the Laplace approximation. They are applied to simulated LPF experiments where the most probable LTP model that explains the observations is recovered. The same type of analysis presented in this paper is expected to be followed during flight operations. Moreover, the correlation of the output of the aforementioned methods with the design of the experiment is explored.
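Of the three Bayes-factor estimators named in this abstract, the Schwarz (BIC) criterion is the simplest to illustrate. The sketch below is a toy example, not the LPF/LTP analysis: it compares a zero-mean Gaussian noise model against one with a free mean, using the Schwarz approximation ln B_10 ≈ (BIC_0 - BIC_1)/2. The data set and both models are assumptions made for illustration.

```python
# Toy sketch of a Schwarz-criterion (BIC) approximation to a Bayes factor.
# Data and models are illustrative, not the LPF noise models.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(loc=0.3, scale=1.0, size=200)   # toy measurements (assumed)

# Model 0: zero mean, free sigma.  Model 1: free mean and free sigma.
sig0 = np.sqrt(np.mean(data**2))                  # MLE of sigma with the mean fixed at 0
logL0 = norm(0.0, sig0).logpdf(data).sum()
sig1 = data.std()                                 # MLE of sigma with a free mean (ddof=0)
logL1 = norm(data.mean(), sig1).logpdf(data).sum()

k0, k1, n = 1, 2, data.size                       # fitted parameters and sample size
bic0 = k0 * np.log(n) - 2.0 * logL0
bic1 = k1 * np.log(n) - 2.0 * logL1

# Schwarz approximation: ln B_10 ≈ (BIC_0 - BIC_1) / 2
print("approximate Bayes factor B_10:", np.exp(0.5 * (bic0 - bic1)))
```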
Miaomiao Wang, Guohua Zou, 2019
Model averaging accounts for model uncertainty and is an alternative to model selection. In this paper, we propose a frequentist model averaging estimator for composite quantile regressions. In recent years, research on these topics has developed separately, but no study has investigated them in combination. We apply a delete-one cross-validation method to estimate the model weights, and prove that the jackknife model averaging estimator is asymptotically optimal in terms of minimizing the out-of-sample composite final prediction error. Simulations are conducted to demonstrate the good finite-sample properties of our estimator and to compare it with commonly used model selection and averaging methods. The proposed method is applied to the analysis of stock returns data and wage data and performs well.
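To make the delete-one cross-validation weighting concrete, here is a minimal sketch under simplifying assumptions: it uses ordinary least-squares candidate models rather than the paper's composite quantile regressions, and chooses weights on the simplex that minimize the jackknife (leave-one-out) prediction error. The toy data and candidate model set are illustrative.

```python
# Minimal sketch of jackknife (delete-one cross-validation) model averaging,
# simplified to least-squares candidate models; data and models are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 80
x = rng.uniform(-1.0, 1.0, size=n)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.3, size=n)   # toy data

# Candidate models: polynomials of degree 1, 2, and 3
designs = [np.vander(x, k + 1, increasing=True) for k in (1, 2, 3)]

# Delete-one (jackknife) predictions for every candidate model
loo = np.empty((n, len(designs)))
for m, X in enumerate(designs):
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        loo[i, m] = X[i] @ beta


def cv_error(w):
    # Jackknife squared prediction error of the weighted model average
    return np.mean((y - loo @ w) ** 2)


w0 = np.full(len(designs), 1.0 / len(designs))
res = minimize(cv_error, w0, bounds=[(0.0, 1.0)] * len(designs),
               constraints={"type": "eq", "fun": lambda w: np.sum(w) - 1.0})
print("jackknife model averaging weights:", np.round(res.x, 3))
```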
We present the Sequential Ensemble Transform (SET) method, an approach for generating approximate samples from a Bayesian posterior distribution. The method explores the posterior distribution by solving a sequence of discrete optimal transport problems to produce a series of transport plans which map prior samples to posterior samples. We prove that the sequence of Dirac mixture distributions produced by the SET method converges weakly to the true posterior as the sample size approaches infinity. Furthermore, our numerical results indicate that, when compared to standard Sequential Monte Carlo (SMC) methods, the SET approach is more robust to the choice of Markov mutation kernels and requires less computational effort to reach a similar accuracy when used to explore complex posterior distributions. Finally, we describe adaptive schemes that allow the use of the SET method to be completely automated.