
A Bayesian inference and model selection algorithm with an optimisation scheme to infer the model noise power

Posted by Javier Lopez-Santiago
Publication date: 2021
Research field: Physics
Paper language: English





Model fitting is possibly the most widespread problem in science. Classical approaches include the use of least-squares fitting procedures and maximum likelihood methods to estimate the values of the parameters in the model. However, in recent years, Bayesian inference tools have gained traction. Usually, Markov chain Monte Carlo methods are applied to inference problems, but they present some disadvantages, particularly when comparing different models fitted to the same dataset. Other Bayesian methods can deal with this issue in a natural and effective way. We have implemented an importance sampling algorithm adapted to Bayesian inference problems in which the power of the noise in the observations is not known a priori. The main advantage of importance sampling is that the model evidence can be derived directly from the so-called importance weights, while MCMC methods demand considerable postprocessing. The use of our adaptive-target adaptive importance sampling (ATAIS) method is demonstrated, on the one hand, by inferring the parameters of a simulated flaring event that includes a damped oscillation and, on the other hand, by analysing real data from the Kepler mission. ATAIS includes a novel automatic adaptation of the target distribution and automatically estimates the variance of the noise in the model. ATAIS admits parallelisation, which decreases the computational run-times notably. We compare our method against a nested sampling method within a model selection problem.
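As a concrete illustration of the idea described above (model evidence read off from importance weights, plus an automatic update of the noise power), here is a minimal Python sketch of a generic adaptive importance sampler. It is not the authors' ATAIS code: the function names, the Gaussian proposal, and the flat-prior simplification are assumptions made here for brevity.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(theta, t, y, sigma2, model):
    """Gaussian log-likelihood for y = model(t, theta) + N(0, sigma2) noise."""
    resid = y - model(t, theta)
    return -0.5 * len(y) * np.log(2.0 * np.pi * sigma2) - 0.5 * (resid @ resid) / sigma2

def adaptive_is(t, y, model, mu, cov, n_iter=20, n_samples=2000, seed=0):
    """Sketch of adaptive importance sampling with a noise-power update
    (assumed interface; not the published ATAIS implementation)."""
    rng = np.random.default_rng(seed)
    sigma2 = np.var(y)                          # crude initial guess of the noise power
    for _ in range(n_iter):
        thetas = rng.multivariate_normal(mu, cov, size=n_samples)
        log_q = multivariate_normal.logpdf(thetas, mean=mu, cov=cov)
        log_p = np.array([log_likelihood(th, t, y, sigma2, model) for th in thetas])
        log_w = log_p - log_q                   # importance weights (flat prior assumed)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        mu = w @ thetas                         # adapt the proposal mean ...
        cov = np.cov(thetas.T, aweights=w) + 1e-9 * np.eye(len(mu))   # ... and covariance
        best = thetas[np.argmax(log_p)]         # current maximum-likelihood sample
        sigma2 = np.mean((y - model(t, best)) ** 2)   # ML-style update of the noise power
    # Evidence estimate: mean of the unnormalised importance weights (last iteration)
    log_Z = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
    return mu, cov, sigma2, log_Z
```

Repeating the estimate for each candidate model and comparing the resulting log_Z values gives the Bayes factors used for model selection.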




Read also

The main goal of the LISA Pathfinder (LPF) mission is to fully characterize the acceleration noise models and to test key technologies for future space-based gravitational-wave observatories similar to the eLISA concept. The data analysis team has developed complex three-dimensional models of the LISA Technology Package (LTP) experiment on-board LPF. These models are used for simulations, but more importantly, they will be used for parameter estimation purposes during flight operations. One of the tasks of the data analysis team is to identify the physical effects that contribute significantly to the properties of the instrument noise. A way of approaching this problem is to recover the essential parameters of an LTP model that fits the data. Thus, we want to define the simplest model that efficiently explains the observations. To do so, adopting a Bayesian framework, one has to estimate the so-called Bayes factor between two competing models. In our analysis, we use three main different methods to estimate it: the Reversible Jump Markov Chain Monte Carlo method, the Schwarz criterion, and the Laplace approximation. They are applied to simulated LPF experiments where the most probable LTP model that explains the observations is recovered. The same type of analysis presented in this paper is expected to be followed during flight operations. Moreover, the correlation of the output of the aforementioned methods with the design of the experiment is explored.
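For reference, the two analytic estimates of the Bayes factor mentioned above can be written in their standard textbook form (notation introduced here, not taken from the paper):

\[
B_{12} = \frac{p(d \mid M_1)}{p(d \mid M_2)}, \qquad
\ln B_{12} \approx \ln\hat{L}_1 - \ln\hat{L}_2 - \tfrac{1}{2}\,(k_1 - k_2)\ln n \quad \text{(Schwarz criterion)},
\]
\[
p(d \mid M) \approx \hat{L}\,\pi(\hat{\theta})\,(2\pi)^{k/2}\,\lvert \mathbf{H} \rvert^{-1/2} \quad \text{(Laplace approximation)},
\]

where \(\hat{L}\) is the maximised likelihood, \(k\) the number of model parameters, \(n\) the number of data points, \(\hat{\theta}\) the posterior mode, and \(\mathbf{H}\) the Hessian of the negative log-posterior at \(\hat{\theta}\).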
Eric Thrane, Colm Talbot (2018)
This is an introduction to Bayesian inference with a focus on hierarchical models and hyper-parameters. We write primarily for an audience of Bayesian novices, but we hope to provide useful insights for seasoned veterans as well. Examples are drawn from gravitational-wave astronomy, though we endeavor for the presentation to be understandable to a broader audience. We begin with a review of the fundamentals: likelihoods, priors, and posteriors. Next, we discuss Bayesian evidence, Bayes factors, odds ratios, and model selection. From there, we describe how posteriors are estimated using samplers such as Markov Chain Monte Carlo algorithms and nested sampling. Finally, we generalize the formalism to discuss hyper-parameters and hierarchical models. We include extensive appendices discussing the creation of credible intervals, Gaussian noise, explicit marginalization, posterior predictive distributions, and selection effects.
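For orientation, the central quantities reviewed above can be written compactly in standard notation (generic symbols, not the paper's):

\[
p(\theta \mid d, M) = \frac{\mathcal{L}(d \mid \theta, M)\,\pi(\theta \mid M)}{\mathcal{Z}(d \mid M)},
\qquad
\mathcal{Z}(d \mid M) = \int \mathcal{L}(d \mid \theta, M)\,\pi(\theta \mid M)\,\mathrm{d}\theta,
\]
\[
\mathcal{O}_{12} = \frac{p(M_1 \mid d)}{p(M_2 \mid d)}
= \underbrace{\frac{\mathcal{Z}_1}{\mathcal{Z}_2}}_{\text{Bayes factor}} \times \frac{\pi(M_1)}{\pi(M_2)} .
\]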
Robust model-fitting to spectroscopic transitions is a requirement across many fields of science. The corrected Akaike and Bayesian information criteria (AICc and BIC) are most frequently used to select the optimal number of fitting parameters. In general, AICc modelling is thought to overfit (too many model parameters) and BIC to underfit. For spectroscopic modelling, both AICc and BIC are lacking in two important respects: (a) no penalty distinction is made according to line strength, so parameters of weak lines close to the detection threshold are treated with the same importance as strong lines, and (b) no account is taken of the way in which spectral lines impact narrow data regions. In this paper we introduce a new information criterion that addresses these shortcomings, the Spectral Information Criterion (SpIC). Spectral simulations are used to compare performances. The main findings are (i) SpIC clearly outperforms AICc for high signal-to-noise data, (ii) SpIC and AICc work equally well for lower signal-to-noise data, although SpIC achieves this with fewer parameters, and (iii) BIC does not perform well (for this application) and should be avoided. The new method should be of broader applicability (beyond spectroscopy), wherever different model parameters influence separated small ranges within a larger dataset and/or have widely varying sensitivities.
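For comparison, the two baseline criteria named above have the standard definitions sketched below (SpIC itself is defined in the paper and is not reproduced here):

```python
import numpy as np

def aicc(log_lmax, k, n):
    """Corrected Akaike information criterion: AIC plus a small-sample penalty."""
    aic = 2 * k - 2 * log_lmax
    return aic + 2 * k * (k + 1) / (n - k - 1)

def bic(log_lmax, k, n):
    """Bayesian (Schwarz) information criterion."""
    return k * np.log(n) - 2 * log_lmax

# The preferred model minimises the criterion, e.g. compare two line models with
# maximised log-likelihoods logL_a, logL_b and k_a, k_b parameters over n points:
# keep the model with the smaller aicc(logL, k, n) (or bic(logL, k, n)).
```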
S. Y. BenZvi (2011)
A common problem in ultra-high energy cosmic ray physics is the comparison of energy spectra. The question is whether the spectra from two experiments or two regions of the sky agree within their statistical and systematic uncertainties. We develop a method to directly compare energy spectra for ultra-high energy cosmic rays from two different regions of the sky in the same experiment without reliance on agreement with a theoretical model of the energy spectra. The consistency between the two spectra is expressed in terms of a Bayes factor, defined here as the ratio of the likelihood of the two-parent source hypothesis to the likelihood of the one-parent source hypothesis. Unlike other methods, for example chi^2 tests, the Bayes factor allows for the calculation of the posterior odds ratio and correctly accounts for non-Gaussian uncertainties. The latter is particularly important at the highest energies, where the number of events is very small.
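In generic notation (introduced here, consistent with the description above), the comparison reduces to:

\[
B = \frac{p(d_1, d_2 \mid H_2)}{p(d_1, d_2 \mid H_1)},
\qquad
\frac{p(H_2 \mid d_1, d_2)}{p(H_1 \mid d_1, d_2)} = B \,\frac{\pi(H_2)}{\pi(H_1)},
\]

where \(H_1\) is the one-parent-source hypothesis (both spectra drawn from a common parent spectrum), \(H_2\) is the two-parent-source hypothesis, and each marginal likelihood integrates the spectral parameters over their priors.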
Owen C. Madin (2021)
A high level of physical detail in a molecular model improves its ability to perform high-accuracy simulations, but can also significantly affect its complexity and computational cost. In some situations, it is worthwhile to add additional complexity to a model to capture properties of interest; in others, additional complexity is unnecessary and can make simulations computationally infeasible. In this work we demonstrate the use of Bayes factors for molecular model selection, using Monte Carlo sampling techniques to evaluate the evidence for different levels of complexity in the two-centered Lennard-Jones + quadrupole (2CLJQ) fluid model. Examining three levels of nested model complexity, we demonstrate that the use of variable quadrupole and bond length parameters in this model framework is only sometimes justified. We also explore the effect of the Bayesian prior distribution on the Bayes factors, as well as ways to propose meaningful prior distributions. This Bayesian Markov Chain Monte Carlo (MCMC) process is enabled by the use of analytical surrogate models that accurately approximate the physical properties of interest. This work paves the way for further atomistic model selection work via Bayesian inference and surrogate modeling.
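To make the evidence comparison above concrete, the following is a minimal Python sketch of a simple Monte Carlo (prior-predictive) evidence estimator and of how a Bayes factor between two nested models would follow from it. The function names and interfaces are assumptions made here; the paper's own sampling scheme and surrogate models are not reproduced.

```python
import numpy as np

def log_evidence_mc(log_likelihood, prior_sampler, n_samples=100_000, seed=0):
    """Estimate ln Z = ln E_prior[L(theta)]: draw parameters from the prior
    and average the likelihood (log-sum-exp used for numerical stability)."""
    rng = np.random.default_rng(seed)
    thetas = prior_sampler(rng, n_samples)            # shape (n_samples, n_dim)
    log_l = np.array([log_likelihood(th) for th in thetas])
    m = log_l.max()
    return m + np.log(np.mean(np.exp(log_l - m)))

# Bayes factor between two nested models (e.g. fixed vs. variable quadrupole),
# where log_like_1/log_like_2 would wrap surrogate-model property predictions:
# ln_B21 = log_evidence_mc(log_like_2, prior_2) - log_evidence_mc(log_like_1, prior_1)
```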