
Bayesian model selection: Application to adjustment of fundamental physical constants

Added by Olha Bodnar
Publication date: 2021
Fields: Physics
Language: English





The location-scale model commonly appears in physics and chemistry in connection with the Birge ratio method for the adjustment of fundamental physical constants, such as the Planck constant or the Newtonian constant of gravitation, while the random effects model is the standard approach for meta-analysis in medicine. Both competing models are used to inflate the quoted uncertainties of measurement results so as to make them mutually consistent. We derive the intrinsic Bayes factor (IBF) for comparing the random effects model with the location-scale model, and we answer the question of which model performs better for the determination of the Newtonian constant of gravitation. The results of the empirical illustration support the application of the Birge ratio method, which is currently used in the adjustment of the CODATA 2018 value of the Newtonian constant of gravitation and its uncertainty. The results of the simulation study show that the suggested model-selection procedure is decisive even when the data consist of only a few measurement results.
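
As a point of reference, a minimal sketch of the conventional Birge ratio method described above, assuming measurement results x with quoted standard uncertainties u; the intrinsic Bayes factor derivation itself is not reproduced here, and the numbers are illustrative only:

    import numpy as np

    def birge_adjustment(x, u):
        """Weighted mean with Birge-ratio uncertainty inflation."""
        w = 1.0 / u**2
        xbar = np.sum(w * x) / np.sum(w)
        # Birge ratio: square root of the reduced chi-square of the weighted mean
        r_b = np.sqrt(np.sum(w * (x - xbar)**2) / (len(x) - 1))
        # R_B > 1 signals inconsistent data; inflate the uncertainty by R_B
        u_xbar = max(r_b, 1.0) / np.sqrt(np.sum(w))
        return xbar, u_xbar, r_b

    # Illustrative G-like values in units of 1e-11 m^3 kg^-1 s^-2
    x = np.array([6.6743, 6.6756, 6.6742, 6.6749])
    u = np.array([0.0002, 0.0003, 0.0002, 0.0004])
    print(birge_adjustment(x, u))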



Related research

Model selection is a fundamental part of applied Bayesian statistical methodology. Metrics such as the Akaike Information Criterion are commonly used in practice to select models but do not incorporate the uncertainty of the models' parameters and can give misleading choices. One approach that uses the full posterior distribution is to compute the ratio of two models' normalising constants, known as the Bayes factor. In realistic problems this often involves integrating analytically intractable, high-dimensional distributions and therefore requires stochastic methods such as thermodynamic integration (TI). In this paper we apply a variation of the TI method, referred to as referenced TI, which computes a single model's normalising constant efficiently by using a judiciously chosen reference density. The advantages of the approach and theoretical considerations are set out, along with explicit pedagogical 1D and 2D examples. Benchmarking against comparable methods is presented, and we find favourable convergence performance. The approach is shown to be useful in practice when applied to a real problem: performing model selection for a semi-mechanistic hierarchical Bayesian model of COVID-19 transmission in South Korea, involving the integration of a 200-dimensional density.
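
For orientation, a minimal sketch of plain power-posterior thermodynamic integration on a conjugate Gaussian toy model, where the tempered posterior can be sampled exactly and the evidence is known in closed form; the referenced-TI variant and its reference density are not reproduced here:

    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)

    # Toy conjugate model: theta ~ N(0, s0^2), y_i | theta ~ N(theta, s^2)
    s0, s, n = 2.0, 1.0, 20
    y = rng.normal(1.0, s, n)

    def tempered_params(beta):
        # Power posterior p(theta) * p(y|theta)^beta is Gaussian here
        prec = 1.0 / s0**2 + beta * n / s**2
        return beta * y.sum() / s**2 / prec, 1.0 / prec

    def expected_loglik(beta, draws=4000):
        m, v = tempered_params(beta)
        theta = rng.normal(m, np.sqrt(v), draws)
        ll = (-0.5 * n * np.log(2 * np.pi * s**2)
              - 0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1) / s**2)
        return ll.mean()

    # TI identity: log Z = integral over beta of E_beta[log p(y|theta)]
    betas = np.linspace(0.0, 1.0, 30) ** 3      # denser grid near beta = 0
    log_z_ti = trapezoid([expected_loglik(b) for b in betas], betas)

    # Exact log evidence for this model: y ~ N(0, s^2 I + s0^2 11^T)
    cov = s**2 * np.eye(n) + s0**2 * np.ones((n, n))
    print(log_z_ti, multivariate_normal(np.zeros(n), cov).logpdf(y))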
Two dimensionless fundamental physical constants, the fine structure constant $\alpha$ and the proton-to-electron mass ratio $\frac{m_p}{m_e}$, are attributed particular importance from the point of view of nuclear synthesis, formation of heavy elements, planets, and life-supporting structures. Here, we show that a combination of these two constants results in a new dimensionless constant which provides the upper bound for the speed of sound in condensed phases, $v_u$. We find that $\frac{v_u}{c}=\alpha\left(\frac{m_e}{2m_p}\right)^{\frac{1}{2}}$, where $c$ is the speed of light in vacuum. We support this result by a large set of experimental data and first-principles computations for atomic hydrogen. Our result expands current understanding of how fundamental constants can impose new bounds on important physical properties.
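
The stated bound is straightforward to evaluate with CODATA values from scipy.constants:

    from math import sqrt
    import scipy.constants as sc

    # v_u / c = alpha * sqrt(m_e / (2 m_p))
    ratio = sc.fine_structure * sqrt(sc.m_e / (2 * sc.m_p))
    print(f"v_u/c = {ratio:.3e}, v_u = {ratio * sc.c / 1e3:.1f} km/s")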
Viscosity of fluids is strongly system-dependent, varies across many orders of magnitude and depends on molecular interactions and structure in a complex way not amenable to first-principles theories. Despite the variations and theoretical difficulties, we find a new quantity setting the minimal kinematic viscosity of fluids: $\nu_m=\frac{1}{4\pi}\frac{\hbar}{\sqrt{m_e m}}$, where $m_e$ and $m$ are electron and molecule masses. We subsequently introduce a new property, the elementary viscosity $\iota$ with the lower bound set by fundamental physical constants and notably involving the proton-to-electron mass ratio: $\iota_m=\frac{\hbar}{4\pi}\left(\frac{m_p}{m_e}\right)^{\frac{1}{2}}$, where $m_p$ is the proton mass. We discuss the connection of our result to the bound found by Kovtun, Son and Starinets in strongly-interacting field theories.
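
Evaluating both bounds numerically, here taking water (m = 18 u) as an illustrative molecular mass:

    from math import pi, sqrt
    import scipy.constants as sc

    m = 18 * sc.atomic_mass   # molecular mass of water, an illustrative choice
    nu_m = sc.hbar / (4 * pi * sqrt(sc.m_e * m))         # minimal kinematic viscosity
    iota_m = sc.hbar / (4 * pi) * sqrt(sc.m_p / sc.m_e)  # elementary viscosity bound
    print(f"nu_m ~ {nu_m:.1e} m^2/s, iota_m ~ {iota_m:.1e} J*s")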
Intuitively, a scientist might assume that a more complex regression model will necessarily yield a better predictive model of experimental data. Herein, we disprove this notion in the context of extracting the proton charge radius from charge form factor data. Using a Monte Carlo study, we show that a simpler regression model can in certain cases be the better predictive model. This is especially true with noisy data, where the complex model will fit the noise instead of the physical signal. Thus, in order to select the appropriate regression model to employ, a clear technique should be used, such as the Akaike information criterion or Bayesian information criterion, ideally selected prior to seeing the results. To ensure a reasonable fit, the scientist should also make regression quality plots, such as residual plots, and not rely on a single criterion such as the reduced $\chi^2$. When we apply these techniques to low four-momentum transfer cross section data, we find a proton radius that is consistent with the muonic Lamb shift results. While presented for the case of proton radius extraction, these concepts are applicable in general and can be used to illustrate the necessity of balancing bias and variance when building a regression model and validating results, ideas that are at the heart of modern machine learning algorithms.
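
A toy illustration of the point, assuming a simple polynomial regression on synthetic noisy data rather than the form factor fits themselves (lower AIC/BIC is preferred):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 30)
    y = 1.0 - 2.0 * x + rng.normal(0, 0.05, x.size)  # linear truth plus noise

    def aic_bic(degree):
        # Least-squares polynomial fit; Gaussian log-likelihood at the MLE
        resid = y - np.polyval(np.polyfit(x, y, degree), x)
        n, k = y.size, degree + 2        # coefficients plus the noise variance
        loglik = -0.5 * n * (np.log(2 * np.pi * resid.var()) + 1)
        return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

    for d in range(1, 6):
        aic, bic = aic_bic(d)
        print(f"degree {d}: AIC = {aic:.1f}, BIC = {bic:.1f}")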
I. Grabec (2007)
Statistical modeling of experimental physical laws is based on the probability density function of the measured variables, expressed from experimental data via a kernel estimator. The kernel is determined objectively from the scattering of data during calibration of the experimental setup. A physical law relating the measured variables is optimally extracted from the experimental data by the conditional average estimator, which is derived directly from the kernel estimator and corresponds to a general nonparametric regression. The proposed method is demonstrated by modeling the return map of noisy chaotic data; in this example, the nonparametric regression is used to predict a future value of a chaotic time series from the present one. The mean predictor error is used in the definition of predictor quality, while redundancy is expressed by the mean square distance between data points. Both statistics enter a new definition of the predictor cost function, from whose minimum a proper number of data points in the model is estimated.
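
A minimal sketch of a kernel-based conditional average (Nadaraya-Watson) estimator on a noisy logistic return map; the kernel width h is fixed by hand here, whereas the paper determines the kernel from calibration scatter and selects the number of data points via the predictor cost function:

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.empty(500)
    x[0] = 0.3
    for t in range(x.size - 1):           # noiseless logistic map x -> 4x(1-x)
        x[t + 1] = 4 * x[t] * (1 - x[t])
    xn = x + rng.normal(0, 0.02, x.size)  # add measurement noise

    X, Y = xn[:-1], xn[1:]                # return-map pairs (x_t, x_{t+1})

    def conditional_average(x0, h=0.05):
        # Gaussian-kernel conditional average: a nonparametric regression
        w = np.exp(-0.5 * ((x0 - X) / h) ** 2)
        return np.sum(w * Y) / np.sum(w)

    print(conditional_average(0.4), 4 * 0.4 * 0.6)  # prediction vs. clean map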
