
Bayesian uncertainty quantification for data-driven equation learning

Added by: Simon Martina-Perez
Publication date: 2021
Fields: Biology
Language: English





Equation learning aims to infer differential equation models from data. While a number of studies have shown that differential equation models can be successfully identified when the data are sufficiently detailed and corrupted with relatively small amounts of noise, the relationship between observation noise and uncertainty in the learned differential equation models remains unexplored. We demonstrate that for noisy data sets there is great variation in both the structure of the learned differential equation models and in their parameter values. We explore how to combine data sets to quantify uncertainty in the learned models and, at the same time, draw mechanistic conclusions about the target differential equations. We generate noisy data using a stochastic agent-based model and combine equation learning methods with approximate Bayesian computation (ABC) to show that the correct differential equation model can be successfully learned from data, while a quantification of uncertainty is given by a posterior distribution in parameter space.
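To make the workflow concrete, the sketch below shows an ABC rejection step wrapped around a fixed candidate model. Everything here is illustrative rather than the authors' code: the "data" are synthetic logistic-growth observations standing in for the agent-based simulations, the candidate ODE structure is fixed (the structure-learning step that proposes candidate models is omitted), and the priors and tolerance `eps` are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

# Synthetic "observed" data: logistic growth plus observation noise,
# standing in for output of the stochastic agent-based model.
t = np.linspace(0, 10, 50)
r_true, K_true, u0 = 0.8, 1.0, 0.1
u_obs = K_true / (1 + (K_true / u0 - 1) * np.exp(-r_true * t))
u_obs += rng.normal(0, 0.05, size=t.size)

def simulate(r, K):
    """Solve the candidate ODE du/dt = r * u * (1 - u / K)."""
    sol = solve_ivp(lambda _, u: r * u * (1 - u / K),
                    (t[0], t[-1]), [u0], t_eval=t)
    return sol.y[0]

# ABC rejection: draw (r, K) from the prior and keep draws whose
# simulated trajectory lies within distance eps of the data; the
# accepted draws approximate the posterior over the model parameters.
eps, accepted = 0.5, []
for _ in range(5000):
    r, K = rng.uniform(0.1, 2.0), rng.uniform(0.5, 2.0)
    if np.linalg.norm(simulate(r, K) - u_obs) < eps:
        accepted.append((r, K))

posterior = np.array(accepted)
print("accepted:", len(accepted))
print("posterior mean (r, K):", posterior.mean(axis=0))
```

Tightening `eps` sharpens the approximate posterior at the cost of a lower acceptance rate.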



Related research

Due to the computational complexity of micro-swimmer models with fully resolved hydrodynamics, parameter estimation has been prohibitively expensive. Here, we describe a Bayesian uncertainty quantification framework that is highly parallelizable, making parameter estimation for complex forward models tractable. Using noisy in-silico data for swimmers, we demonstrate the methodology's robustness in estimating the fluid and elastic swimmer parameters. Our proposed methodology allows for analysis of real data and demonstrates potential for parameter estimation for various types of micro-swimmers. A better understanding of the movement of elastic micro-structures in a viscous fluid could aid in developing artificial micro-swimmers for bio-medical applications, as well as provide a fundamental understanding of the range of parameters that allow for certain motility patterns.
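The parallelizability claim rests on the fact that independent forward-model evaluations require no communication between workers. Below is a minimal sketch of this pattern, assuming a cheap stand-in `forward_model`, a known noise level, and self-normalized importance sampling in place of the paper's actual sampler; all names and priors here are assumptions.

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)

def forward_model(theta):
    """Cheap stand-in for an expensive micro-swimmer simulation: maps
    (stiffness, viscosity)-like parameters to a displacement curve."""
    k, mu = theta
    return np.sin(k * t) * np.exp(-mu * t)

data = forward_model((3.0, 0.5)) + rng.normal(0, 0.02, t.size)

def log_likelihood(theta):
    # Gaussian observation noise with known standard deviation 0.02.
    resid = forward_model(theta) - data
    return -0.5 * np.sum(resid**2) / 0.02**2

if __name__ == "__main__":
    # Each likelihood evaluation is independent, so the expensive runs
    # scale across workers with no communication.
    thetas = rng.uniform([0.0, 0.0], [10.0, 2.0], size=(5000, 2))
    with Pool() as pool:
        logL = np.array(pool.map(log_likelihood, thetas))
    w = np.exp(logL - logL.max())      # self-normalized importance weights
    post_mean = (w[:, None] * thetas).sum(axis=0) / w.sum()
    print("posterior mean (stiffness, viscosity):", post_mean)
```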
Standard approaches for uncertainty quantification in cardiovascular modeling pose challenges due to the large number of uncertain inputs and the significant computational cost of realistic three-dimensional simulations. We propose an efficient uncertainty quantification framework utilizing a multilevel multifidelity (MLMF) Monte Carlo estimator to improve the accuracy of hemodynamic quantities of interest while maintaining reasonable computational cost. This is achieved by leveraging three cardiovascular model fidelities, each with varying spatial resolution, to rigorously quantify the variability in hemodynamic outputs. We employ two low-fidelity models to construct several different estimators. Our goal is to investigate and compare the efficiency of estimators built from combinations of these low-fidelity and high-fidelity models. We demonstrate this framework on healthy and diseased models of aortic and coronary anatomy, including uncertainties in material property and boundary condition parameters. We seek to demonstrate that for this application it is possible to accelerate the convergence of the estimators by utilizing an MLMF paradigm. Therefore, we compare our approach to Monte Carlo and multilevel Monte Carlo estimators based only on three-dimensional simulations. We demonstrate significant reduction in total computational cost with the MLMF estimators. We also examine the differing properties of the MLMF estimators in healthy versus diseased models, as well as global versus local quantities of interest. As expected, global quantities and healthy models show larger reductions than local quantities and diseased models, as the latter rely more heavily on the highest-fidelity model evaluations. In all cases, our workflow coupling Dakota's MLMF estimators with the SimVascular cardiovascular modeling framework makes uncertainty quantification feasible for constrained computational budgets.
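The core MLMF idea can be illustrated with a two-fidelity control-variate estimator: many cheap low-fidelity samples correct the mean estimated from a few expensive high-fidelity runs. The models below are synthetic stand-ins for the three-dimensional and reduced-order hemodynamic solvers; the production estimators are the ones provided by Dakota, not this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for an expensive high-fidelity model and a cheap,
# correlated low-fidelity surrogate; both map an uncertain input to a
# scalar quantity of interest (QoI).
def hi_fi(x):
    return np.sin(x) + 0.05 * x**2

def lo_fi(x):
    return np.sin(x)

N_hi, N_lo = 100, 10000            # few expensive runs, many cheap ones
x_hi = rng.normal(size=N_hi)       # samples of the uncertain input
x_lo = rng.normal(size=N_lo)

y_hi, y_lo_paired = hi_fi(x_hi), lo_fi(x_hi)

# Control-variate weight estimated from the paired samples.
c = np.cov(y_hi, y_lo_paired)
alpha = c[0, 1] / c[1, 1]

# Multifidelity estimator: shift the high-fidelity sample mean by the
# low-fidelity discrepancy, which many cheap samples pin down accurately.
mf_est = y_hi.mean() + alpha * (lo_fi(x_lo).mean() - y_lo_paired.mean())
print("hi-fi-only MC:", y_hi.mean(), "  multifidelity:", mf_est)
```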
This work affords new insights into Bayesian CART in the context of structured wavelet shrinkage. The main thrust is to develop a formal inferential framework for Bayesian tree-based regression. We reframe Bayesian CART as a g-type prior which departs from the typical wavelet product priors by harnessing correlation induced by the tree topology. The practically used Bayesian CART priors are shown to attain adaptive near rate-minimax posterior concentration in the supremum norm in regression models. For the fundamental goal of uncertainty quantification, we construct adaptive confidence bands for the regression function with uniform coverage under self-similarity. In addition, we show that tree-posteriors enable optimal inference in the form of efficient confidence sets for smooth functionals of the regression function.
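As a rough illustration of the uniform-coverage idea (not the paper's Bayesian CART machinery): given posterior draws of the regression function on a grid, a sup-norm credible band is the posterior mean plus or minus a quantile of each draw's maximal deviation. The draws below are synthetic stand-ins for output of a tree-posterior sampler.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic posterior draws of a regression function on a grid; in the
# paper these would come from a Bayesian CART / tree-posterior sampler.
grid = np.linspace(0.0, 1.0, 200)
f_draws = np.array([np.sin(2 * np.pi * grid)
                    + rng.normal(0, 0.1) * np.cos(4 * np.pi * grid)
                    for _ in range(1000)])

f_hat = f_draws.mean(axis=0)

# Sup-norm radius: the 95th percentile of each draw's maximal deviation
# from the posterior mean yields a band with uniform (simultaneous)
# coverage, as opposed to pointwise intervals.
radius = np.quantile(np.abs(f_draws - f_hat).max(axis=1), 0.95)
lower, upper = f_hat - radius, f_hat + radius
print("uniform band half-width:", radius)
```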
Rui Tuo, Wenjia Wang (2020)
Bayesian optimization is a class of global optimization techniques. It regards the underlying objective function as a realization of a Gaussian process. Although the outputs of Bayesian optimization are random according to the Gaussian process assumption, quantification of this uncertainty is rarely studied in the literature. In this work, we propose a novel approach to assess the output uncertainty of Bayesian optimization algorithms, in terms of constructing confidence regions of the maximum point or value of the objective function. These regions can be computed efficiently, and their confidence levels are guaranteed by newly developed uniform error bounds for sequential Gaussian process regression. Our theory provides a unified uncertainty quantification framework for all existing sequential sampling policies and stopping criteria.
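A hedged sketch of how such a confidence region can be built: if the Gaussian process posterior satisfies a uniform error bound of the form |f - mu| <= beta * sigma, then any point whose upper confidence bound falls below the best lower bound anywhere cannot be the maximizer. The kernel, the constant beta, and the toy objective below are illustrative assumptions, not the paper's bounds.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def f(x):
    return -(x - 0.6) ** 2             # toy objective, unknown in practice

X = rng.uniform(0, 1, size=(8, 1))     # points queried so far
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                              alpha=1e-6).fit(X, y)

grid = np.linspace(0, 1, 500)[:, None]
mu, sd = gp.predict(grid, return_std=True)

beta = 2.0                             # width constant from a uniform bound
ucb, lcb = mu + beta * sd, mu - beta * sd

# A point can be the maximizer only if its upper bound is at least the
# best lower bound achieved anywhere; the survivors form the region.
region = grid.ravel()[ucb >= lcb.max()]
print(f"maximizer confidence region: [{region.min():.3f}, {region.max():.3f}]")
```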
Jun Xu, Zhen Zhang (2021)
Within a Bayesian statistical framework using the standard Skyrme-Hartree-Fock model, the maximum a posteriori (MAP) values and uncertainties of nuclear matter incompressibility and isovector interaction parameters are inferred from the experimental data of giant resonances and neutron-skin thicknesses of typical heavy nuclei. With the uncertainties of the isovector interaction parameters constrained by the data of the isovector giant dipole resonance and the neutron-skin thickness, we have obtained $K_0 = 223_{-8}^{+7}$ MeV at the 68% confidence level using the data of the isoscalar giant monopole resonance in $^{208}$Pb measured at the Research Center for Nuclear Physics (RCNP), Japan, and at Texas A&M University (TAMU), USA. Although the corresponding $^{120}$Sn data give a MAP value for $K_0$ about 5 MeV smaller than the $^{208}$Pb data, there are significant overlaps in their posterior probability distribution functions.
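As a reading aid, the asymmetric quote $K_0 = 223_{-8}^{+7}$ MeV is a MAP value with a 68% credible interval. A sketch of extracting both from posterior samples follows; the samples below are synthetic, not the paper's posterior.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

# Synthetic stand-in for posterior samples of K_0 in MeV; the paper's
# actual posterior comes from its Bayesian analysis of giant-resonance
# and neutron-skin data.
K0 = rng.normal(223, 7.5, size=20000)

kde = gaussian_kde(K0)                     # smooth the sample histogram
grid = np.linspace(K0.min(), K0.max(), 1000)
K0_map = grid[np.argmax(kde(grid))]        # maximum a posteriori value

lo, hi = np.percentile(K0, [16, 84])       # central 68% credible interval
print(f"K0 = {K0_map:.0f} -{K0_map - lo:.0f} +{hi - K0_map:.0f} MeV")
```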