
Adaptive selection of sampling points for Uncertainty Quantification

Added by Enrico Camporeale
Publication date: 2016
Field: Physics
Language: English





We present a simple and robust strategy for the selection of sampling points in Uncertainty Quantification. The goal is to achieve the fastest possible convergence in the cumulative distribution function of a stochastic output of interest. We assume that the output of interest is the outcome of a computationally expensive nonlinear mapping of an input random variable, whose probability density function is known. We use radial basis functions to construct an accurate interpolant of the mapping. This strategy enables adding new sampling points one at a time, adaptively, taking full account of the previous evaluations of the target nonlinear function. We present comparisons with a stochastic collocation method based on the Clenshaw-Curtis quadrature rule, and with an adaptive method based on hierarchical surplus, showing that the new method often results in large computational savings.
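The abstract does not spell out the paper's refinement criterion, so the Python sketch below (assuming NumPy and SciPy) only illustrates the general loop it describes: fit an RBF interpolant to the evaluations gathered so far, pick one new sampling point, evaluate the expensive model there, and read the output CDF off the cheap surrogate. The acquisition rule used here (disagreement between two RBF kernels, weighted by the known input PDF) is a hypothetical stand-in, not the criterion of the paper.

    # Hedged sketch of RBF-surrogate adaptive sampling for a 1-D CDF estimate.
    # The acquisition rule below is an assumption for illustration only.
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.stats import norm

    def expensive_model(x):                    # stand-in for the costly nonlinear mapping
        return np.tanh(3.0 * x) + 0.1 * x**2

    X = np.linspace(-3.0, 3.0, 5)[:, None]     # initial sampling points (input ~ N(0, 1))
    Y = expensive_model(X[:, 0])

    for _ in range(15):                        # add sampling points one at a time
        s1 = RBFInterpolator(X, Y, kernel='thin_plate_spline')
        s2 = RBFInterpolator(X, Y, kernel='multiquadric', epsilon=1.0)
        cand = np.linspace(-3.0, 3.0, 400)[:, None]
        # hypothetical criterion: largest surrogate disagreement, weighted by the input PDF
        score = np.abs(s1(cand) - s2(cand)) * norm.pdf(cand[:, 0])
        x_new = cand[np.argmax(score), 0]
        X = np.vstack([X, [[x_new]]])
        Y = np.append(Y, expensive_model(x_new))

    surrogate = RBFInterpolator(X, Y, kernel='thin_plate_spline')
    inputs = norm.rvs(size=20000, random_state=1)[:, None]
    outputs = surrogate(inputs)                # cheap Monte Carlo on the surrogate
    print("P(output <= 0) ~", np.mean(outputs <= 0.0))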



Related research

The macroscopic behavior of many materials is complex and the end result of mechanisms that operate across a broad range of disparate scales. An imperfect knowledge of material behavior across scales is a source of epistemic uncertainty of the overall material behavior. However, assessing this uncertainty is difficult due to the complex nature of material response and the prohibitive computational cost of integral calculations. In this paper, we exploit the multiscale and hierarchical nature of material response to develop an approach to quantify the overall uncertainty of material response without the need for integral calculations. Specifically, we bound the uncertainty at each scale and then combine the partial uncertainties in a way that provides a bound on the overall or integral uncertainty. The bound provides a conservative estimate on the uncertainty. Importantly, this approach does not require integral calculations that are prohibitively expensive. We demonstrate the framework on the problem of ballistic impact of a polycrystalline magnesium plate. Magnesium and its alloys are of current interest as promising light-weight structural and protective materials. Finally, we remark that the approach can also be used to study the sensitivity of the overall response to particular mechanisms at lower scales in a materials-by-design approach.
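As a purely hypothetical illustration of that idea (the abstract does not give the actual combination rule), the snippet below treats each scale as contributing a separate bound on the response variation it can induce and sums the contributions; the per-scale numbers are arbitrary, and summation is conservative because no cancellation between scales is assumed.

    # Hypothetical illustration only: per-scale uncertainty bounds (made-up numbers)
    # combined by summation into a conservative bound on the overall uncertainty,
    # avoiding any expensive integral (full multiscale) calculation.
    scale_bounds = {
        "single-crystal plasticity": 0.08,
        "polycrystal homogenization": 0.05,
        "plate-level ballistic response": 0.03,
    }
    overall_bound = sum(scale_bounds.values())   # conservative: no cancellation assumed
    print(f"conservative bound on overall uncertainty: {overall_bound:.2f}")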
The forward problems of pattern formation have been greatly empowered by extensive theoretical studies and simulations, however, the inverse problem is less well understood. It remains unclear how accurately one can use images of pattern formation to learn the functional forms of the nonlinear and nonlocal constitutive relations in the governing equation. We use PDE-constrained optimization to infer the governing dynamics and constitutive relations and use Bayesian inference and linearization to quantify their uncertainties in different systems, operating conditions, and imaging conditions. We discuss the conditions to reduce the uncertainty of the inferred functions and the correlation between them, such as state-dependent free energy and reaction kinetics (or diffusivity). We present the inversion algorithm and illustrate its robustness and uncertainties under limited spatiotemporal resolution, unknown boundary conditions, blurry initial conditions, and other non-ideal situations. Under certain situations, prior physical knowledge can be included to constrain the result. Phase-field, reaction-diffusion, and phase-field-crystal models are used as model systems. The approach developed here can find applications in inferring unknown physical properties of complex pattern-forming systems and in guiding their experimental design.
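A minimal sketch of the linearization step, under assumed toy forms (this is not the authors' PDE-constrained solver): a parametric constitutive relation is fitted to synthetic data with SciPy, and the parameter uncertainty is read from the Laplace-approximation covariance sigma^2 (J^T J)^{-1} at the optimum.

    # Minimal sketch: fit a toy constitutive relation to synthetic data and quantify
    # parameter uncertainty by linearization (Laplace approximation) at the optimum.
    import numpy as np
    from scipy.optimize import least_squares

    c = np.linspace(0.0, 1.0, 200)                     # state (e.g. concentration)
    true_theta = np.array([1.5, -0.7])

    def constitutive(theta, c):                        # assumed parametric form
        return theta[0] * c + theta[1] * c**3

    data = constitutive(true_theta, c) + 0.02 * np.random.default_rng(0).normal(size=c.size)

    res = least_squares(lambda th: constitutive(th, c) - data, x0=[1.0, 0.0])
    J = res.jac                                        # Jacobian at the optimum
    sigma2 = np.sum(res.fun**2) / (c.size - len(res.x))
    cov = sigma2 * np.linalg.inv(J.T @ J)              # linearized posterior covariance
    print(res.x, np.sqrt(np.diag(cov)))                # estimates and 1-sigma bars

The off-diagonal entry of cov is the kind of correlation between inferred quantities that the abstract refers to.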
Gaussian Processes (GPs) are flexible non-parametric models with a strong probabilistic interpretation. While they are a standard choice for performing inference on time series, few techniques exist for applying GPs in a streaming setting. Bui et al. (2017) developed an efficient variational approach to train online GPs using sparsity techniques: the whole set of observations is approximated by a smaller set of inducing points (IPs) that is moved around as new data arrive. Both the number and the locations of the IPs greatly affect the performance of the algorithm. In addition to optimizing their locations, we propose to adaptively add new points, based on the properties of the GP and the structure of the data.
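As a toy, batch-mode sketch (not the variational streaming scheme of Bui et al., 2017), the snippet below grows the inducing set of a simple DTC sparse GP greedily, placing each new inducing point where the current sparse model's predictive variance is largest; the kernel, noise level, and greedy rule are all illustrative assumptions.

    # Toy sketch: grow the inducing-point set of a DTC sparse GP one point at a time.
    import numpy as np

    def k(a, b, ls=0.5):                              # squared-exponential kernel
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

    def dtc_predict(Z, X, y, Xs, noise=0.05):
        Kuu = k(Z, Z) + 1e-8 * np.eye(len(Z))
        Kuf, Kus, Kss = k(Z, X), k(Z, Xs), k(Xs, Xs)
        Sigma = Kuu + Kuf @ Kuf.T / noise**2
        mean = Kus.T @ np.linalg.solve(Sigma, Kuf @ y) / noise**2
        var = np.diag(Kss - Kus.T @ np.linalg.solve(Kuu, Kus)
                      + Kus.T @ np.linalg.solve(Sigma, Kus))
        return mean, var

    rng = np.random.default_rng(0)
    X = np.sort(rng.uniform(0, 10, 200))
    y = np.sin(X) + 0.05 * rng.normal(size=X.size)
    Z = X[:2].copy()                                  # start with two inducing points
    for _ in range(8):                                # grow the inducing set one point at a time
        _, var = dtc_predict(Z, X, y, X)
        Z = np.append(Z, X[np.argmax(var)])           # place a new IP where variance is largest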
Parameterization of interatomic forcefields is a necessary first step in performing molecular dynamics simulations. This is a non-trivial global optimization problem involving quantification of multiple empirical variables against one or more properties. We present EZFF, a lightweight Python library for parameterization of several types of interatomic forcefields implemented in several molecular dynamics engines against multiple objectives using genetic-algorithm-based global optimization methods. The EZFF scheme provides unique functionality such as the parameterization of hybrid forcefields composed of multiple forcefield interactions as well as built-in quantification of uncertainty in forcefield parameters and can be easily extended to other forcefield functional forms as well as MD engines.
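The snippet below is a generic illustration and does not use the EZFF API: a bare-bones genetic loop fits Lennard-Jones parameters to a reference energy curve with a single objective, whereas EZFF itself targets multiple objectives, multiple forcefield forms, and external MD engines.

    # Generic illustration only (NOT the EZFF API): a minimal genetic loop that fits
    # Lennard-Jones epsilon/sigma to a reference pair-energy curve.
    import numpy as np

    r = np.linspace(3.0, 8.0, 60)                         # pair separations [Angstrom]

    def lj(eps, sig, r):                                  # Lennard-Jones pair energy
        return 4 * eps * ((sig / r)**12 - (sig / r)**6)

    reference = lj(0.0103, 3.4, r)                        # target data (argon-like values)

    rng = np.random.default_rng(0)
    pop = rng.uniform([0.001, 2.5], [0.05, 4.5], size=(40, 2))        # initial population
    for gen in range(50):
        fitness = np.array([np.mean((lj(e, s, r) - reference)**2) for e, s in pop])
        parents = pop[np.argsort(fitness)[:10]]           # keep the best candidates
        children = parents[rng.integers(0, 10, 30)] * rng.normal(1.0, 0.02, (30, 2))  # mutate
        pop = np.vstack([parents, children])
    best = pop[np.argmin([np.mean((lj(e, s, r) - reference)**2) for e, s in pop])]
    print("fitted (epsilon, sigma):", best)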
We consider the problem of parameterizing Newman-type models of Li-ion batteries, focusing on quantifying the inherent uncertainty of this process and its dependence on the discharge rate. In order to rule out genuine experimental error and instead isolate the intrinsic uncertainty of model fitting, we concentrate on an idealized setting in which synthetic measurements in the form of voltage curves are manufactured using the full, and most accurate, Newman model with parameter values considered true, whereas parameterization is performed using simplified versions of the model.
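A hedged sketch of the manufactured-data idea (with a made-up reduced model, not an actual Newman solver): synthetic voltage curves are generated from a richer "true" model, a simplified model is fitted to them, and the spread of the fitted parameters is compared across discharge rates.

    # Hedged sketch: fit a hypothetical reduced voltage model to manufactured curves
    # and compare the fitting uncertainty at two discharge rates.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 3600, 200)                          # discharge time [s]

    def simple_model(t, R, k, I):                          # hypothetical reduced model
        return 4.1 - I * R - k * I * np.sqrt(t)

    for I in (1.0, 3.0):                                   # two discharge currents [A]
        # the "truth" includes a term the simplified model cannot represent
        v_true = 4.1 - I * 0.05 - 0.002 * I * np.sqrt(t) - 1e-5 * I * t
        popt, pcov = curve_fit(lambda t, R, k: simple_model(t, R, k, I), t, v_true)
        print(f"I={I} A: R={popt[0]:.4f} +/- {np.sqrt(pcov[0, 0]):.1e}, "
              f"k={popt[1]:.5f} +/- {np.sqrt(pcov[1, 1]):.1e}")

Because the residual here is purely structural (no measurement noise is added), the growth of the reported parameter spread with current illustrates how the intrinsic fitting uncertainty can depend on the discharge rate.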