
Multi-fidelity machine-learning with uncertainty quantification and Bayesian optimization for materials design: Application to ternary random alloys

Posted by Anh Tran
Publication date: 2020
Research language: English





We present a scale-bridging approach based on a multi-fidelity (MF) machine-learning (ML) framework that leverages Gaussian processes (GP) to fuse atomistic computational model predictions across multiple levels of fidelity. Through the posterior variance of the MFGP, our framework naturally enables uncertainty quantification, providing estimates of confidence in the predictions. We use Density Functional Theory as the high-fidelity model, while an ML interatomic potential provides the low-fidelity predictions. Practical materials design efficiency is demonstrated by reproducing the ternary composition dependence of a quantity of interest (bulk modulus) across the full aluminum-niobium-titanium ternary random alloy composition space. The MFGP is then coupled to a Bayesian optimization procedure, and the computational efficiency of this approach is demonstrated by performing an on-the-fly search for the global optimum of bulk modulus in the ternary composition space. The framework presented in this manuscript is the first application of MFGP to atomistic materials simulations fusing predictions between Density Functional Theory and classical interatomic potential calculations.
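
The fusion-and-search loop described above can be illustrated with a short, self-contained sketch. This is not the authors' implementation: the DFT and ML-potential evaluations are replaced by hypothetical synthetic functions (`high_fidelity`, `low_fidelity`), the MFGP is approximated by a simple Kennedy-O'Hagan-style autoregressive correction built from two scikit-learn Gaussian processes, and a single expected-improvement step stands in for the full Bayesian optimization loop.

```python
# Minimal two-fidelity GP + Bayesian-optimization sketch (not the authors' code).
# `high_fidelity` and `low_fidelity` are synthetic stand-ins for DFT and ML-potential
# bulk-modulus predictions over a ternary composition (x_Al, x_Nb, x_Ti = 1 - x_Al - x_Nb).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def low_fidelity(x):            # cheap model (stand-in for the ML interatomic potential)
    return 100 + 60 * x[:, 0] + 40 * x[:, 1] - 30 * x[:, 0] * x[:, 1]

def high_fidelity(x):           # expensive model (stand-in for DFT)
    return low_fidelity(x) + 8 * np.sin(6 * x[:, 0]) - 5 * x[:, 1] ** 2

def sample_simplex(n):          # compositions (x_Al, x_Nb) with x_Al + x_Nb <= 1
    a = rng.random((n, 2))
    return np.where(a.sum(1, keepdims=True) > 1, 1 - a, a)

X_lf, X_hf = sample_simplex(200), sample_simplex(12)     # many cheap, few expensive points
y_lf, y_hf = low_fidelity(X_lf), high_fidelity(X_hf)

kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
gp_lf = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_lf, y_lf)

# Kennedy-O'Hagan-style autoregressive correction:
# y_hf(x) ~= rho * y_lf(x) + delta(x), with delta modelled by a second GP.
mu_lf_at_hf = gp_lf.predict(X_hf)
rho = np.polyfit(mu_lf_at_hf, y_hf, 1)[0]
gp_delta = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    X_hf, y_hf - rho * mu_lf_at_hf)

def mf_predict(x):
    """Fused posterior mean/std (the correction GP's std is used as the UQ proxy)."""
    mu_d, sd_d = gp_delta.predict(x, return_std=True)
    return rho * gp_lf.predict(x) + mu_d, sd_d

# One Bayesian-optimization step: expected improvement over a dense candidate set.
cand = sample_simplex(5000)
mu, sd = mf_predict(cand)
best = y_hf.max()
z = (mu - best) / np.maximum(sd, 1e-9)
ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
x_next = cand[np.argmax(ei)]
print("next composition to evaluate with the high-fidelity model:", x_next)
```

In an actual workflow, `x_next` would be evaluated with the high-fidelity model, appended to the expensive data set, and the correction GP refit before the next acquisition step.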


Read also

Rui Tuo, Wenjia Wang (2020)
Bayesian optimization is a class of global optimization techniques. It regards the underlying objective function as a realization of a Gaussian process. Although the outputs of Bayesian optimization are random according to the Gaussian process assumption, quantification of this uncertainty is rarely studied in the literature. In this work, we propose a novel approach to assess the output uncertainty of Bayesian optimization algorithms, in terms of constructing confidence regions of the maximum point or value of the objective function. These regions can be computed efficiently, and their confidence levels are guaranteed by newly developed uniform error bounds for sequential Gaussian process regression. Our theory provides a unified uncertainty quantification framework for all existing sequential sampling policies and stopping criteria.
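
A minimal sketch of the kind of output region described above, using a generic GP credible band in place of the paper's uniform error bounds (the multiplier `beta` below is a placeholder, not the bound derived in the paper):

```python
# Confidence region for the maximizer of an objective modelled by a GP.
# If lower(x) <= f(x) <= upper(x) everywhere, then max f lies in [max lower, max upper],
# and any x with upper(x) < max lower cannot be the maximizer.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x              # illustrative "unknown" objective
X = np.random.default_rng(1).uniform(0, 3, 15).reshape(-1, 1)
y = f(X).ravel() + 0.05 * np.random.default_rng(2).normal(size=15)

gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=0.05**2, normalize_y=True).fit(X, y)

grid = np.linspace(0, 3, 600).reshape(-1, 1)
mu, sd = gp.predict(grid, return_std=True)
beta = 2.0                                          # placeholder confidence multiplier
upper, lower = mu + beta * sd, mu - beta * sd

region = grid[upper >= lower.max()]                 # candidate locations of the maximizer
print("confidence interval for the maximum value: [%.3f, %.3f]" % (lower.max(), upper.max()))
print("candidate maximizer region spans x in [%.3f, %.3f]" % (region.min(), region.max()))
```
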
Meta-learning, or learning to learn, offers a principled framework for few-shot learning. It leverages data from multiple related learning tasks to infer an inductive bias that enables fast adaptation on a new task. The application of meta-learning was recently proposed for learning how to demodulate from few pilots. The idea is to use pilots received and stored for offline use from multiple devices in order to meta-learn an adaptation procedure with the aim of speeding up online training on new devices. Standard frequentist learning, which can yield relatively accurate hard classification decisions, is known to be poorly calibrated, particularly in the small-data regime. Poor calibration implies that the soft scores output by the demodulator are inaccurate estimates of the true probability of correct demodulation. In this work, we introduce the use of Bayesian meta-learning via variational inference for the purpose of obtaining well-calibrated few-pilot demodulators. In a Bayesian framework, each neural network weight is represented by a distribution, capturing epistemic uncertainty. Bayesian meta-learning optimizes over the prior distribution of the weights. The resulting Bayesian ensembles offer better calibrated soft decisions, at the computational cost of running multiple instances of the neural network for demodulation. Numerical results for single-input single-output Rayleigh fading channels with transmitter non-linearities are provided that compare symbol error rate and expected calibration error for both frequentist and Bayesian meta-learning, illustrating how the latter is both more accurate and better-calibrated.
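
As a rough illustration of the calibration comparison, the sketch below computes the expected calibration error (ECE) for a single overconfident soft-decision model and for an averaged ensemble, which is how a Bayesian posterior over network weights is typically turned into soft scores. The synthetic logits are hypothetical stand-ins for the demodulator outputs.

```python
# Expected calibration error (ECE) for hard-decision demodulation, comparing a single
# overconfident model against an averaged ensemble of softmax outputs (a stand-in for
# Bayesian model averaging). The logits are synthetic, not from the paper's demodulator.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ece(probs, labels, n_bins=10):
    """ECE: |accuracy - confidence| averaged over confidence bins."""
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges, err = np.linspace(0.0, 1.0, n_bins + 1), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf > lo) & (conf <= hi)
        if m.any():
            err += m.mean() * abs(correct[m].mean() - conf[m].mean())
    return err

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=2000)                       # 4-ary symbols
logits = 2.5 * np.eye(4)[labels] + rng.normal(size=(2000, 4))

p_frequentist = softmax(4.0 * logits)                        # overconfident point model
ensemble = [softmax(4.0 * (logits + rng.normal(size=logits.shape))) for _ in range(10)]
p_bayesian = np.mean(ensemble, axis=0)                       # averaged soft decisions

print("ECE (frequentist): %.3f" % ece(p_frequentist, labels))
print("ECE (Bayesian ensemble): %.3f" % ece(p_bayesian, labels))
```
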
Scenario optimization is by now a well established technique to perform designs in the presence of uncertainty. It relies on domain knowledge integrated with first-hand information that comes from data and generates solutions that are also accompanied by precise statements of reliability. In this paper, following recent developments in (Garatti and Campi, 2019), we venture beyond the traditional set-up of scenario optimization by analyzing the concept of constraints relaxation. By a solid theoretical underpinning, this new paradigm furnishes fundamental tools to perform designs that meet a proper compromise between robustness and performance. After suitably expanding the scope of constraints relaxation as proposed in (Garatti and Campi, 2019), we focus on various classical Support Vector methods in machine learning - including SVM (Support Vector Machine), SVR (Support Vector Regression) and SVDD (Support Vector Data Description) - and derive new results for the ability of these methods to generalize.
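
The robustness-versus-performance compromise that constraints relaxation governs can be pictured with the ordinary soft-margin SVM, where each training point is one scenario constraint and the parameter C sets how cheaply constraints may be violated. This is only a proxy for the scenario-theory generalization results, not the machinery of (Garatti and Campi, 2019):

```python
# Soft-margin SVM as a picture of constraints relaxation: each training point is a
# "scenario" constraint, and C controls how many constraints end up relaxed (slack > 0).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(+1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

for C in (100.0, 1.0, 0.01):                     # tight -> heavily relaxed constraints
    clf = SVC(kernel="linear", C=C).fit(X, y)
    signs = y * 2 - 1
    relaxed = np.sum(signs * clf.decision_function(X) < 1)   # points with nonzero slack
    print(f"C={C:>6}: {relaxed:3d} relaxed constraints, train accuracy {clf.score(X, y):.3f}")
```
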
We develop a fast multi-fidelity modeling method for very complex correlations between high- and low-fidelity data by working in modal space to extract the proper correlation function. We apply this method to infer the amplitude of motion of a flexible marine riser in cross-flow, subject to vortex-induced vibrations (VIV). VIV are driven by an absolute instability in the flow, which imposes a frequency (Strouhal) law that requires a matching with the impedance of the structure; this matching is easily achieved because of the rapid parametric variation of the added mass force. As a result, the wavenumber of the riser spatial response is within narrow bands of uncertainty. Hence, an error in wavenumber prediction can cause significant phase-related errors in the shape of the amplitude of response along the riser, rendering correlation between low- and high-fidelity data very complex. Working in modal space as outlined herein, dense data from low-fidelity data, provided by the semi-empirical computer code VIVA, can correlate in modal space with few high-fidelity data, obtained from experiments or fully-resolved CFD simulations, to correct both phase and amplitude and provide predictions that agree very well overall with the correct shape of the amplitude response. We also quantify the uncertainty in the prediction using Bayesian modeling and exploit this uncertainty to formulate an active learning strategy for the best possible location of the sensors providing the high-fidelity measurements.
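
A schematic of the modal-space idea, with synthetic response profiles standing in for VIVA and CFD/experimental data: dense low-fidelity snapshots supply a modal basis, and a correction map between low- and high-fidelity modal coefficients is fitted from only a few expensive profiles. The profiles, parameter range, and linear correction map below are illustrative assumptions, not the paper's method.

```python
# Schematic of multi-fidelity correction in modal space (synthetic profiles only).
# Dense low-fidelity snapshots define the modal basis; a linear map between low- and
# high-fidelity modal coefficients, fitted on a few expensive profiles, attempts to
# correct both amplitude and phase of the low-fidelity prediction.
import numpy as np

z = np.linspace(0.0, 1.0, 200)                      # position along the riser
thetas = np.linspace(0.9, 1.1, 40)                  # flow-condition parameter

def lf_profile(t):      # low-fidelity response: amplitude and wavenumber slightly off
    return np.sin(2 * np.pi * 3.0 * t * z) * np.exp(-z)

def hf_profile(t):      # "true" high-fidelity response
    return 1.2 * np.sin(2 * np.pi * 3.1 * t * z) * np.exp(-z)

# Modal basis from dense, cheap low-fidelity snapshots.
snapshots = np.stack([lf_profile(t) for t in thetas])          # (40, 200)
modes = np.linalg.svd(snapshots, full_matrices=False)[2][:5]   # leading spatial modes

# Few expensive profiles: fit the map between LF and HF modal coefficients.
t_hf = thetas[::5]                                             # 8 high-fidelity runs
A_lf = np.stack([lf_profile(t) for t in t_hf]) @ modes.T       # (8, 5)
A_hf = np.stack([hf_profile(t) for t in t_hf]) @ modes.T       # (8, 5)
W = np.linalg.lstsq(A_lf, A_hf, rcond=None)[0]                 # coefficient correction map

# Multi-fidelity prediction at a new, unseen condition.
t_new = 1.02
mf_pred = (lf_profile(t_new) @ modes.T) @ W @ modes
truth = hf_profile(t_new)
err = lambda u: np.linalg.norm(u - truth) / np.linalg.norm(truth)
print(f"relative error, raw LF: {err(lf_profile(t_new)):.3f}, modal-space MF: {err(mf_pred):.3f}")
```
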
Equation learning aims to infer differential equation models from data. While a number of studies have shown that differential equation models can be successfully identified when the data are sufficiently detailed and corrupted with relatively small amounts of noise, the relationship between observation noise and uncertainty in the learned differential equation models remains unexplored. We demonstrate that for noisy data sets there exists great variation in both the structure of the learned differential equation models as well as the parameter values. We explore how to combine data sets to quantify uncertainty in the learned models, and at the same time draw mechanistic conclusions about the target differential equations. We generate noisy data using a stochastic agent-based model and combine equation learning methods with approximate Bayesian computation (ABC) to show that the correct differential equation model can be successfully learned from data, while a quantification of uncertainty is given by a posterior distribution in parameter space.
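
The uncertainty-quantification step can be sketched with rejection ABC for a fixed candidate model (logistic growth here, a hypothetical stand-in for the learned equation and the agent-based data); the accepted parameter samples approximate the posterior distribution referred to above.

```python
# Sketch of the uncertainty-quantification step via rejection ABC, for a fixed
# candidate model du/dt = r*u*(1 - u/K). The noisy data are synthetic stand-ins for
# agent-based simulation output; structure identification of the equation is not shown.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 10.0, 25)

def simulate(r, K, u0=5.0):
    sol = solve_ivp(lambda t, u: r * u * (1 - u / K), (0.0, 10.0), [u0], t_eval=t_obs)
    return sol.y[0]

data = simulate(r=0.8, K=50.0) + rng.normal(0.0, 2.0, size=t_obs.size)   # noisy observations

# Rejection ABC: draw (r, K) from a broad prior, keep draws whose simulated
# trajectory is close to the data; the accepted set approximates the posterior.
n_draws, eps = 5000, 3.0                          # eps: RMS-distance acceptance threshold
r_draws = rng.uniform(0.1, 2.0, n_draws)
K_draws = rng.uniform(20.0, 100.0, n_draws)
accepted = [(r, K) for r, K in zip(r_draws, K_draws)
            if np.sqrt(np.mean((simulate(r, K) - data) ** 2)) < eps]

acc = np.array(accepted)
if len(acc):
    print(f"accepted {len(acc)}/{n_draws} draws; "
          f"posterior r = {acc[:, 0].mean():.2f} +/- {acc[:, 0].std():.2f}, "
          f"K = {acc[:, 1].mean():.1f} +/- {acc[:, 1].std():.1f}")
```
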