We present a scale-bridging approach based on a multi-fidelity (MF) machine-learning (ML) framework that leverages Gaussian processes (GP) to fuse atomistic computational model predictions across multiple levels of fidelity. Through the posterior variance of the MFGP, the framework naturally enables uncertainty quantification, providing estimates of confidence in its predictions. Density Functional Theory (DFT) calculations serve as the high-fidelity predictions, while an ML interatomic potential provides the low-fidelity predictions. Practical materials-design efficiency is demonstrated by reproducing the composition dependence of a quantity of interest (the bulk modulus) across the full aluminum-niobium-titanium ternary random-alloy composition space. The MFGP is then coupled to a Bayesian optimization procedure, and the computational efficiency of this approach is demonstrated by performing an on-the-fly search for the global optimum of the bulk modulus over the ternary composition space. The framework presented in this manuscript is the first application of MFGP to atomistic materials simulations fusing predictions between Density Functional Theory and classical interatomic potential calculations.
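To make the workflow concrete, the following is a minimal sketch of the two ingredients described above: a multi-fidelity GP in the linear autoregressive (Kennedy-O'Hagan) form, y_hi(x) = rho * y_lo(x) + delta(x), followed by one Bayesian-optimization step using expected improvement on the fused posterior. The toy 1-D functions, variable names, and the independent-variance combination are illustrative assumptions for this sketch, not the authors' implementation, which fuses DFT (high fidelity) and ML-potential (low fidelity) predictions over a ternary composition space.

```python
# Minimal multi-fidelity GP + Bayesian optimization sketch (illustrative only).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def low_fidelity(x):            # cheap surrogate (stands in for the ML potential)
    return np.sin(8 * np.pi * x)

def high_fidelity(x):           # expensive reference (stands in for DFT)
    return (x - 0.5) * low_fidelity(x) + 0.1 * x

# Many cheap low-fidelity samples, few expensive high-fidelity samples.
X_lo = np.linspace(0.0, 1.0, 40)[:, None]
X_hi = np.linspace(0.0, 1.0, 6)[:, None]
y_lo, y_hi = low_fidelity(X_lo).ravel(), high_fidelity(X_hi).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=0.1)
gp_lo = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_lo, y_lo)

# Estimate the scale factor rho and model the discrepancy delta(x) with a second GP.
mu_lo_at_hi, _ = gp_lo.predict(X_hi, return_std=True)
rho = np.polyfit(mu_lo_at_hi, y_hi, 1)[0]
gp_delta = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
    X_hi, y_hi - rho * mu_lo_at_hi)

def mf_predict(X):
    """Fused mean and (approximate) std, combining both fidelity levels."""
    m_lo, s_lo = gp_lo.predict(X, return_std=True)
    m_d, s_d = gp_delta.predict(X, return_std=True)
    return rho * m_lo + m_d, np.sqrt((rho * s_lo) ** 2 + s_d ** 2)

# One Bayesian-optimization step: expected improvement on the fused posterior
# selects the next high-fidelity (DFT-like) evaluation.
X_grid = np.linspace(0.0, 1.0, 500)[:, None]
mu, sigma = mf_predict(X_grid)
best = y_hi.max()
z = (mu - best) / np.maximum(sigma, 1e-9)
ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
x_next = X_grid[np.argmax(ei)]
print(f"next high-fidelity evaluation suggested at x = {x_next.item():.3f}")
```

The fused posterior variance is what drives both the uncertainty estimates and the acquisition function; in the manuscript's setting the 1-D input would be replaced by the ternary composition and the objective by the bulk modulus.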
Bayesian optimization is a class of global optimization techniques. It regards the underlying objective function as a realization of a Gaussian process. Although the outputs of Bayesian optimization are random according to the Gaussian process assumption […]
Meta-learning, or learning to learn, offers a principled framework for few-shot learning. It leverages data from multiple related learning tasks to infer an inductive bias that enables fast adaptation on a new task. The application of meta-learning […]
Scenario optimization is by now a well established technique to perform designs in the presence of uncertainty. It relies on domain knowledge integrated with first-hand information that comes from data and generates solutions that are also accompanied […]
We develop a fast multi-fidelity modeling method for very complex correlations between high- and low-fidelity data by working in modal space to extract the proper correlation function. We apply this method to infer the amplitude of motion of a flexible […]
Equation learning aims to infer differential equation models from data. While a number of studies have shown that differential equation models can be successfully identified when the data are sufficiently detailed and corrupted with relatively small […]