
Enabling robust offline active learning for machine learning potentials using simple physics-based priors

Posted by: Muhammed Shuaibi
Publication date: 2020
Research field: Physics
Paper language: English





Machine learning surrogate models for quantum mechanical simulations have enabled the field to study material and molecular systems efficiently and accurately. Such models typically rely either on a substantial amount of data to make reliable predictions of the potential energy landscape, or on careful active learning and uncertainty estimates. When starting from small datasets, convergence of active learning approaches is a major outstanding challenge, which has limited most demonstrations to online active learning. In this work we demonstrate a $\Delta$-machine learning approach that enables stable convergence in offline active learning strategies by avoiding unphysical configurations. We demonstrate our framework's capabilities on a structural relaxation, a transition state calculation, and a molecular dynamics simulation, cutting the number of first-principles calculations by 70-90%. The approach is incorporated into and developed alongside AMPtorch, an open-source machine learning potential package, together with interactive Google Colab notebook examples.
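To make the $\Delta$-learning idea concrete, here is a minimal sketch of training a model on the residual between reference energies and a cheap physics-based prior. The Lennard-Jones prior, the data, and the training setup are illustrative stand-ins, not AMPtorch's API; the paper's actual prior and descriptors differ.

```python
# Minimal sketch of Delta-machine learning: fit the ML model to the residual
# between expensive reference energies and a cheap physics-based prior, so
# the prior's qualitative shape (e.g. a short-range repulsive wall) survives
# even where training data is sparse. All names here are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lj_prior(r, epsilon=0.1, sigma=2.5):
    """Hypothetical Lennard-Jones prior for a diatomic (energy vs. distance)."""
    x = (sigma / r) ** 6
    return 4.0 * epsilon * (x**2 - x)

# Pretend these came from a handful of first-principles single points.
r_train = np.linspace(2.2, 5.0, 12)
e_ref = lj_prior(r_train, epsilon=0.12, sigma=2.4) + 0.01 * np.sin(3 * r_train)

# 1) subtract the prior, 2) fit the small residual, 3) add the prior back.
residual = e_ref - lj_prior(r_train)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(r_train.reshape(-1, 1), residual)

def delta_ml_energy(r):
    return lj_prior(r) + model.predict(np.asarray(r).reshape(-1, 1))

# Below the training range, the repulsive wall of the prior dominates, so the
# combined model avoids the unphysical extrapolations a bare ML model can make.
print(delta_ml_energy(np.array([1.5, 3.0])))
```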




Read also

Machine learning models, trained on data from ab initio quantum simulations, are yielding molecular dynamics potentials with unprecedented accuracy. One limiting factor is the quantity of available training data, which can be expensive to obtain. A quantum simulation often provides all atomic forces, in addition to the total energy of the system. These forces provide much more information than the energy alone. It may appear that training a model to this large quantity of force data would introduce significant computational costs. Actually, training to all available force data should only be a few times more expensive than training to energies alone. Here, we present a new algorithm for efficient force training, and benchmark its accuracy by training to forces from real-world datasets for organic chemistry and bulk aluminum.
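The key observation, that forces are gradients of the model's energy, is easy to show in code. The sketch below trains a toy energy network against both energies and autograd-derived forces; the architecture, loss weighting, and data are placeholders, not the algorithm benchmarked in the paper.

```python
# Joint energy + force training: forces are obtained as -dE/dx by automatic
# differentiation of the predicted energy, so each structure supervises the
# model with 3N force labels in addition to one energy label.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def loss_fn(coords, e_ref, f_ref, w_force=10.0):
    coords = coords.clone().requires_grad_(True)
    energy = model(coords).sum()
    # Predicted forces are the negative gradient of the predicted energy.
    forces = -torch.autograd.grad(energy, coords, create_graph=True)[0]
    return (energy - e_ref).pow(2) + w_force * (forces - f_ref).pow(2).mean()

# One toy training step on fabricated data (one "atom", three coordinates).
coords, e_ref, f_ref = torch.randn(1, 3), torch.tensor(0.5), torch.randn(1, 3)
opt.zero_grad()
loss_fn(coords, e_ref, f_ref).backward()
opt.step()
```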
The discovery of new multicomponent inorganic compounds can provide direct solutions to many scientific and engineering challenges, yet the vast size of the uncharted material space dwarfs current synthesis throughput. While computational crystal structure prediction is expected to mitigate this frustration, the NP-hardness and steep costs of density functional theory (DFT) calculations prohibit material exploration at scale. Herein, we introduce SPINNER, a highly efficient and reliable structure-prediction framework based on exhaustive random searches and evolutionary algorithms, which is completely free from empiricism. Empowered by accurate neural network potentials, the program can navigate the configuration space more than 10$^{2}$-fold faster than DFT. In blind tests on 60 ternary compositions diversely selected from the experimental database, SPINNER successfully identifies experimental (or theoretically more stable) phases for ~80% of materials within 5000 generations, entailing up to half a million structure evaluations for each composition. When benchmarked against previous data-mining or DFT-based evolutionary predictions, SPINNER identifies more stable phases in the majority of cases. By developing a reliable and fast structure-prediction framework, this work opens the door to large-scale, unbounded computational exploration of undiscovered inorganic crystals.
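Stripped of all domain detail, the search loop such frameworks run looks roughly like the toy sketch below, where a scalar stands in for a crystal structure and a noisy function for the neural-network-potential relaxation; SPINNER's actual mutation operators, structure representation, and relaxations are far more involved.

```python
# Caricature of an evolutionary structure search: propose candidates, score
# them with a cheap surrogate, keep the lowest-energy survivors, and mutate.
# Everything here is illustrative, not SPINNER's implementation.
import random

def surrogate_energy(x):
    # Stand-in for relaxation + energy evaluation with a neural network potential.
    return (x - 1.7) ** 2 + 0.3 * random.random()

population = [random.uniform(-5, 5) for _ in range(20)]  # random initial "structures"
for generation in range(50):
    survivors = sorted(population, key=surrogate_energy)[:5]
    population = survivors + [
        s + random.gauss(0, 0.5) for s in survivors for _ in range(3)
    ]

print("best candidate:", min(population, key=surrogate_energy))
```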
Machine learning of atomic-scale properties is revolutionizing molecular modelling, making it possible to evaluate inter-atomic potentials with first-principles accuracy at a fraction of the cost. The accuracy, speed and reliability of machine-learning potentials, however, depend strongly on the way atomic configurations are represented, i.e. the choice of descriptors used as input for the machine learning method. The raw Cartesian coordinates are typically transformed into fingerprints, or symmetry functions, that are designed to encode, in addition to the structure, important properties of the potential-energy surface such as its invariance with respect to rotation, translation and permutation of like atoms. Here we discuss automatic protocols to select a number of fingerprints out of a large pool of candidates, based on the correlations that are intrinsic to the training data. This procedure can greatly simplify the construction of neural network potentials that strike the best balance between accuracy and computational efficiency, and has the potential to accelerate by orders of magnitude the evaluation of Gaussian Approximation Potentials based on the Smooth Overlap of Atomic Positions kernel. We present applications to the construction of neural network potentials for water and for an Al-Mg-Si alloy, and to the prediction of the formation energies of small organic molecules using Gaussian process regression.
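One simple realization of correlation-driven selection is to rank candidate fingerprints by CUR-style leverage scores computed from the training feature matrix and keep the top-k columns; the names and data below are illustrative, and the paper's protocols may differ in detail.

```python
# Rank candidate fingerprints by how much they contribute to the dominant
# correlations in the training data (leverage scores from an SVD), then keep
# only the most informative columns.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                     # 500 structures x 40 candidate fingerprints
X[:, 10] = X[:, 3] + 0.01 * rng.normal(size=500)   # a nearly redundant candidate

def leverage_scores(X, rank=8):
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return (Vt[:rank] ** 2).sum(axis=0)            # weight of each column in top singular vectors

k = 12
selected = np.argsort(leverage_scores(X))[::-1][:k]
print("kept fingerprint indices:", np.sort(selected))
```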
We propose a method to describe consistent equations of state (EOS) for arbitrary systems. Complex EOS are traditionally obtained by fitting suitable analytical expressions to thermophysical data. A key aspect of an EOS is that the relationships between state variables are given by derivatives of the system's free energy. In this work, we model the free energy with an artificial neural network and use automatic differentiation to directly learn the derivatives of the free energy on two different data sets: the van der Waals system and published data for the Lennard-Jones fluid. We show that this method is advantageous over direct learning of thermodynamic properties (i.e. not as derivatives of the free energy, but as independent properties), in terms of both accuracy and the exact preservation of the Maxwell relations. Furthermore, the method can implicitly solve the integration problem of computing the free energy of a system without explicit integration.
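The construction is straightforward to sketch with automatic differentiation: parametrize the free energy F(V, T) with a network and fit observables as derivatives of F, e.g. pressure P = -∂F/∂V. The toy van der Waals data and architecture below are illustrative only.

```python
# Fit a neural network free energy F(V, T) by matching its autograd
# derivative -dF/dV to reference pressures. Because every property is a
# derivative of one F, Maxwell relations are preserved by construction.
import torch

F = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(F.parameters(), lr=1e-3)

def vdw_pressure(V, T, a=1.0, b=0.1, R=1.0):
    return R * T / (V - b) - a / V**2              # toy reference equation of state

V = torch.rand(256, 1) * 2.0 + 0.5
T = torch.rand(256, 1) * 2.0 + 0.5
P_ref = vdw_pressure(V, T)

for step in range(2000):
    Vg = V.clone().requires_grad_(True)
    f = F(torch.cat([Vg, T], dim=1)).sum()
    P_pred = -torch.autograd.grad(f, Vg, create_graph=True)[0]  # P = -dF/dV
    loss = (P_pred - P_ref).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```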
Material scientists are increasingly adopting machine learning (ML) for making potentially important decisions, such as the discovery, development, optimization, synthesis and characterization of materials. However, despite ML's impressive performance in commercial applications, several unique challenges exist when applying ML in materials science. In this context, the contributions of this work are twofold. First, we identify common pitfalls of existing ML techniques when learning from underrepresented/imbalanced material data. Specifically, we show that with imbalanced data, standard methods for assessing the quality of ML models break down and lead to misleading conclusions. Furthermore, we find that the model's own confidence score cannot be trusted and that model introspection methods (using simpler models) do not help, as they result in a loss of predictive performance (the reliability-explainability trade-off). Second, to overcome these challenges, we propose a general-purpose explainable and reliable machine-learning framework. Specifically, we propose a novel pipeline that employs an ensemble of simpler models to reliably predict material properties. We also propose a transfer learning technique and show that the performance loss due to a model's simplicity can be overcome by exploiting correlations among different material properties. A new evaluation metric and a trust score to better quantify confidence in the predictions are also proposed. To improve interpretability, we add a rationale-generator component to our framework which provides both model-level and decision-level explanations. Finally, we demonstrate the versatility of our technique on two applications: 1) predicting properties of crystalline compounds, and 2) identifying novel, potentially stable solar cell materials.
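As a rough illustration of the ensemble-plus-trust-score idea, the sketch below bags several shallow, interpretable trees and uses the spread of their predictions as a confidence signal; the paper's pipeline, transfer learning step, and exact trust score are more elaborate, and every name here is a stand-in.

```python
# Ensemble of simple models with a spread-based trust score: low disagreement
# among the base learners is read as higher confidence in the prediction.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                            # toy material descriptors
y = X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=300)   # toy target property

ensemble = []
for seed in range(10):
    idx = rng.integers(0, len(X), len(X))                # bootstrap resample
    ensemble.append(DecisionTreeRegressor(max_depth=3, random_state=seed).fit(X[idx], y[idx]))

x_new = rng.normal(size=(1, 5))
preds = np.array([m.predict(x_new)[0] for m in ensemble])
print(f"prediction={preds.mean():.3f}  trust (1/std)={1.0 / (preds.std() + 1e-9):.2f}")
```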