
Atom2Vec: learning atoms for materials discovery

Added by Quan Zhou
Publication date: 2018
Field: Physics
Language: English





Exciting advances have been made in artificial intelligence (AI) during the past decades. Among them, applications of machine learning (ML) and deep learning techniques have brought human-competitive performance to tasks in many fields, including image recognition, speech recognition, and natural language understanding. Even in Go, an ancient game of profound complexity, AI players have convincingly beaten human world champions, both with and without learning from human play. In this work, we show that our unsupervised machines (Atom2Vec) can learn the basic properties of atoms by themselves from an extensive database of known compounds and materials. These learned properties are represented as high-dimensional vectors, and clustering the atoms in vector space classifies them into meaningful groups consistent with human knowledge. We use the atom vectors as basic input units for neural networks and other ML models designed and trained to predict materials properties, and these models demonstrate significant accuracy.
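The core idea lends itself to a compact illustration: count how often each atom occurs in each "environment" (the rest of a compound's formula), factorize that atom-environment matrix, and cluster the resulting atom vectors. The sketch below is a minimal Python illustration of that idea using a toy compound list and illustrative dimensions, not the database or exact procedure from the paper.

```python
# Minimal sketch of the Atom2Vec idea: build an atom-environment count matrix
# from a toy list of known compounds, factorize it with SVD, and cluster the
# resulting atom vectors. Compound list and dimensions are illustrative only.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

# Toy "database" of known compounds, each given as (element, count) tuples.
compounds = [
    [("Na", 1), ("Cl", 1)], [("K", 1), ("Cl", 1)], [("Na", 1), ("Br", 1)],
    [("K", 1), ("Br", 1)], [("Mg", 1), ("O", 1)], [("Ca", 1), ("O", 1)],
    [("Mg", 1), ("S", 1)], [("Ca", 1), ("S", 1)],
]

# Count how often each atom appears with each "environment" (rest of the formula).
counts = Counter()
for formula in compounds:
    for i, (atom, n) in enumerate(formula):
        env = tuple(sorted((a, m) for j, (a, m) in enumerate(formula) if j != i))
        counts[(atom, env)] += 1

atoms = sorted({a for a, _ in counts})
envs = sorted({e for _, e in counts})
M = np.zeros((len(atoms), len(envs)))
for (a, e), c in counts.items():
    M[atoms.index(a), envs.index(e)] = c

# SVD of the atom-environment matrix; scaled left singular vectors serve as
# the learned atom vectors.
U, S, _ = np.linalg.svd(M, full_matrices=False)
k = 4                                   # embedding dimension (illustrative)
atom_vectors = U[:, :k] * S[:k]

# Clustering the atom vectors groups elements that occur in similar environments.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(atom_vectors)
print(dict(zip(atoms, labels)))
```

In this toy setting the clustering should group Na with K, Cl with Br, Mg with Ca, and O with S, which is the kind of chemically meaningful grouping the paper reports when the full compound database is used.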



Related Research

Chi Chen, Zhi Deng, Richard Tran (2017)
In this work, we present a highly accurate spectral neighbor analysis potential (SNAP) model for molybdenum (Mo) developed through the rigorous application of machine learning techniques on large materials data sets. Despite Mo's importance as a structural metal, existing force fields for Mo based on the embedded atom and modified embedded atom methods still do not provide satisfactory accuracy on many properties. We will show that by fitting to the energies, forces and stress tensors of a large density functional theory (DFT)-computed dataset on a diverse set of Mo structures, a Mo SNAP model can be developed that achieves close to DFT accuracy in the prediction of a broad range of properties, including energies, forces, stresses, elastic constants, melting point, phonon spectra, surface energies, grain boundary energies, etc. We will outline a systematic model development process, which includes a rigorous approach to structural selection based on principal component analysis, as well as a differential evolution algorithm for optimizing the hyperparameters in the model fitting so that both the model error and the property prediction error can be simultaneously lowered. We expect that this newly developed Mo SNAP model will find broad applications in large-scale, long-timescale simulations.
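As a rough illustration of the hyperparameter-optimization step described above, the sketch below uses SciPy's differential evolution to tune a simple kernel-ridge surrogate against a combined objective (fitting error plus a property-prediction error). The toy data, the surrogate model, and the weighting of the two error terms are assumptions for illustration; this is not the SNAP fitting code.

```python
# Minimal sketch (not the authors' code) of tuning model hyperparameters with
# differential evolution against a combined error objective.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))          # stand-in structure descriptors
y_energy = np.sin(X).sum(axis=1)               # stand-in DFT energies
y_property = np.cos(X).sum(axis=1)             # stand-in derived property

def objective(params):
    """Combined loss: fitting error on energies plus error on a target property."""
    log_alpha, log_gamma = params
    model = KernelRidge(alpha=10**log_alpha, kernel="rbf", gamma=10**log_gamma)
    energy_err = -cross_val_score(model, X, y_energy, cv=3,
                                  scoring="neg_mean_absolute_error").mean()
    property_err = -cross_val_score(model, X, y_property, cv=3,
                                    scoring="neg_mean_absolute_error").mean()
    return energy_err + 0.5 * property_err      # illustrative weighting

# Search log10(alpha) in [-6, 0] and log10(gamma) in [-3, 2].
result = differential_evolution(objective, bounds=[(-6, 0), (-3, 2)],
                                seed=0, maxiter=20, polish=False)
print("best (log alpha, log gamma):", result.x, "loss:", result.fun)
```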
Material scientists are increasingly adopting the use of machine learning (ML) for making potentially important decisions, such as the discovery, development, optimization, synthesis, and characterization of materials. However, despite ML's impressive performance in commercial applications, several unique challenges exist when applying ML in materials science applications. In such a context, the contributions of this work are twofold. First, we identify common pitfalls of existing ML techniques when learning from underrepresented/imbalanced material data. Specifically, we show that with imbalanced data, standard methods for assessing quality of ML models break down and lead to misleading conclusions. Furthermore, we found that the model's own confidence score cannot be trusted and model introspection methods (using simpler models) do not help as they result in loss of predictive performance (reliability-explainability trade-off). Second, to overcome these challenges, we propose a general-purpose explainable and reliable machine-learning framework. Specifically, we propose a novel pipeline that employs an ensemble of simpler models to reliably predict material properties. We also propose a transfer learning technique and show that the performance loss due to a model's simplicity can be overcome by exploiting correlations among different material properties. A new evaluation metric and a trust score to better quantify the confidence in the predictions are also proposed. To improve the interpretability, we add a rationale generator component to our framework which provides both model-level and decision-level explanations. Finally, we demonstrate the versatility of our technique on two applications: 1) predicting properties of crystalline compounds, and 2) identifying novel potentially stable solar cell materials.
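The central idea of predicting with an ensemble of simpler models, and treating the ensemble's disagreement as a rough confidence signal, can be sketched in a few lines. The following is an assumed, minimal illustration of that concept on toy data; it is not the authors' pipeline, evaluation metric, or trust score.

```python
# Minimal sketch: an ensemble of simple models predicts a property, and the
# spread across the ensemble serves as a crude confidence proxy. Toy data only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 5))            # stand-in composition features
y_train = X_train @ np.array([1.0, -0.5, 0.3, 0.0, 2.0]) \
          + rng.normal(scale=0.1, size=300)    # stand-in target property
X_new = rng.normal(size=(5, 5))                # new candidate materials

# Ensemble of deliberately simple, interpretable base models.
models = [Ridge(alpha=1.0),
          DecisionTreeRegressor(max_depth=4, random_state=0),
          KNeighborsRegressor(n_neighbors=10)]
preds = np.stack([m.fit(X_train, y_train).predict(X_new) for m in models])

ensemble_mean = preds.mean(axis=0)             # reported prediction
ensemble_spread = preds.std(axis=0)            # large spread = lower trust
for mu, sd in zip(ensemble_mean, ensemble_spread):
    print(f"prediction = {mu:+.2f}, ensemble spread = {sd:.2f}")
```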
As data science and machine learning methods are taking on an increasingly important role in the materials research community, there is a need for the development of machine learning software tools that are easy to use (even for nonexperts with no programming ability), provide flexible access to the most important algorithms, and codify best practices of machine learning model development and evaluation. Here, we introduce the Materials Simulation Toolkit for Machine Learning (MAST-ML), an open source Python-based software package designed to broaden and accelerate the use of machine learning in materials science research. MAST-ML provides predefined routines for many input setup, model fitting, and post-analysis tasks, as well as a simple structure for executing a multi-step machine learning model workflow. In this paper, we describe how MAST-ML is used to streamline and accelerate the execution of machine learning problems. We walk through how to acquire and run MAST-ML, demonstrate how to execute different components of a supervised machine learning workflow via a customized input file, and showcase a number of features and analyses conducted automatically during a MAST-ML run. Further, we demonstrate the utility of MAST-ML by showcasing examples of recent materials informatics studies which used MAST-ML to formulate and evaluate various machine learning models for an array of materials applications. Finally, we lay out a vision of how MAST-ML, together with complementary software packages and emerging cyberinfrastructure, can advance the rapidly growing field of materials informatics, with a focus on producing machine learning models easily, reproducibly, and in a manner that facilitates model evolution and improvement in the future.
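For readers unfamiliar with what such a workflow involves, the sketch below shows a bare-bones scikit-learn equivalent of the steps MAST-ML wraps: data splitting, model fitting, cross-validation, and error reporting. It deliberately does not use MAST-ML's input-file interface, and all names and data are illustrative placeholders.

```python
# Minimal sketch of a generic supervised materials-ML workflow (not MAST-ML's API):
# split data, cross-validate a model on the training set, then report held-out error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
X = rng.normal(size=(400, 8))                                 # stand-in descriptors
y = X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.2, size=400)   # stand-in property

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)

# Cross-validated error on the training set, then error on the held-out test set.
cv_mae = -cross_val_score(model, X_train, y_train,
                          cv=KFold(5, shuffle=True, random_state=0),
                          scoring="neg_mean_absolute_error").mean()
model.fit(X_train, y_train)
test_mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"5-fold CV MAE: {cv_mae:.3f}, held-out MAE: {test_mae:.3f}")
```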
Computational study of molecules and materials from first principles is a cornerstone of physics, chemistry, and materials science, but limited by the cost of accurate and precise simulations. In settings involving many simulations, machine learning can reduce these costs, often by orders of magnitude, by interpolating between reference simulations. This requires representations that describe any molecule or material and support interpolation. We comprehensively review and discuss current representations and relations between them, using a unified mathematical framework based on many-body functions, group averaging, and tensor products. For selected state-of-the-art representations, we compare energy predictions for organic molecules, binary alloys, and Al-Ga-In sesquioxides in numerical experiments controlled for data distribution, regression method, and hyper-parameter optimization.
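As a concrete example of a simple representation from this literature, the sketch below computes the Coulomb matrix of a toy molecule and uses its sorted eigenvalue spectrum, which is invariant to atom ordering, as a fixed-length descriptor suitable for regression. The geometry and values are illustrative only.

```python
# Minimal sketch of the Coulomb-matrix representation (Rupp et al.): sorted
# eigenvalues give a permutation-invariant descriptor of a molecule.
import numpy as np

def coulomb_matrix(Z, R):
    """Coulomb matrix: 0.5*Z_i^2.4 on the diagonal, Z_i*Z_j/|R_i-R_j| off-diagonal."""
    n = len(Z)
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                C[i, j] = 0.5 * Z[i] ** 2.4
            else:
                C[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return C

def representation(Z, R):
    """Sorted eigenvalues of the Coulomb matrix: invariant to atom ordering."""
    eigvals = np.linalg.eigvalsh(coulomb_matrix(Z, R))
    return np.sort(eigvals)[::-1]

# Toy water-like geometry (atomic numbers; Cartesian coordinates in angstroms).
Z = np.array([8, 1, 1])
R = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
print(representation(Z, R))
```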
Electronic-structure theory is a strong pillar of materials science. Many different computer codes that employ different approaches are used by the community to solve various scientific problems. Still, the precision of different packages has only recently been scrutinized thoroughly, focusing on a specific task, namely selecting a popular density functional, and using unusually high, extremely precise numerical settings for investigating 71 monoatomic crystals. Little is known, however, about method- and code-specific uncertainties that arise under numerical settings that are commonly used in practice. We shed light on this issue by investigating the deviations in total and relative energies as a function of computational parameters. Using typical settings for basis sets and k-grids, we compare results for 71 elemental and 63 binary solids obtained by three different electronic-structure codes that employ fundamentally different strategies. On the basis of the observed trends, we propose a simple, analytical model for the estimation of the errors associated with the basis-set incompleteness. We cross-validate this model using ternary systems obtained from the NOMAD Repository and discuss how our approach enables the comparison of the heterogeneous data present in computational materials databases.