Graph neural networks (GNNs) have been shown to provide substantial performance improvements for representing and modeling atomistic materials compared with descriptor-based machine-learning models. Most existing GNN models for atomistic predictions are based on interatomic distance information and do not explicitly incorporate bond angles, which are critical for distinguishing many atomic structures; moreover, many material properties are known to be sensitive to slight changes in bond angles. We present the Atomistic Line Graph Neural Network (ALIGNN), a GNN architecture that performs message passing on both the interatomic bond graph and its line graph, whose edges correspond to bond angles. We demonstrate that angle information can be explicitly and efficiently included, leading to improved performance on multiple atomistic prediction tasks. We use ALIGNN models to predict 52 solid-state and molecular properties available in the JARVIS-DFT, Materials Project, and QM9 databases. ALIGNN can outperform some previously reported GNN models on atomistic prediction tasks by up to 85% in accuracy, with better or comparable model training speed.
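The key construction in this abstract is the line graph: each bond (i, j) of the atomistic graph becomes a node of the line graph, and two bonds sharing an atom are connected by a line-graph edge that carries the bond angle as a feature. A minimal sketch of that construction follows; this is not the authors' implementation, and all function and variable names are illustrative.

```python
import numpy as np

def bond_angle(v1, v2):
    """Angle (radians) between two bond vectors meeting at a shared atom."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def line_graph(positions, bonds):
    """Build line-graph edges from an atomistic bond graph.

    bonds: list of (i, j) atom-index pairs. Each line-graph node is a bond;
    two bonds sharing exactly one atom are connected, with the angle at the
    shared atom as the edge feature.
    """
    edges = []
    for a in range(len(bonds)):
        for b in range(a + 1, len(bonds)):
            shared = set(bonds[a]) & set(bonds[b])
            if len(shared) == 1:
                m = shared.pop()  # the common atom
                i = bonds[a][0] if bonds[a][1] == m else bonds[a][1]
                k = bonds[b][0] if bonds[b][1] == m else bonds[b][1]
                edges.append(((a, b), bond_angle(positions[i] - positions[m],
                                                 positions[k] - positions[m])))
    return edges

# Water-like triatomic: the two O-H bonds share the oxygen, giving one angle.
pos = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
print(line_graph(pos, [(0, 1), (0, 2)]))  # one edge, angle ~1.82 rad (~104 deg)
```

In the ALIGNN architecture itself, message passing then alternates between this line graph and the original bond graph; the sketch only shows how the angle-carrying graph is derived.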
Machine learning (ML) has the potential to accelerate the discovery of new materials and shed light on useful properties of existing materials. A key difficulty when applying ML in materials science is that experimental datasets of material properties tend to be small. In this work we show how material descriptors can be learned from the structures present in large-scale datasets of material simulations, and how these descriptors can be used to improve the prediction of an experimental property, the formation energy of a solid. The material descriptors are learned by training a graph neural network to regress simulated formation energies from a material's atomistic structure. Using these learned features for experimental property prediction outperforms existing methods based solely on chemical composition. Moreover, we find that the advantage of our approach increases as the generalization requirements of the task become more stringent, for example when limiting the amount of training data or when generalizing to unseen chemical spaces.
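The transfer-learning recipe described here can be sketched in a few lines: freeze a GNN pretrained on simulated formation energies, use its pooled penultimate-layer output as a fixed-length descriptor, and fit a simple regressor on the small experimental set. The sketch below substitutes a random stub for the encoder and synthetic data for the experimental labels, since both are assumptions outside the abstract.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def encode(structure):
    """Stand-in for the frozen GNN encoder: in the paper's setting this would
    be the pooled penultimate-layer activations of a network trained to
    regress simulated formation energies."""
    return rng.normal(size=64)  # synthetic 64-dimensional descriptor

structures = range(200)  # placeholder for experimental structures
X = np.stack([encode(s) for s in structures])
y = X @ rng.normal(size=64) + 0.1 * rng.normal(size=200)  # synthetic targets

# A lightweight regressor on top of the frozen descriptors is all that is
# retrained on the scarce experimental data.
scores = cross_val_score(Ridge(alpha=1.0), X, y,
                         scoring="neg_mean_absolute_error", cv=5)
print(f"cross-validated MAE: {-scores.mean():.3f}")
```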
Graph neural networks, trained on experimental or calculated data, are becoming an increasingly important tool in computational materials science. Once trained, these networks can make highly accurate predictions at a fraction of the cost of experiments or first-principles calculations of comparable accuracy. However, such networks typically rely on large databases of labelled experiments to train the model, which can be prohibitive in scenarios where data are scarce or expensive to obtain. By building a neural network that provides a measure of confidence in its predictions, we are able to develop an active learning scheme that reduces the amount of labelled data required by identifying the areas of chemical space where the model is most uncertain. We present a scheme for coupling a graph neural network with a Gaussian process to featurise solid-state materials and predict properties, including a measure of confidence in the prediction. We then demonstrate that this scheme can be used in an active learning context to speed up training of the model by selecting the optimal next experiment for obtaining a data label. Compared with choosing the next sample at random, our active learning scheme can double the rate at which the model's performance on a test dataset improves with additional data. This type of uncertainty quantification and active learning has the potential to open up new areas of materials science, where data are scarce and expensive to obtain, to the transformative power of graph neural networks.
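The active-learning loop itself is simple to state: fit the uncertainty-aware model on the labelled set, score every unlabelled candidate by predictive uncertainty, and label the most uncertain one next. A minimal runnable sketch, assuming fixed feature vectors stand in for the GNN featurisation and a plain Gaussian process supplies the uncertainty:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_pool = rng.uniform(-3, 3, size=(300, 2))             # candidate "materials"
y_pool = np.sin(X_pool[:, 0]) * np.cos(X_pool[:, 1])   # hidden property

labelled = list(rng.choice(len(X_pool), 5, replace=False))
for step in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(1.0))
    gp.fit(X_pool[labelled], y_pool[labelled])
    _, std = gp.predict(X_pool, return_std=True)
    std[labelled] = -np.inf            # never re-query an already-labelled point
    labelled.append(int(np.argmax(std)))  # most uncertain candidate is queried next

print(f"labelled set grew from 5 to {len(labelled)} points")
```

The random-sampling baseline the abstract compares against is the same loop with `np.argmax(std)` replaced by a random draw from the unlabelled pool.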
Structure is the most basic and important property of crystalline solids; it directly or indirectly determines most materials characteristics. However, predicting the crystal structure of solids remains a formidable and not fully solved problem. Standard theoretical tools for this task are computationally expensive and at times inaccurate. Here we present an alternative approach that uses machine learning for crystal structure prediction. We developed a tool called Crystal Structure Prediction Network (CRYSPNet) that can predict the Bravais lattice, space group, and lattice parameters of an inorganic material based only on its chemical composition. CRYSPNet consists of a series of neural network models whose inputs are predictors that aggregate the properties of the elements constituting the compound. It was trained and validated on more than 100,000 entries from the Inorganic Crystal Structure Database. The tool demonstrates robust predictive capability and outperforms alternative strategies by a large margin. It is available to the public (at https://github.com/AuroraLHT/cryspnet) and can be used either as an independent prediction engine or as a method to generate candidate structures for further computational and/or experimental validation.
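The composition-only featurisation this abstract describes amounts to aggregating elemental properties over a formula and feeding the resulting vector to a classifier. A toy sketch of that idea, not the CRYSPNet code: the two-property element table, the aggregation choices, and the placeholder labels are all illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ELEMENT_PROPS = {  # (atomic number, Pauling electronegativity)
    "Na": (11, 0.93), "Cl": (17, 3.16), "Mg": (12, 1.31), "O": (8, 3.44),
}

def featurise(composition):
    """composition: dict element -> atomic fraction.
    Returns the fraction-weighted mean and the max of each elemental property."""
    props = np.array([ELEMENT_PROPS[el] for el in composition])
    fracs = np.array(list(composition.values()))[:, None]
    return np.concatenate([(props * fracs).sum(axis=0), props.max(axis=0)])

X = np.stack([featurise({"Na": 0.5, "Cl": 0.5}),
              featurise({"Mg": 0.5, "O": 0.5})])
y = ["cubic", "cubic"]  # placeholder Bravais-lattice labels (both rock salt)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X))
```

In the tool itself, separate models handle the Bravais lattice, space group, and lattice parameters, and the predictor set aggregates far more elemental properties than the two shown here.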
Data science and informatics tools have been proliferating recently within the computational materials science and catalysis fields. This proliferation has spurred the creation of various frameworks for automated materials screening, discovery, and design. Underpinning these frameworks are surrogate models with uncertainty estimates on their predictions. These uncertainty estimates are instrumental for determining which materials to screen next, but the computational catalysis field does not yet have a standard procedure for judging their quality. Here we present a suite of figures and performance metrics, derived from the machine learning community, that can be used to judge the quality of such uncertainty estimates. This suite quantitatively probes the accuracy, calibration, and sharpness of a model. We then present a case study in which we judge various methods for predicting density-functional-theory-calculated adsorption energies. Of the methods studied here, we find that the best performer is a model in which a convolutional neural network supplies features to a Gaussian process regressor, which then predicts adsorption energies along with corresponding uncertainty estimates.
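Two of the named criteria are easy to make concrete under a Gaussian assumption: calibration compares the empirical coverage of prediction intervals with their nominal level, and sharpness is the mean of the claimed uncertainties. A minimal sketch with synthetic predictions (not the paper's metric suite, whose exact definitions may differ):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y_true = rng.normal(size=1000)
y_pred = y_true + rng.normal(scale=0.3, size=1000)  # synthetic model output
y_std = np.full(1000, 0.3)                          # claimed uncertainties

# Calibration: across nominal levels p, the fraction of residuals that fall
# inside the central Gaussian interval of width 2 * z(p) * sigma should be p.
levels = np.linspace(0.05, 0.95, 19)
coverage = np.array([np.mean(np.abs(y_true - y_pred)
                             <= norm.ppf(0.5 + p / 2) * y_std)
                     for p in levels])
print(f"mean calibration error: {np.mean(np.abs(coverage - levels)):.3f}")

# Sharpness: how tight the claimed uncertainties are (smaller is sharper);
# it only matters once the model is well calibrated.
print(f"sharpness (mean predicted std): {y_std.mean():.3f}")
```

Here the claimed standard deviation matches the true noise, so the calibration error is near zero; inflating or shrinking `y_std` trades calibration against sharpness.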
Recent applications of neural networks (NNs) to modeling interatomic interactions have shown the learning machines' encouragingly accurate performance for select elemental and multicomponent systems. In this study, we explore the possibility of building a library of NN-based models by introducing hierarchical NN training. In such a stratified procedure, NNs for multicomponent systems are obtained by sequential training from the bottom up: first unaries, then binaries, and so on. Advantages of constructing NN sets with shared parameters include acceleration of the training process and an intact description of the constituent systems. We use automated generation of diverse structure sets for NN training on density-functional-theory-level reference energies. In the test case of Cu, Pd, Ag, Cu-Pd, Cu-Ag, Pd-Ag, and Cu-Pd-Ag systems, NNs trained in the traditional and stratified fashions are found to have essentially identical accuracy for defect energies, phonon dispersions, formation energies, etc. The models' robustness is further illustrated via unconstrained evolutionary structure searches in which the NN is used for the local optimization of crystal unit cells.
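The bottom-up schedule is the heart of the stratified procedure: every multicomponent model reuses, and keeps fixed, the parameters already fitted for its constituent subsystems, which is why the unary descriptions stay intact. An illustrative sketch of that schedule for the Cu-Pd-Ag test case, with a hypothetical `train` function standing in for fitting a NN potential to DFT reference energies:

```python
from itertools import combinations

elements = ["Cu", "Pd", "Ag"]
params = {}

def train(system, frozen):
    """Hypothetical trainer: returns parameters for `system` while keeping
    the parameters of the constituent systems in `frozen` fixed."""
    return {"system": system, "shared": sorted(frozen)}

for n in (1, 2, 3):  # unaries first, then binaries, then the ternary
    for combo in combinations(elements, n):
        # Freeze every already-trained subsystem of this composition.
        frozen = {s: params[s] for s in params
                  if set(s.split("-")) < set(combo)}
        params["-".join(combo)] = train(combo, frozen)

print(list(params))
# ['Cu', 'Pd', 'Ag', 'Cu-Pd', 'Cu-Ag', 'Pd-Ag', 'Cu-Pd-Ag']
```

The traditional alternative the abstract compares against trains each system's full parameter set from scratch, without sharing.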