Structure is the most basic and important property of crystalline solids; it determines, directly or indirectly, most materials characteristics. However, predicting the crystal structure of solids remains a formidable and not fully solved problem. Standard theoretical tools for this task are computationally expensive and at times inaccurate. Here we present an alternative approach that uses machine learning for crystal structure prediction. We developed a tool called Crystal Structure Prediction Network (CRYSPNet) that can predict the Bravais lattice, space group, and lattice parameters of an inorganic material based only on its chemical composition. CRYSPNet consists of a series of neural network models whose input features aggregate the properties of the elements constituting the compound. It was trained and validated on more than 100,000 entries from the Inorganic Crystal Structure Database. The tool demonstrates robust predictive capability and outperforms alternative strategies by a large margin. Made available to the public (at https://github.com/AuroraLHT/cryspnet), it can be used either as an independent prediction engine or as a method to generate candidate structures for further computational and/or experimental validation.
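To make the featurize-then-classify pattern described above concrete, here is a minimal sketch. It is not the authors' code: the two-property element table, the mean/max aggregation, and the layer sizes are all illustrative placeholders standing in for the element-property predictors and the series of models the abstract describes.

```python
# Hedged sketch of the CRYSPNet idea: aggregate elemental properties into a
# composition feature vector, then classify the Bravais lattice with a small
# neural network. All names and sizes here are hypothetical.
import torch
import torch.nn as nn

# Hypothetical per-element properties (e.g., atomic radius, electronegativity).
ELEMENT_PROPS = {"Na": [1.86, 0.93], "Cl": [0.79, 3.16]}

def featurize(composition: dict) -> torch.Tensor:
    """Aggregate element properties by stoichiometry-weighted mean and max."""
    props = torch.tensor([ELEMENT_PROPS[el] for el in composition])
    weights = torch.tensor([composition[el] for el in composition], dtype=torch.float)
    weights = weights / weights.sum()
    mean = (weights[:, None] * props).sum(dim=0)
    return torch.cat([mean, props.max(dim=0).values])  # shape: (4,)

class BravaisNet(nn.Module):
    """Feed-forward classifier over the 14 Bravais lattices."""
    def __init__(self, n_features: int = 4, n_classes: int = 14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

x = featurize({"Na": 1, "Cl": 1}).unsqueeze(0)
logits = BravaisNet()(x)  # downstream models would condition on this prediction
```

In the actual tool, the Bravais-lattice prediction feeds further models for the space group and lattice parameters; the sketch shows only the first stage.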
We developed a density functional theory (DFT)-free approach for crystal structure prediction by combining a graph network (GN) with Bayesian optimization (BO). The GN is adopted to establish a correlation model between crystal structure and formation enthalpy; BO is used to accelerate the search for the crystal structure with optimal formation enthalpy. This combined GN and BO approach to crystal Structure Searching (GN-BOSS) can, in principle, predict crystal structures at given chemical compositions without additional constraints on cell shape or lattice symmetry. The applicability and efficiency of the GN-BOSS approach are then verified by solving the classical Ph-vV challenge. It correctly predicts the crystal structures of 24 binary compounds from scratch, at an average computational cost of ~30 minutes each on a single CPU core. The GN-BOSS approach may open a new avenue to data-driven crystal structure prediction without expensive DFT calculations.
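The surrogate-plus-acquisition loop at the core of this approach can be sketched as follows. This is a minimal stand-in, not the GN-BOSS implementation: `gn_energy` is a hypothetical toy function in place of the trained graph-network enthalpy model, candidate structures are encoded as plain vectors, and a Gaussian process with a lower-confidence-bound acquisition replaces whatever surrogate and acquisition the authors actually use.

```python
# Hedged sketch of a BO loop where a learned energy model replaces DFT.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gn_energy(x: np.ndarray) -> float:
    """Placeholder for the graph-network formation-enthalpy model (toy landscape)."""
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(0)
pool = rng.random((500, 6))                         # candidate structure encodings
X = list(pool[:5])
y = [gn_energy(x) for x in X]                       # initial observations

for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(pool, return_std=True)
    lcb = mu - 2.0 * sigma                          # lower-confidence-bound acquisition
    x_next = pool[int(np.argmin(lcb))]              # most promising candidate
    X.append(x_next)
    y.append(gn_energy(x_next))

print("best formation enthalpy found:", min(y))
```

Because each "evaluation" is a cheap model call rather than a DFT relaxation, the loop can explore many candidates per minute, which is the source of the reported ~30-minute-per-compound cost.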
Graph neural networks (GNNs) have been shown to provide substantial performance improvements for representing and modeling atomistic materials compared with descriptor-based machine-learning models. While most existing GNN models for atomistic predictions are based on atomic distance information, they do not explicitly incorporate bond angles, which are critical for distinguishing many atomic structures. Furthermore, many material properties are known to be sensitive to slight changes in bond angles. We present an Atomistic Line Graph Neural Network (ALIGNN), a GNN architecture that performs message passing on both the interatomic bond graph and its line graph, whose edges correspond to bond angles. We demonstrate that angle information can be explicitly and efficiently included, leading to improved performance on multiple atomistic prediction tasks. We use ALIGNN models to predict 52 solid-state and molecular properties available in the JARVIS-DFT, Materials Project, and QM9 databases. ALIGNN can outperform some previously reported GNN models on atomistic prediction tasks by up to 85% in accuracy, with better or comparable model training speed.
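The line-graph construction is the key idea: nodes of the line graph are the bonds of the atomistic graph, and its edges connect bonds that share an atom, so each line-graph edge naturally carries a bond angle. A minimal sketch (not the ALIGNN code, which builds these graphs from periodic crystals and runs learned message passing on both) using a toy water-like molecule:

```python
# Hedged sketch: build line-graph edges with bond-angle features from a bond list.
import numpy as np
from itertools import combinations

positions = np.array([[0.0, 0.0, 0.0],     # O
                      [0.96, 0.0, 0.0],    # H
                      [-0.24, 0.93, 0.0]]) # H
bonds = [(0, 1), (0, 2)]  # edges of the atom graph

def bond_angle(i, j, k, pos):
    """Angle at shared atom j between bonds (j, i) and (j, k), in degrees."""
    u, v = pos[i] - pos[j], pos[k] - pos[j]
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Line-graph edges: pairs of bonds sharing an atom, featurized by their angle.
line_edges = []
for (a, b), (c, d) in combinations(bonds, 2):
    shared = set((a, b)) & set((c, d))
    if shared:
        j = shared.pop()
        i = a if a != j else b
        k = c if c != j else d
        line_edges.append(((a, b), (c, d), bond_angle(i, j, k, positions)))

print(line_edges)  # ALIGNN alternates message passing over both graph levels
```

Distance features live on bond-graph edges and angle features on line-graph edges, which is how the architecture exposes angles to the network explicitly rather than hoping they are inferred from distances.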
Functional properties of nanomaterials strongly depend on their surface atomic structure, which often differs substantially from the bulk structure, exhibiting surface reconstructions and relaxations. However, most surface characterization methods are either limited to 2-dimensional measurements or do not reach true 3D atomic-scale resolution, and single-atom-level determination of the 3D surface atomic structure of general 3D nanomaterials remains elusive. Here we show the measurement of the 3D atomic structure of a Pt nanoparticle at 15 pm precision, aided by deep learning-based missing-data retrieval. The surface atomic structure was reliably measured, and we find that the <100> and <111> facets contribute differently to the surface strain, resulting in an anisotropic strain distribution as well as a compressive support boundary effect. The capability of single-atom-level surface characterization will not only deepen our understanding of the functional properties of nanomaterials but also open a new door to fine tailoring of their performance.
To use neural networks in safety-critical settings, it is paramount to provide assurances on their runtime operation. Recent work on ReLU networks has sought to verify whether inputs belonging to a bounded box can ever yield an undesirable output. Input-splitting procedures, a particular type of verification mechanism, do so by recursively partitioning the input set into smaller sets. The efficiency of these methods is largely determined by the number of splits the box must undergo before the property can be verified. In this work, we propose a new technique based on shadow prices that fully exploits the information in the problem, yielding a more efficient generation of splits than the state of the art. Results on the Airborne Collision Avoidance System (ACAS) benchmark verification tasks show a considerable reduction in the number of partitions generated, which substantially reduces computation times. These results open the door to improved verification methods for a wide variety of machine learning applications, including vision and control.
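The input-splitting skeleton this improves on can be sketched as follows. Note the hedging: the paper's contribution is the split heuristic (shadow prices, i.e., dual variables from an LP relaxation); in this sketch `choose_split_dim` is a naive widest-dimension stand-in, and a simple interval bound propagation certifies individual boxes for a tiny hand-made two-layer ReLU network.

```python
# Hedged sketch of recursive input-splitting verification for a ReLU network.
import numpy as np

W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -0.5])
w2 = np.array([1.0, 1.0])
b2 = -3.0  # property to verify: network output < 0 everywhere on the box

def output_upper_bound(lo, hi):
    """Interval bound propagation through Linear -> ReLU -> Linear."""
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    c1 = W1 @ mid + b1
    r1 = np.abs(W1) @ rad
    h_lo, h_hi = np.maximum(c1 - r1, 0), np.maximum(c1 + r1, 0)  # ReLU bounds
    return np.sum(np.maximum(w2, 0) * h_hi + np.minimum(w2, 0) * h_lo) + b2

def choose_split_dim(lo, hi):
    return int(np.argmax(hi - lo))  # shadow prices would replace this heuristic

def verify(lo, hi, depth=0, max_depth=20):
    if output_upper_bound(lo, hi) < 0:
        return True                    # property certified on this box
    if depth == max_depth:
        return False                   # give up (possible violation)
    d = choose_split_dim(lo, hi)
    m = (lo[d] + hi[d]) / 2
    lo2, hi2 = lo.copy(), hi.copy()
    hi2[d], lo2[d] = m, m              # split the box in two along dimension d
    return (verify(lo, hi2, depth + 1, max_depth)
            and verify(lo2, hi, depth + 1, max_depth))

print(verify(np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```

Since total work grows with the number of boxes generated, a split heuristic that better anticipates which dimension tightens the bounds, as the shadow-price technique does, directly reduces verification time.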
Wavelets are well known for data compression, yet have rarely been applied to the compression of neural networks. This paper shows how the fast wavelet transform can be used to compress linear layers in neural networks. Linear layers still occupy a significant portion of the parameters in recurrent neural networks (RNNs). Through our method, we can learn both the wavelet bases and corresponding coefficients to efficiently represent the linear layers of RNNs. Our wavelet compressed RNNs have significantly fewer parameters yet still perform competitively with the state-of-the-art on synthetic and real-world RNN benchmarks. Wavelet optimization adds basis flexibility, without large numbers of extra weights. Source code is available at https://github.com/v0lta/Wavelet-network-compression.
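The core parameterization can be sketched as follows. This is a simplified stand-in relative to the paper, which learns the wavelet bases themselves: here a fixed orthonormal Haar matrix H plays the basis role and only per-coefficient gains are learned, giving a square "linear" layer y = Hᵀ diag(g) H x with O(n) instead of O(n²) parameters.

```python
# Hedged sketch of a wavelet-parameterized linear layer (fixed Haar basis).
import math
import torch
import torch.nn as nn

def haar_matrix(n: int) -> torch.Tensor:
    """Orthonormal Haar wavelet matrix for n a power of two."""
    h = torch.ones(1, 1)
    while h.shape[0] < n:
        top = torch.kron(h, torch.tensor([1.0, 1.0]))
        bot = torch.kron(torch.eye(h.shape[0]), torch.tensor([1.0, -1.0]))
        h = torch.cat([top, bot]) / math.sqrt(2.0)
    return h

class WaveletLinear(nn.Module):
    """Square layer parameterized by per-coefficient gains in wavelet space."""
    def __init__(self, n: int):
        super().__init__()
        self.register_buffer("H", haar_matrix(n))
        self.gain = nn.Parameter(torch.ones(n))   # n parameters instead of n*n

    def forward(self, x):
        coeffs = x @ self.H.T                     # analysis transform
        return (coeffs * self.gain) @ self.H      # scale, then synthesis

layer = WaveletLinear(8)
y = layer(torch.randn(4, 8))
print(y.shape, sum(p.numel() for p in layer.parameters()))  # 8 params, not 64
```

In an RNN this structure would replace the dense recurrent and input matrices; the fast wavelet transform also lets the matrix products above run in O(n log n) rather than O(n²) when implemented with the lifting scheme instead of explicit matrices.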