
Embedded Atom Neural Network Potentials: Efficient and Accurate Machine Learning with a Physically Inspired Representation

Added by Yaolong Zhang
Publication date: 2019
Field: Physics
Language: English





We propose a simple but efficient and accurate machine learning (ML) model for developing high-dimensional potential energy surfaces. This so-called embedded atom neural network (EANN) approach is inspired by the well-known empirical embedded atom method (EAM) model used in the condensed phase. It simply replaces the scalar embedded atom density in EAM with a Gaussian-type orbital based density vector, and represents the complex relationship between the embedded density vector and the atomic energy by neural networks. We demonstrate that the EANN approach is as accurate as several established ML models in representing both large molecular and extended periodic systems, yet with many fewer parameters and configurations. It is highly efficient because it implicitly contains three-body information without an explicit sum over the conventional, costly angular descriptors. With high accuracy and efficiency, EANN potentials can vastly accelerate molecular dynamics and spectroscopic simulations of complex systems at the ab initio level.
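To make the construction above concrete, the following minimal NumPy sketch builds a rotation-invariant embedded density per atom: squared linear combinations of Gaussian-type orbitals over neighbors, summed over Cartesian angular channels so that three-body information enters implicitly. The radial parameters (`alpha`, `r_s`), the single radial channel, and the function names are illustrative assumptions, not the published EANN parameterization; a real model uses several radial channels and feeds the descriptor to per-element neural networks that output atomic energies.

```python
import numpy as np
from itertools import combinations_with_replacement
from math import factorial

def monomial_exponents(l):
    # all Cartesian exponent triples (lx, ly, lz) with lx + ly + lz = l
    exps = []
    for combo in combinations_with_replacement(range(3), l):
        e = [0, 0, 0]
        for axis in combo:
            e[axis] += 1
        exps.append(tuple(e))
    return exps

def eann_descriptor(coords, rc=6.0, alpha=1.0, r_s=0.0, lmax=2):
    # embedded density per atom: squared contractions of Gaussian-type
    # orbitals over neighbors, one rotation invariant per angular channel l
    coords = np.asarray(coords, float)
    n = len(coords)
    desc = np.zeros((n, lmax + 1))
    for i in range(n):
        diff = np.delete(coords, i, axis=0) - coords[i]
        r = np.linalg.norm(diff, axis=1)
        keep = r < rc
        diff, r = diff[keep], r[keep]
        fc = 0.5 * (np.cos(np.pi * r / rc) + 1.0)      # smooth cutoff
        radial = np.exp(-alpha * (r - r_s) ** 2) * fc  # GTO radial part
        unit = diff / r[:, None]
        for l in range(lmax + 1):
            rho = 0.0
            for e in monomial_exponents(l):
                # multinomial weight makes the sum equal the squared norm
                # of the symmetric rank-l moment tensor (hence invariant)
                coef = factorial(l) // np.prod([factorial(k) for k in e])
                comp = np.sum(radial * np.prod(unit ** np.array(e), axis=1))
                rho += coef * comp ** 2
            desc[i, l] = rho
    return desc
```

Because each channel is the squared norm of a symmetric moment tensor, the descriptor is invariant under rigid rotations of the whole structure while still distinguishing angular arrangements of neighbors.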



Related research

Yaolong Zhang, Junfan Xia, 2021
Recent advances in machine-learned interatomic potentials largely benefit from atomistic representations and locally invariant many-body descriptors. It was, however, recently argued that including three- (or even four-) body features is insufficient to distinguish certain local structures. Utilizing an embedded density descriptor constructed from linear combinations of neighboring atomic orbitals, and realizing that each orbital coefficient physically depends on its own local environment, we propose a recursively embedded atom neural network model. We formally prove that this model can efficiently incorporate complete many-body correlations without explicitly computing high-order terms. This model not only successfully addresses challenges regarding local completeness and nonlocality in representative systems, but also provides an easy and general way to give local many-body descriptors a message-passing form without changing their basic structures.
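The recursive-embedding idea can be sketched in a few lines: each pass recomputes a scalar atomic density in which the neighbor weights come from the previous pass, so information propagates beyond the cutoff in a message-passing fashion. Everything here is a toy assumption for illustration (radial-only density, `tanh` as a stand-in for trained coefficient networks, hypothetical parameter values), not the published model.

```python
import numpy as np

def recursive_density(coords, rc=6.0, alpha=1.0, passes=2):
    # toy recursively embedded density: neighbor weights for each pass
    # are a nonlinear function of the previous pass's densities
    coords = np.asarray(coords, float)
    n = len(coords)
    r = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    np.fill_diagonal(r, np.inf)            # exclude self-interaction
    fc = np.where(r < rc,
                  0.5 * (np.cos(np.pi * np.minimum(r, rc) / rc) + 1.0),
                  0.0)                     # smooth cutoff
    g = np.exp(-alpha * r ** 2) * fc       # pairwise radial basis
    weights = np.ones(n)                   # zeroth pass: plain atom density
    for _ in range(passes):
        rho = g @ weights                  # density at each atom
        weights = np.tanh(rho)             # toy "orbital coefficient" update
    return rho
```

With `passes=1` this reduces to an ordinary local density; each extra pass lets an atom's descriptor feel environments one cutoff radius further away without any explicit high-order terms.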
Machine learning techniques allow a direct mapping from atomic positions and nuclear charges to the potential energy surface with almost ab initio accuracy and the computational efficiency of empirical potentials. In this work we propose a machine learning method for constructing high-dimensional potential energy surfaces based on feed-forward neural networks. As input to the neural network we propose an extendable, invariant local molecular descriptor constructed from geometric moments. Its formulation via pairwise distance vectors and tensor contractions allows a very efficient implementation on graphics processing units (GPUs). The atomic species is encoded in the molecular descriptor, which allows a single neural network to be trained on all atomic species in the data set. We demonstrate that the accuracy of the developed approach in representing both chemical and configurational spaces is comparable to that of several established machine learning models. Owing to its high accuracy and efficiency, the proposed machine-learned potentials can be used for further tasks, for example the optimization of molecular geometries, the calculation of rate constants, or molecular dynamics.
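The moment-based descriptor lends itself to a fully vectorized formulation: build moment tensors from weighted pairwise unit vectors, then reduce them to rotation invariants by tensor contractions. The sketch below (with an arbitrary `exp(-r)` radial weight and a hand-picked set of contractions, both assumptions for illustration) uses `einsum`, which maps directly onto batched GPU tensor operations.

```python
import numpy as np

def moment_invariants(coords, rc=5.0):
    # geometric moments from pairwise distance vectors, reduced to
    # rotation-invariant per-atom features via tensor contractions
    coords = np.asarray(coords, float)
    d = coords[:, None, :] - coords[None, :, :]   # (n, n, 3)
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)                   # exclude self-pairs
    w = np.exp(-r) * (r < rc)                     # toy radial weight
    u = d / r[..., None]                          # unit distance vectors
    m1 = np.einsum('ij,ija->ia', w, u)            # rank-1 moment (n, 3)
    m2 = np.einsum('ij,ija,ijb->iab', w, u, u)    # rank-2 moment (n, 3, 3)
    f0 = w.sum(axis=1)                            # scalar moment
    f1 = np.einsum('ia,ia->i', m1, m1)            # |m1|^2
    f2 = np.einsum('iab,iab->i', m2, m2)          # full contraction of m2
    f3 = np.einsum('ia,iab,ib->i', m1, m2, m1)    # mixed contraction
    return np.stack([f0, f1, f2, f3], axis=1)
```

Because every feature is a complete contraction of moment tensors, the output is invariant under rigid rotations while the intermediate tensors remain cheap, dense array operations.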
Machine learning has revolutionized high-dimensional representations of molecular properties such as the potential energy. However, machine learning models targeting tensorial properties, which are rotationally covariant, remain scarce. Here, we propose tensorial neural network (NN) models to learn both tensorial response and transition properties, in which atomic coordinate vectors are multiplied with scalar NN outputs or their derivatives to preserve the rotationally covariant symmetry. This strategy keeps the structural descriptors symmetry invariant, so the resulting tensorial NN models are as efficient as their scalar counterparts. We validate the performance and universality of this approach by learning response properties of water oligomers and liquid water, and the transition dipole moment of a model structural unit of proteins. Machine-learned tensorial models have enabled efficient simulations of vibrational spectra of liquid water and ultraviolet spectra of realistic proteins, promising feasible and accurate spectroscopic simulations for biomolecules and materials.
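The core trick described above is compact enough to sketch: a rotationally *invariant* per-atom scalar (here a toy function of interatomic distances standing in for a trained scalar NN; the `scalar_weight` argument is hypothetical) multiplies each atomic position vector, so the summed output transforms covariantly like a vector, e.g. a dipole.

```python
import numpy as np

def covariant_dipole(coords, scalar_weight):
    # invariant scalar per atom times its coordinate vector -> a vector
    # output that co-rotates with the structure; like a point-charge
    # dipole, it is origin-dependent unless the scalars sum to zero
    coords = np.asarray(coords, float)
    r = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    np.fill_diagonal(r, np.inf)                 # exclude self-distances
    q = scalar_weight(np.exp(-r).sum(axis=1))   # invariant per-atom scalar
    return (q[:, None] * coords).sum(axis=0)    # covariant vector output
```

Since `q` depends only on distances, rotating the input coordinates rotates the output by exactly the same rotation matrix, which is the covariance property the tensorial NN construction is built to guarantee.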
The applications of machine learning techniques to chemistry and materials science become more numerous by the day. The main challenge is to devise representations of atomic systems that are at the same time complete and concise, so as to reduce the number of reference calculations needed to predict the properties of different types of materials reliably. This has led to a proliferation of alternative ways to convert an atomic structure into an input for a machine-learning model. We introduce an abstract definition of chemical environments that is based on a smoothed atomic density, using a bra-ket notation to emphasize basis-set independence and to highlight the connections with some popular choices of representations for describing atomic systems. The correlations between the spatial distribution of atoms and their chemical identities are computed as inner products between these feature kets, which can be given an explicit representation in terms of the expansion of the atom density on orthogonal basis functions, which is equivalent to the smooth overlap of atomic positions (SOAP) power spectrum, but also in real space, corresponding to $n$-body correlations of the atom density. This formalism lays the foundations for a more systematic tuning of the behavior of the representations, by introducing operators that represent the correlations between structure, composition, and the target properties. It provides a unifying picture of recent developments in the field and indicates a way forward towards more effective and computationally affordable machine-learning schemes for molecules and materials.
We apply the atom-atom potentials to molecular crystals of iron(II) complexes with bulky organic ligands. The crystals under study are formed by low-spin or high-spin molecules of Fe(phen)$_{2}$(NCS)$_{2}$ (phen = 1,10-phenanthroline), Fe(btz)$_{2}$(NCS)$_{2}$ (btz = 5,5$^{\prime}$,6,6$^{\prime}$-tetrahydro-4\textit{H},4$^{\prime}$\textit{H}-2,2$^{\prime}$-bi-1,3-thiazine), and Fe(bpz)$_{2}$(bipy) (bpz = dihydrobis(1-pyrazolyl)borate, and bipy = 2,2$^{\prime}$-bipyridine). All molecular geometries are taken from X-ray experimental data and assumed to be frozen. The unit cell dimensions and angles, the positions of the centers of mass of the molecules, and the orientations of the molecules corresponding to the minimum energy at 1 atm and 1 GPa are calculated. The optimized crystal structures are in good agreement with the experimental data. Sources of the residual discrepancies between the calculated and experimental structures are discussed. The intermolecular contributions to the enthalpy of the spin transitions are found to be comparable with the total experimental values, demonstrating that the method of atom-atom potentials is very useful for modeling organometallic crystals undergoing spin transitions.
