
Active Learning A Neural Network Model For Gold Clusters & Bulk From Sparse First Principles Training Data

Added by: Sukriti Manna
Publication date: 2020
Field: Physics
Language: English





Small metal clusters are of fundamental scientific interest and of tremendous significance in catalysis. These nanoscale clusters display diverse geometries and structural motifs depending on the cluster size; knowledge of these size-dependent structural motifs and their dynamical evolution has been of longstanding interest. Classical molecular dynamics (MD) simulations typically employ predefined functional forms, which limits their ability to capture such complex size-dependent structural and dynamical transformations. Neural network (NN) based potentials represent flexible alternatives; in principle, well-trained NN potentials can provide a high level of flexibility, transferability, and accuracy on par with the reference model used for training. A major challenge, however, is that NN models are interpolative and require large quantities of training data to ensure that the model adequately samples the energy landscape both near and far from equilibrium. Here, we introduce an active learning (AL) scheme that trains an NN model on-the-fly with a minimal amount of first-principles training data. Our AL workflow is initiated with a sparse training dataset (1 to 5 data points) and is updated on-the-fly via a Nested Ensemble Monte Carlo scheme that iteratively queries the energy landscape in regions of failure and updates the training pool to improve the network performance. Using a representative system of gold clusters, we demonstrate that our AL workflow can train an NN with ~500 total reference calculations. Our NN predictions are within 30 meV/atom and 40 meV/Å of the reference DFT calculations. Moreover, our AL-NN model also adequately captures the various size-dependent structural and dynamical properties of gold clusters, in excellent agreement with DFT calculations and available experiments.
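As a rough illustration of the query-and-retrain idea behind such a workflow (not the paper's actual Nested Ensemble Monte Carlo scheme), the following minimal Python sketch starts from a two-point pool, uses ensemble disagreement to flag regions of failure on a toy one-dimensional "reference" that stands in for DFT, and adds the single most uncertain candidate to the training pool at each iteration; the model form, ensemble size, and threshold are all illustrative assumptions.

import numpy as np

def reference_energy(x):
    # Toy stand-in for the expensive first-principles reference (a DFT call in the paper).
    return np.sin(3.0 * x) + 0.5 * x**2

def fit_member(train_x, train_y, rng, degree=6):
    # One "ensemble member": a polynomial fit to a noise-perturbed copy of the pool,
    # so different members disagree away from the training data.
    noisy_y = train_y + rng.normal(0.0, 0.02, size=train_y.shape)
    order = min(degree, len(train_x) - 1)
    return np.poly1d(np.polyfit(train_x, noisy_y, order))

rng = np.random.default_rng(0)
pool_x = np.array([-1.0, 1.0])            # sparse initial training pool (2 points)
pool_y = reference_energy(pool_x)
candidates = np.linspace(-2.0, 2.0, 400)  # proposed configurations (Monte Carlo moves in the paper)

for _ in range(25):
    ensemble = [fit_member(pool_x, pool_y, rng) for _ in range(8)]
    preds = np.stack([m(candidates) for m in ensemble])
    uncertainty = preds.std(axis=0)        # ensemble disagreement flags regions of failure
    if uncertainty.max() < 0.05:           # stop once the ensemble agrees everywhere
        break
    worst = candidates[np.argmax(uncertainty)]
    pool_x = np.append(pool_x, worst)      # query the reference only where the model fails
    pool_y = np.append(pool_y, reference_energy(worst))

print("reference calls used:", len(pool_x))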




Read More

We introduce a coarse-grained deep neural network model (CG-DNN) for liquid water that utilizes 50 rotationally and translationally invariant coordinates and is trained exclusively against energies of ~30,000 bulk water configurations. Our CG-DNN potential accurately predicts both the energies and molecular forces of water, within 0.9 meV/molecule and 54 meV/Å of a reference (coarse-grained bond-order potential) model. The CG-DNN water model also provides good predictions of several structural, thermodynamic, and temperature-dependent properties of liquid water, with values close to those obtained from the reference model. More importantly, CG-DNN captures the well-known density anomaly of liquid water observed in experiments. Our work lays the groundwork for a scheme in which existing empirical water models can be utilized to develop a fully flexible neural network framework that can subsequently be trained against sparse data from high-fidelity, albeit expensive, beyond-DFT calculations.
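To make the notion of rotationally and translationally invariant coordinates concrete, the short Python sketch below builds per-particle Gaussian radial symmetry functions from pairwise distances, a generic Behler-Parrinello-style construction rather than the specific 50-coordinate set used for the CG-DNN water model; the eta values and cutoff are illustrative.

import numpy as np

def radial_descriptors(positions, etas=(0.5, 1.0, 2.0), r_cut=6.0):
    # Per-atom Gaussian radial symmetry functions built only from pairwise
    # distances, so they are unchanged under rigid translations and rotations.
    n = len(positions)
    feats = np.zeros((n, len(etas)))
    for i in range(n):
        rij = np.linalg.norm(positions - positions[i], axis=1)
        rij = rij[(rij > 1e-8) & (rij < r_cut)]
        fc = 0.5 * (np.cos(np.pi * rij / r_cut) + 1.0)  # smooth cutoff function
        for k, eta in enumerate(etas):
            feats[i, k] = np.sum(np.exp(-eta * rij**2) * fc)
    return feats

# The descriptor of a random configuration is unchanged by a rigid shift.
pos = np.random.default_rng(1).normal(size=(8, 3))
shifted = pos + np.array([3.0, -1.0, 2.0])
print(np.allclose(radial_descriptors(pos), radial_descriptors(shifted)))  # True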
Machine learning models, trained on data from ab initio quantum simulations, are yielding molecular dynamics potentials with unprecedented accuracy. One limiting factor is the quantity of available training data, which can be expensive to obtain. A quantum simulation often provides all atomic forces, in addition to the total energy of the system. These forces provide much more information than the energy alone. It may appear that training a model to this large quantity of force data would introduce significant computational costs; in fact, training to all available force data should only be a few times more expensive than training to energies alone. Here, we present a new algorithm for efficient force training and benchmark its accuracy by training to forces from real-world datasets for organic chemistry and bulk aluminum.
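A common way to set up such combined training (a generic construction, not necessarily the authors' algorithm) is to obtain predicted forces as the negative gradient of the model energy via automatic differentiation and penalize energy and force residuals jointly; the PyTorch sketch below uses a toy network, synthetic data, and an arbitrary force weight purely for illustration.

import torch

# Toy energy model: maps flattened atomic coordinates to a scalar energy.
model = torch.nn.Sequential(
    torch.nn.Linear(12, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def energy_force_loss(coords, e_ref, f_ref, w_f=10.0):
    coords = coords.requires_grad_(True)
    e_pred = model(coords.reshape(coords.shape[0], -1)).squeeze(-1)
    # Forces are minus the gradient of the predicted energy w.r.t. coordinates;
    # create_graph=True keeps the graph so the force error is also trainable.
    f_pred = -torch.autograd.grad(e_pred.sum(), coords, create_graph=True)[0]
    return torch.mean((e_pred - e_ref) ** 2) + w_f * torch.mean((f_pred - f_ref) ** 2)

# Synthetic batch: 5 configurations of 4 atoms with reference energies and forces.
coords = torch.randn(5, 4, 3)
e_ref, f_ref = torch.randn(5), torch.randn(5, 4, 3)
for _ in range(10):
    opt.zero_grad()
    loss = energy_force_loss(coords, e_ref, f_ref)
    loss.backward()
    opt.step()
print(float(loss))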
Prediction of material properties from first principles is often a computationally expensive task. Recently, artificial neural networks and other machine learning approaches have been successfully employed to obtain accurate models at a low computational cost by leveraging existing example data. Here, we present a software package, Properties from Artificial Neural Network Architectures (PANNA), that provides a comprehensive toolkit for creating neural network models for atomistic systems. Besides the core routines for neural network training, it includes a data parser, a descriptor builder, and a force-field generator suitable for integration within molecular dynamics packages. PANNA offers a variety of activation and cost functions, regularization methods, as well as the possibility of using fully connected networks with a custom size for each atomic species. PANNA benefits from the optimization and hardware flexibility of the underlying TensorFlow engine, which allows it to be used on multiple CPU/GPU/TPU systems, making it possible to develop and optimize neural network models based on large datasets.
Structural, electronic, vibrational, and dielectric properties of LaBGeO$_5$ with the stillwellite structure are determined based on ab initio density functional theory. The theoretically relaxed structure is found to agree well with the existing experimental data, with a deviation of less than 0.2%. Both the density of states and the electronic band structure are calculated, showing five distinct groups of valence bands. Furthermore, the Born effective charge, the dielectric permittivity tensors, and the vibrational frequencies at the center of the Brillouin zone are all obtained. Compared to existing model calculations, the vibrational frequencies are found to be in much better agreement with the published experimental infrared and Raman data, with absolute and relative rms values of 6.04 cm$^{-1}$ and 1.81%, respectively. Consequently, numerical values for both the parallel and perpendicular components of the permittivity tensor are established as 3.55 and 3.71 (10.34 and 12.28), respectively, for the high-(low-)frequency limit.
We give a detailed presentation of the theory and numerical implementation of an expression for the adiabatic energy flux in extended systems, derived from density-functional theory. This expression can be used to estimate the heat conductivity from equilibrium ab initio molecular dynamics, using the Green-Kubo linear response theory of transport coefficients. Our expression is implemented in an open-source component of the QE suite of computer codes for quantum mechanical materials modelling, which is being made publicly available.
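For context, the Green-Kubo relation referred to above expresses the thermal conductivity as a time integral of the heat-flux autocorrelation function; in one common isotropic convention it reads $\kappa = \frac{1}{3 V k_B T^2} \int_0^{\infty} \langle \mathbf{J}(t) \cdot \mathbf{J}(0) \rangle \, dt$, where $\mathbf{J}$ is the total heat flux of a simulation cell of volume $V$ at temperature $T$ and $k_B$ is the Boltzmann constant.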
