
PANNA: Properties from Artificial Neural Network Architectures

Published by Emine Kucukbenli
Publication date: 2019
Research field: Physics
Paper language: English





Prediction of material properties from first principles is often a computationally expensive task. Recently, artificial neural networks and other machine learning approaches have been successfully employed to obtain accurate models at a low computational cost by leveraging existing example data. Here, we present a software package, Properties from Artificial Neural Network Architectures (PANNA), that provides a comprehensive toolkit for creating neural network models for atomistic systems. Besides the core routines for neural network training, it includes a data parser, a descriptor builder, and a force-field generator suitable for integration within molecular dynamics packages. PANNA offers a variety of activation and cost functions and regularization methods, as well as the possibility of using fully-connected networks of custom size for each atomic species. PANNA benefits from the optimization and hardware flexibility of the underlying TensorFlow engine, which allows it to be used on multiple CPU/GPU/TPU systems, making it possible to develop and optimize neural network models based on large datasets.
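To make the architecture concrete, the following is a minimal sketch, not PANNA's actual API, of the kind of model the abstract describes: one fully-connected TensorFlow sub-network per atomic species, each with its own layer sizes, whose per-atom energies are summed into a total energy. The descriptor length and layer widths here are illustrative values.

```python
import tensorflow as tf

N_DESCRIPTORS = 128  # hypothetical per-atom descriptor length

def make_species_net(hidden_sizes, name):
    """One fully-connected sub-network mapping a descriptor to an atomic energy."""
    layers = [tf.keras.layers.Dense(n, activation="tanh") for n in hidden_sizes]
    layers.append(tf.keras.layers.Dense(1))  # scalar atomic energy
    return tf.keras.Sequential(layers, name=name)

# A custom architecture per species, as the text describes.
species_nets = {
    "H": make_species_net([32, 32], "H_net"),
    "O": make_species_net([64, 32], "O_net"),
}

def total_energy(descriptors_by_species):
    """Sum atomic energies over all atoms of every species in one configuration."""
    e = tf.constant(0.0)
    for species, g in descriptors_by_species.items():
        # g: tensor of shape (n_atoms_of_species, N_DESCRIPTORS)
        e += tf.reduce_sum(species_nets[species](g))
    return e

# Example: a water molecule with dummy descriptors standing in for real ones.
inputs = {"H": tf.random.normal((2, N_DESCRIPTORS)),
          "O": tf.random.normal((1, N_DESCRIPTORS))}
print(float(total_energy(inputs)))
```

Summing per-atom contributions makes the energy extensive in system size, which is what lets a model of this shape be exported as a force field for molecular dynamics.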




Read also

Small metal clusters are of fundamental scientific interest and of tremendous significance in catalysis. These nanoscale clusters display diverse geometries and structural motifs depending on the cluster size; knowledge of these size-dependent structural motifs and their dynamical evolution has been of longstanding interest. Classical MD simulations typically employ predefined functional forms, which limits their ability to capture such complex size-dependent structural and dynamical transformations. Neural network (NN) based potentials represent flexible alternatives; in principle, well-trained NN potentials can provide a high level of flexibility and transferability, with accuracy on par with the reference model used for training. A major challenge, however, is that NN models are interpolative and require large quantities of training data to ensure that the model adequately samples the energy landscape both near and far from equilibrium. Here, we introduce an active learning (AL) scheme that trains an NN model on-the-fly with a minimal amount of first-principles training data. Our AL workflow is initiated with a sparse training dataset (1 to 5 data points) and is updated on-the-fly via a Nested Ensemble Monte Carlo scheme that iteratively queries the energy landscape in regions of failure and updates the training pool to improve the network performance. Using a representative system of gold clusters, we demonstrate that our AL workflow can train an NN with ~500 total reference calculations. Our NN predictions are within 30 meV/atom and 40 meV/Å of the reference DFT calculations. Moreover, our AL-NN model also adequately captures the various size-dependent structural and dynamical properties of gold clusters, in excellent agreement with DFT calculations and available experiments.
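The loop below is an illustrative sketch of the generic ensemble-disagreement active learning cycle this abstract describes, not the authors' Nested Ensemble Monte Carlo code: noisy polynomial fits on a toy 1-D landscape stand in for the NN ensemble, a uniform grid scan stands in for Monte Carlo sampling, and the disagreement threshold is a made-up number.

```python
import numpy as np

rng = np.random.default_rng(0)

def reference_energy(x):
    """Stand-in for an expensive first-principles calculation."""
    return np.sin(3.0 * x) + 0.5 * x * x

def train_ensemble(xs, ys, n_models=4):
    """Fit an 'ensemble' of polynomial models, diversified by target noise."""
    deg = min(5, len(xs) - 1)
    return [np.polynomial.Polynomial.fit(xs, ys + rng.normal(0, 0.02, len(ys)), deg)
            for _ in range(n_models)]

xs = list(rng.uniform(-2, 2, size=3))        # sparse initial training pool
ys = [reference_energy(x) for x in xs]

for _ in range(25):                          # active-learning iterations
    ensemble = train_ensemble(np.array(xs), np.array(ys))
    grid = np.linspace(-2, 2, 200)           # stand-in for MC-sampled configs
    spread = np.std([m(grid) for m in ensemble], axis=0)
    if spread.max() < 0.05:                  # ensemble agrees everywhere: stop
        break
    worst = grid[np.argmax(spread)]          # region where the model fails
    xs.append(worst)                         # query the reference there and
    ys.append(reference_energy(worst))       # grow the training pool

print(f"reference calls used: {len(xs)}")
```

The key economy is visible even in the toy: reference calls are spent only where the current ensemble disagrees, rather than on a dense pre-computed dataset.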
We introduce a coarse-grained deep neural network model (CG-DNN) for liquid water that utilizes 50 rotationally and translationally invariant coordinates and is trained exclusively against energies of ~30,000 bulk water configurations. Our CG-DNN potential accurately predicts both the energies and molecular forces of water, within 0.9 meV/molecule and 54 meV/angstrom of a reference (coarse-grained bond-order potential) model. The CG-DNN water model also provides good predictions of several structural, thermodynamic, and temperature-dependent properties of liquid water, with values close to those obtained from the reference model. More importantly, CG-DNN captures the well-known density anomaly of liquid water observed in experiments. Our work lays the groundwork for a scheme in which existing empirical water models can be utilized to develop a fully flexible neural network framework that can subsequently be trained against sparse data from high-fidelity albeit expensive beyond-DFT calculations.
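As a rough illustration of the idea, not the paper's CG-DNN, the sketch below shows why a network trained on energies alone can still yield consistent molecular forces: the forces come from automatic differentiation, F = -dE/dR. The "invariant descriptor" here is a placeholder built from pairwise distances, and all sizes are illustrative.

```python
import tensorflow as tf

N_INVARIANTS = 50  # invariant coordinates per configuration, as in the abstract

energy_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),
])

def descriptors(positions):
    """Placeholder invariant map; a real model uses symmetry-adapted functions."""
    # Pairwise distances are translation- and rotation-invariant.
    d = tf.norm(positions[:, None, :] - positions[None, :, :] + 1e-9, axis=-1)
    return tf.reshape(d, (1, -1))[:, :N_INVARIANTS]

positions = tf.Variable(tf.random.normal((8, 3)))   # 8 coarse-grained sites

with tf.GradientTape() as tape:
    e = tf.reduce_sum(energy_net(descriptors(positions)))
forces = -tape.gradient(e, positions)               # F = -dE/dR
print(float(e), forces.shape)                       # scalar energy, (8, 3) forces
```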
Module for ab initio structure evolution (MAISE) is an open-source package for materials modeling and prediction. The code's main feature is the automated generation of neural network (NN) interatomic potentials for use in global structure searches. The systematic construction of Behler-Parrinello-type NN models approximating ab initio energies and forces relies on two approaches introduced in our recent studies. An evolutionary sampling scheme for generating reference structures improves the NN's mapping of regions visited in unconstrained searches, while a stratified training approach enables the creation of standardized NN models for multiple elements. A more flexible NN architecture proposed here expands the applicability of the stratified scheme to an arbitrary number of elements. The full workflow of the NN development is managed with a customizable MAISE-NET wrapper written in Python. The global structure optimization capability in MAISE is based on an evolutionary algorithm applicable to nanoparticles, films, and bulk crystals. A multitribe extension of the algorithm allows for efficient simultaneous optimization of nanoparticles in a given size range. Implemented structure analysis functions include fingerprinting with radial distribution functions and finding space groups with the SPGLIB tool. This work overviews MAISE's available features, constructed models, and confirmed predictions.
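For readers unfamiliar with the inputs Behler-Parrinello-type models consume, here is a hedged sketch of a radial (G2) symmetry-function descriptor for one atom; the eta grid, shift, and cutoff radius are illustrative values, not MAISE defaults.

```python
import numpy as np

def cutoff(r, r_c):
    """Smooth cosine cutoff: 1 at r=0, 0 at r>=r_c, zero slope at r_c."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def g2_descriptor(positions, i, etas, r_s=0.0, r_c=6.0):
    """Radial Behler-Parrinello G2 values for atom i, one entry per eta."""
    r = np.linalg.norm(positions - positions[i], axis=1)
    r = np.delete(r, i)                       # exclude the atom itself
    return np.array([np.sum(np.exp(-eta * (r - r_s) ** 2) * cutoff(r, r_c))
                     for eta in etas])

positions = np.random.default_rng(1).uniform(0, 5, size=(10, 3))
print(g2_descriptor(positions, 0, etas=[0.05, 0.5, 4.0]))
```

Because each G2 value depends only on interatomic distances inside the cutoff, the resulting descriptor, and hence the NN energy built on it, is invariant to translations, rotations, and permutations of like atoms.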
Structural, electronic, vibrational, and dielectric properties of LaBGeO$_5$ with the stillwellite structure are determined from ab initio density functional theory. The theoretically relaxed structure is found to agree well with the existing experimental data, with a deviation of less than 0.2%. Both the density of states and the electronic band structure are calculated, showing five distinct groups of valence bands. Furthermore, the Born effective charges, the dielectric permittivity tensors, and the vibrational frequencies at the center of the Brillouin zone are all obtained. Compared to existing model calculations, the vibrational frequencies are found to be in much better agreement with the published experimental infrared and Raman data, with absolute and relative rms deviations of 6.04 cm$^{-1}$ and 1.81%, respectively. Consequently, numerical values for both the parallel and perpendicular components of the permittivity tensor are established as 3.55 and 3.71 (10.34 and 12.28), respectively, for the high-(low-)frequency limit.
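The quoted agreement figures are presumably the standard absolute and relative rms deviations over the $N$ frequencies compared against experiment:

```latex
% Assumed definitions behind the quoted 6.04 cm^{-1} and 1.81% figures.
\[
\Delta_{\mathrm{abs}} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}
    \left(\omega_k^{\mathrm{calc}} - \omega_k^{\mathrm{expt}}\right)^2},
\qquad
\Delta_{\mathrm{rel}} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}
    \left(\frac{\omega_k^{\mathrm{calc}} - \omega_k^{\mathrm{expt}}}
               {\omega_k^{\mathrm{expt}}}\right)^2}.
\]
```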
In this paper we propose a Bayesian method for estimating architectural parameters of neural networks, namely layer size and network depth. We do this by learning concrete distributions over these parameters. Our results show that regular networks with a learnt structure can generalise better on small datasets, while fully stochastic networks can be more robust to parameter initialisation. The proposed method relies on standard neural variational learning and, unlike randomised architecture search, does not require retraining of the model, thus keeping the computational overhead at a minimum.
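The toy sketch below shows the core mechanism the abstract alludes to, a Concrete (Gumbel-softmax) relaxation of a categorical choice, here over network depth, so that depth logits can be trained by ordinary gradient descent. It is not the authors' implementation; the temperature, widths, and candidate depths are illustrative.

```python
import tensorflow as tf

depth_logits = tf.Variable(tf.zeros(3))     # candidate depths: 1, 2, or 3 layers
layers = [tf.keras.layers.Dense(16, activation="tanh") for _ in range(3)]
head = tf.keras.layers.Dense(1)             # shared readout at every depth

def sample_concrete(logits, temperature=0.5):
    """Differentiable sample from a Concrete distribution over categories."""
    u = tf.random.uniform(tf.shape(logits))
    g = -tf.math.log(-tf.math.log(u + 1e-9) + 1e-9)   # Gumbel noise
    return tf.nn.softmax((logits + g) / temperature)

def forward(x):
    """Mix the readouts reached at each depth with the Concrete weights."""
    w = sample_concrete(depth_logits)
    h, outs = x, []
    for layer in layers:
        h = layer(h)
        outs.append(head(h))
    return tf.add_n([w[d] * outs[d] for d in range(3)])

print(forward(tf.random.normal((4, 8))).shape)   # (4, 1)
```

Training then updates `depth_logits` alongside the weights, and the learnt distribution concentrates on the depth that best fits the data, which is why no separate retraining stage is needed.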