
Neuroevolution machine learning potentials: Combining high accuracy and low cost in atomistic simulations and application to heat transport

Published by Zheyong Fan
Publication date: 2021
Research field: Physics
Paper language: English





We develop a neuroevolution-potential (NEP) framework for generating neural-network-based machine-learning potentials, trained using an evolutionary strategy, for performing large-scale molecular dynamics (MD) simulations. A descriptor of the atomic environment is constructed from Chebyshev and Legendre polynomials. The method is implemented for graphics processing units within the open-source GPUMD package, which can attain a computational speed of over $10^7$ atom steps per second on a single Nvidia Tesla V100. Furthermore, the per-atom heat current is available in NEP, which paves the way for efficient and accurate MD simulations of heat transport in materials with strong phonon anharmonicity or spatial disorder, systems that usually cannot be treated accurately either with traditional empirical potentials or with perturbative methods.
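The kind of polynomial radial descriptor the abstract describes can be illustrated with a minimal sketch: Chebyshev polynomials of scaled neighbor distances, damped by a smooth cutoff and summed over neighbors into a fixed-length, permutation-invariant vector. The cosine cutoff and plain neighbor sum below are illustrative assumptions, not the exact NEP definitions.

```python
import numpy as np

def chebyshev_radial_descriptor(distances, n_max=8, r_cut=5.0):
    """Radial descriptor components for one atom (hypothetical form):
    Chebyshev polynomials of the scaled pair distances, damped by a
    smooth cutoff and summed over neighbors."""
    d = np.asarray(distances, dtype=float)
    d = d[d < r_cut]                      # neighbors beyond r_cut do not contribute
    x = 2.0 * d / r_cut - 1.0             # map [0, r_cut] onto [-1, 1], the Chebyshev domain
    fc = 0.5 * (np.cos(np.pi * d / r_cut) + 1.0)   # smooth cutoff, zero at r_cut
    # T_n(x) via the recurrence T_0 = 1, T_1 = x, T_n = 2 x T_{n-1} - T_{n-2}.
    T = [np.ones_like(x), x]
    for n in range(2, n_max + 1):
        T.append(2.0 * x * T[-1] - T[-2])
    # Summing over neighbors yields a fixed-length, permutation-invariant vector.
    return np.array([np.sum(t * fc) for t in T])

desc = chebyshev_radial_descriptor([1.1, 2.3, 3.0, 6.0])
print(desc.shape)  # (9,) for n_max = 8
```

Because each component is a sum over neighbors, the descriptor has the same length regardless of coordination number, which is what makes it usable as a fixed-size neural-network input.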




Read also

97 - Yaolong Zhang, Ce Hu, 2020
Machine learning methods have become easy-to-use tools for constructing high-dimensional interatomic potentials with ab initio accuracy. Although machine-learned interatomic potentials are generally orders of magnitude faster than first-principles calculations, they remain much slower than classical force fields, at the price of using more complex structural descriptors. To bridge this efficiency gap, we propose an embedded atom neural network approach with simple piecewise switching function based descriptors, resulting in a favorable linear scaling with the number of neighbor atoms. Numerical examples validate that this piecewise machine learning model can be over an order of magnitude faster than various popular machine-learned potentials with comparable accuracy for both metallic and covalent materials, approaching the speed of the fastest embedded atom method (i.e. several $\mu$s/atom per CPU core). The extreme efficiency of this approach promises its potential in first-principles atomistic simulations of very large systems and/or over long timescales.
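The "piecewise switching function" descriptors mentioned above can be sketched as follows: a cheap smoothstep-style switching function whose neighbor sum costs one pass over the neighbor list, i.e. the linear scaling the abstract highlights. The cubic form below is an assumed stand-in, not the paper's actual function.

```python
import numpy as np

def switching(r, r_inner=3.0, r_cut=5.0):
    """Piecewise switching function (assumed cubic smoothstep form):
    1 below r_inner, smooth decay between r_inner and r_cut, 0 beyond."""
    t = np.clip((np.asarray(r, dtype=float) - r_inner) / (r_cut - r_inner), 0.0, 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)  # value and slope continuous at both ends

def embedded_density(distances):
    """Embedded-atom-style density for one atom: a single sum over
    neighbors, so the cost is linear in the number of neighbors."""
    return float(np.sum(switching(distances)))
```

The piecewise definition means atoms beyond `r_cut` contribute exactly zero, so neighbor lists can be truncated without any approximation error at the cutoff.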
Glass transition temperature ($T_{\text{g}}$) plays an important role in controlling the mechanical and thermal properties of a polymer. Polyimides are an important category of polymers with wide applications because of their superior heat resistance and mechanical strength. The capability of predicting $T_{\text{g}}$ for a polyimide a priori is therefore highly desirable in order to expedite the design and discovery of new polyimide polymers with targeted properties and applications. Here we explore three different approaches to either compute $T_{\text{g}}$ for a polyimide via all-atom molecular dynamics (MD) simulations or predict $T_{\text{g}}$ via a mathematical model generated by using machine-learning algorithms to analyze existing data collected from the literature. Our simulations reveal that $T_{\text{g}}$ can be determined from examining the diffusion coefficient of simple gas molecules in a polyimide as a function of temperature; the results are comparable to those derived from data on polymer density versus temperature and are in fact closer to the available experimental data. Furthermore, the predictive model of $T_{\text{g}}$ derived with machine-learning algorithms can be used to estimate $T_{\text{g}}$ successfully within an uncertainty of about 20 degrees, even for polyimides yet to be synthesized experimentally.
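One generic way to read a transition temperature off diffusivity-versus-temperature data, in the spirit of the approach above, is a two-segment linear fit whose best breakpoint marks the kink in $D(T)$. This is a hypothetical illustration of the idea, not the authors' exact protocol.

```python
import numpy as np

def estimate_tg(T, D):
    """Locate the kink in D(T) as the breakpoint minimizing the total
    residual of a two-segment linear fit (generic breakpoint analysis)."""
    T, D = np.asarray(T, float), np.asarray(D, float)
    best_err, best_tg = np.inf, None
    for k in range(2, len(T) - 2):            # keep at least 2 points per segment
        err = 0.0
        for Ts, Ds in ((T[:k], D[:k]), (T[k:], D[k:])):
            coef = np.polyfit(Ts, Ds, 1)      # straight-line fit on each segment
            err += float(np.sum((np.polyval(coef, Ts) - Ds) ** 2))
        if err < best_err:
            best_err, best_tg = err, T[k]
    return best_tg

# Synthetic diffusivity data with a slope change at 600 K.
T = np.arange(300.0, 901.0, 50.0)
D = np.where(T <= 600.0, 0.001 * (T - 300.0), 0.3 + 0.02 * (T - 600.0))
```

On data like this the estimator returns the temperature where the two fitted slopes meet, which is how a glass transition typically appears in transport-property-versus-temperature plots.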
Machine learning models, trained on data from ab initio quantum simulations, are yielding molecular dynamics potentials with unprecedented accuracy. One limiting factor is the quantity of available training data, which can be expensive to obtain. A quantum simulation often provides all atomic forces, in addition to the total energy of the system. These forces provide much more information than the energy alone. It may appear that training a model to this large quantity of force data would introduce significant computational costs. Actually, training to all available force data should only be a few times more expensive than training to energies alone. Here, we present a new algorithm for efficient force training, and benchmark its accuracy by training to forces from real-world datasets for organic chemistry and bulk aluminum.
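The information advantage of forces can be seen in a toy 1-D fit: each configuration supplies one energy but one force per coordinate, so a joint least-squares objective over both recovers the model parameter from far more equations. The spring model and unit force weight below are hypothetical choices for illustration, not the paper's algorithm.

```python
import numpy as np

# Hypothetical 1-D illustration: fit a spring constant k to energy AND force data.
rng = np.random.default_rng(0)
k_true = 2.5
x = rng.uniform(-1.0, 1.0, size=20)     # atomic displacements
E = 0.5 * k_true * x**2                 # one energy per configuration
F = -k_true * x                         # one force per coordinate

lam = 1.0                               # relative weight of the force terms
# Model: E = 0.5 k x^2 and F = -k x are both linear in k, so the combined
# least-squares problem  min_k  sum (0.5 k x^2 - E)^2 + lam^2 (-k x - F)^2
# has the closed-form solution k = (a . b) / (a . a).
a = np.concatenate([0.5 * x**2, lam * (-x)])
b = np.concatenate([E, lam * F])
k_fit = a @ b / (a @ a)
print(round(k_fit, 6))  # -> 2.5 on this noise-free data
```

With noisy data the force rows dominate the design matrix simply by count, which is the sense in which forces "provide much more information than the energy alone."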
Machine Learning (ML) is one of the most exciting and dynamic areas of modern research and application. The purpose of this review is to provide an introduction to the core concepts and tools of machine learning in a manner easily understood and intuitive to physicists. The review begins by covering fundamental concepts in ML and modern statistics such as the bias-variance tradeoff, overfitting, regularization, generalization, and gradient descent before moving on to more advanced topics in both supervised and unsupervised learning. Topics covered in the review include ensemble models, deep learning and neural networks, clustering and data visualization, energy-based models (including MaxEnt models and Restricted Boltzmann Machines), and variational methods. Throughout, we emphasize the many natural connections between ML and statistical physics. A notable aspect of the review is the use of Python Jupyter notebooks to introduce modern ML/statistical packages to readers using physics-inspired datasets (the Ising Model and Monte-Carlo simulations of supersymmetric decays of proton-proton collisions). We conclude with an extended outlook discussing possible uses of machine learning for furthering our understanding of the physical world as well as open problems in ML where physicists may be able to contribute. (Notebooks are available at https://physics.bu.edu/~pankajm/MLnotebooks.html )
Neuroevolution, a field that draws inspiration from the evolution of brains in nature, harnesses evolutionary algorithms to construct artificial neural networks. It bears a number of intriguing capabilities that are typically inaccessible to gradient-based approaches, including optimizing neural-network architectures, hyperparameters, and even learning the training rules. In this paper, we introduce a quantum neuroevolution algorithm that autonomously finds near-optimal quantum neural networks for different machine learning tasks. In particular, we establish a one-to-one mapping between quantum circuits and directed graphs, and reduce the problem of finding the appropriate gate sequences to a task of searching suitable paths in the corresponding graph as a Markovian process. We benchmark the effectiveness of the introduced algorithm through concrete examples including classifications of real-life images and symmetry-protected topological states. Our results showcase the vast potential of neuroevolution algorithms in quantum machine learning, which would boost the exploration towards quantum learning supremacy with noisy intermediate-scale quantum devices.
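The circuit-to-graph reduction described above can be sketched in miniature: treat gate choices as graph nodes, allowed successions as directed edges, and sample a gate sequence as a Markovian walk over the graph. The tiny gate set and adjacency below are invented for illustration only, not the paper's actual construction.

```python
import random

# Hypothetical gate graph: nodes are gate choices, directed edges say
# which gate may follow which; a path through the graph is a circuit layout.
gate_graph = {
    "H0":   ["CX01", "RZ1"],
    "RZ1":  ["CX01", "H0"],
    "CX01": ["H0", "RZ1", "MEASURE"],
}

def sample_circuit(start="H0", max_len=6, seed=1):
    """Markovian random walk over the gate graph until MEASURE or max_len
    gates: one candidate gate sequence for a neuroevolution population."""
    rng = random.Random(seed)
    path, node = [start], start
    while len(path) < max_len:
        node = rng.choice(gate_graph[node])   # next gate depends only on the current one
        path.append(node)
        if node == "MEASURE":                 # terminal node ends the circuit
            break
    return path
```

A population of such sampled paths could then be scored on a learning task and evolved, which is the generic neuroevolution loop the abstract adapts to quantum circuits.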