
Application of variational policy gradient to atomic-scale materials synthesis

Posted by Rama K Vasudevan
Publication date: 2020
Research field: Physics
Paper language: English





Atomic-scale materials synthesis via layer deposition techniques presents a unique opportunity to control material structure and yield systems that display functional properties that cannot be stabilized through traditional bulk synthetic routes. However, the deposition process itself presents a large, multidimensional parameter space that is traditionally optimized by intuition and trial and error, slowing progress. Here, we present an application of deep reinforcement learning to a simulated materials synthesis problem, utilizing the Stein variational policy gradient (SVPG) approach to train multiple agents that optimize a stochastic policy to yield desired functional properties. Our contributions are (1) a fully open-source simulation environment for layered materials synthesis problems, built on a kinetic Monte-Carlo engine and implemented in the OpenAI Gym framework, (2) an extension of the Stein variational policy gradient approach to handle both image and tabular input, and (3) a parallel (synchronous) implementation of SVPG using Horovod, distributing multiple agents across GPUs and individual simulation environments across CPUs. We demonstrate the utility of this approach in optimizing for a material surface characteristic, surface roughness, and explore the strategies used by the agents compared with a traditional actor-critic (A2C) baseline. Further, we find that SVPG stabilizes the training process relative to traditional A2C. Such trained agents can be useful for a variety of atomic-scale deposition techniques, including pulsed laser deposition and molecular beam epitaxy, provided the implementation challenges are addressed.
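
The environment in contribution (1) exposes the standard OpenAI Gym interface. Purely as an illustration of what such an interface looks like, here is a minimal sketch of a Gym-style deposition environment; the class name, the height-map-plus-tabular observation layout, the three-way rate action, the Poisson stand-in for the kinetic Monte-Carlo update, and the roughness-based reward are illustrative assumptions, not the authors' actual implementation.

import numpy as np
import gym
from gym import spaces

class ToyDepositionEnv(gym.Env):
    """Illustrative (hypothetical) Gym environment for layered deposition."""

    def __init__(self, size=32, max_steps=100):
        super().__init__()
        self.size, self.max_steps = size, max_steps
        # Observation: a surface height map ("image") plus tabular process state.
        self.observation_space = spaces.Dict({
            "image": spaces.Box(low=0.0, high=np.inf, shape=(size, size), dtype=np.float32),
            "tabular": spaces.Box(low=0.0, high=1.0, shape=(2,), dtype=np.float32),
        })
        # Action: decrease / hold / increase the deposition rate.
        self.action_space = spaces.Discrete(3)

    def reset(self):
        self.heights = np.zeros((self.size, self.size), dtype=np.float32)
        self.rate, self.steps = 0.5, 0
        return self._obs()

    def step(self, action):
        self.rate = float(np.clip(self.rate + 0.1 * (action - 1), 0.1, 1.0))
        # Stand-in for a kinetic Monte-Carlo update: random adatom arrivals.
        n_atoms = np.random.poisson(self.rate * self.size)
        xs = np.random.randint(0, self.size, n_atoms)
        ys = np.random.randint(0, self.size, n_atoms)
        np.add.at(self.heights, (xs, ys), 1.0)
        self.steps += 1
        roughness = float(self.heights.std())
        reward = -roughness                      # smoother surface -> higher reward
        done = self.steps >= self.max_steps
        return self._obs(), reward, done, {"roughness": roughness}

    def _obs(self):
        return {"image": self.heights.copy(),
                "tabular": np.array([self.rate, self.steps / self.max_steps], dtype=np.float32)}

An agent (SVPG or the A2C baseline) interacts with such a class only through reset() and step(), which is what allows many environment instances to run on CPUs while the policy networks train on GPUs.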



Read also

Electronic nearsightedness is one of the fundamental principles governing the behavior of condensed matter and supporting its description in terms of local entities such as chemical bonds. Locality also underlies the tremendous success of machine-learning schemes that predict quantum mechanical observables -- such as the cohesive energy, the electron density, or a variety of response properties -- as a sum of atom-centred contributions, based on a short-range representation of atomic environments. One of the main shortcomings of these approaches is their inability to capture physical effects, ranging from electrostatic interactions to quantum delocalization, which have a long-range nature. Here we show how to build a multi-scale scheme that combines local and non-local information within the same framework, overcoming such limitations. We show that the simplest version of such features can be put in formal correspondence with a multipole expansion of permanent electrostatics. The data-driven nature of the model construction, however, makes this simple form suitable also for tackling different types of delocalized and collective effects. We present several examples, ranging from molecular physics to surface science and biophysics, demonstrating the ability of this multi-scale approach to model interactions driven by electrostatics, polarization and dispersion, as well as the cooperative behavior of dielectric response functions.
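
As a toy illustration of the kind of long-range, multipole-like feature referred to above, the sketch below evaluates a Gaussian-smeared 1/r potential at a chosen center, loosely corresponding to the leading (monopole) term of a permanent-electrostatics expansion; the charges, smearing width and example geometry are arbitrary assumptions, and the actual multi-scale features are considerably richer.

import numpy as np
from scipy.special import erf

def long_range_feature(positions, charges, center, sigma=1.0):
    """Toy non-local descriptor: Gaussian-smeared Coulomb potential at `center`."""
    d = np.linalg.norm(positions - center, axis=1)
    d = np.where(d < 1e-8, np.inf, d)                # skip the central site itself
    smeared = erf(d / (np.sqrt(2.0) * sigma)) / d    # smeared 1/r kernel
    return float(np.sum(charges * smeared))

# Example: feature at the origin from three point charges.
pos = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [-3.0, 0.0, 0.0]])
q = np.array([1.0, -1.0, 0.5])
print(long_range_feature(pos, q, center=np.zeros(3)))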
As data science and machine learning methods are taking on an increasingly important role in the materials research community, there is a need for the development of machine learning software tools that are easy to use (even for nonexperts with no programming ability), provide flexible access to the most important algorithms, and codify best practices of machine learning model development and evaluation. Here, we introduce the Materials Simulation Toolkit for Machine Learning (MAST-ML), an open source Python-based software package designed to broaden and accelerate the use of machine learning in materials science research. MAST-ML provides predefined routines for many input setup, model fitting, and post-analysis tasks, as well as a simple structure for executing a multi-step machine learning model workflow. In this paper, we describe how MAST-ML is used to streamline and accelerate the execution of machine learning problems. We walk through how to acquire and run MAST-ML, demonstrate how to execute different components of a supervised machine learning workflow via a customized input file, and showcase a number of features and analyses conducted automatically during a MAST-ML run. Further, we demonstrate the utility of MAST-ML by showcasing examples of recent materials informatics studies which used MAST-ML to formulate and evaluate various machine learning models for an array of materials applications. Finally, we lay out a vision of how MAST-ML, together with complementary software packages and emerging cyberinfrastructure, can advance the rapidly growing field of materials informatics, with a focus on producing machine learning models easily, reproducibly, and in a manner that facilitates model evolution and improvement in the future.
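
For orientation, here is a generic scikit-learn sketch of the kind of fit-and-evaluate workflow that a toolkit like MAST-ML automates behind its input file; it deliberately does not use MAST-ML's actual input-file format or API, and the random features and model choice are placeholders.

# Generic supervised-learning workflow: feature setup, model fitting,
# cross-validated evaluation. Not MAST-ML's API.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # stand-in materials features
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)

model = make_pipeline(StandardScaler(), RandomForestRegressor(n_estimators=200, random_state=0))
scores = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
print("5-fold CV RMSE:", -scores.mean())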
Many machine learning strategies designed to automate mathematical tasks leverage neural networks to search large combinatorial spaces of mathematical symbols. In contrast to traditional evolutionary approaches, using a neural network at the core of the search allows learning higher-level symbolic patterns, providing an informed direction to guide the search. When no labeled data is available, such networks can still be trained using reinforcement learning. However, we demonstrate that this approach can suffer from an early commitment phenomenon and from initialization bias, both of which limit exploration. We present two exploration methods to tackle these issues, building upon ideas of entropy regularization and distribution initialization. We show that these techniques can improve the performance, increase sample efficiency, and lower the complexity of solutions for the task of symbolic regression.
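
As a concrete sketch of the entropy-regularization idea mentioned above, the snippet below adds a policy-entropy bonus to a plain REINFORCE loss over a sequence of symbolic tokens; the vocabulary size, the coefficient beta, and the absence of a baseline are illustrative simplifications, not the paper's actual training setup.

import torch
import torch.nn.functional as F

def pg_loss_with_entropy(logits, actions, returns, beta=0.01):
    """logits: (T, vocab) scores over symbolic tokens; actions: (T,) sampled ids;
    returns: (T,) rewards-to-go."""
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)   # per-step policy entropy
    # REINFORCE term plus an entropy bonus that discourages early commitment.
    return -(chosen * returns).mean() - beta * entropy.mean()

logits = torch.randn(10, 20, requires_grad=True)   # 10 decoding steps, 20 tokens
actions = torch.randint(0, 20, (10,))
returns = torch.randn(10)
pg_loss_with_entropy(logits, actions, returns).backward()

Raising beta keeps the token distribution closer to uniform early in training, which is one way to delay premature commitment to a particular expression skeleton.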
Akira Koyama, 2020
The Wiener-Khinchin theorem for the Fourier-Laplace transformation (WKT-FLT) provides a robust method for numerically calculating single-sided Fourier transforms of arbitrary autocorrelation functions from molecular simulations. However, the existing WKT-FLT equation produces two artifacts in the output frequency-domain relaxation function. Moreover, these artifacts are more apparent in the frequency-domain response function converted from the relaxation function. We find that the sources of these artifacts are associated with the discretization of the WKT-FLT equation. Taking these sources into account, we derive new discretized WKT-FLT equations for both the frequency-domain relaxation and response functions, with the artifacts removed. The use of the discretized WKT-FLT equations is illustrated by a flow chart of an on-the-fly algorithm. We also give application examples of the discretized WKT-FLT equations for computing the dynamic structure factor and the wave-vector-dependent dynamic susceptibility from molecular simulations.
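
For orientation, the snippet below implements the plain rectangular-rule discretization of the single-sided Fourier transform of an autocorrelation function, i.e. the quantity the WKT-FLT targets; this is the naive estimate, not the artifact-free discretization derived in the paper, and the exponential toy relaxation function is an assumption.

import numpy as np

dt = 0.01
t = np.arange(0.0, 50.0, dt)
C = np.exp(-t / 2.0)                          # toy relaxation function C(t), decay time 2

omega = np.linspace(0.0, 10.0, 500)
kernel = np.exp(-1j * np.outer(omega, t))     # e^{-i w t_k} for every (w, t_k) pair
C_w = kernel @ C * dt                         # ~ integral of C(t) exp(-i w t) over t >= 0

# For C(t) = exp(-t/tau) the exact real part is tau / (1 + (w*tau)**2).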
Many applications, especially in physics and other sciences, call for easily interpretable and robust machine learning techniques. We propose a fully gradient-based technique for training radial basis function networks, with an efficient and scalable open-source implementation. We derive novel closed-form optimization criteria for pruning the models for continuous as well as binary data, which arise in a challenging real-world material physics problem. The pruned models are optimized to provide compact and interpretable …
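
As a minimal sketch of the model class discussed above, the following trains a radial basis function network entirely by gradient descent in PyTorch, with centers, widths and output weights all updated by backpropagation; the layer sizes and toy data are assumptions, and the paper's closed-form pruning criteria are not reproduced here.

import torch

class RBFNet(torch.nn.Module):
    def __init__(self, in_dim, n_centers, out_dim=1):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(n_centers, in_dim))
        self.log_widths = torch.nn.Parameter(torch.zeros(n_centers))
        self.linear = torch.nn.Linear(n_centers, out_dim)

    def forward(self, x):
        d2 = torch.cdist(x, self.centers).pow(2)                   # squared distances to centers
        phi = torch.exp(-d2 / (2.0 * self.log_widths.exp().pow(2)))
        return self.linear(phi)

x = torch.randn(128, 3)
y = torch.sin(x.sum(dim=1, keepdim=True))
net = RBFNet(in_dim=3, n_centers=16)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()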