As data science and machine learning methods are taking on an increasingly important role in the materials research community, there is a need for machine learning software tools that are easy to use (even for non-experts with no programming ability), provide flexible access to the most important algorithms, and codify best practices of machine learning model development and evaluation. Here, we introduce the Materials Simulation Toolkit for Machine Learning (MAST-ML), an open-source Python-based software package designed to broaden and accelerate the use of machine learning in materials science research. MAST-ML provides predefined routines for many input setup, model fitting, and post-analysis tasks, as well as a simple structure for executing a multi-step machine learning model workflow. In this paper, we describe how MAST-ML is used to streamline and accelerate the execution of machine learning problems. We walk through how to acquire and run MAST-ML, demonstrate how to execute different components of a supervised machine learning workflow via a customized input file, and showcase a number of features and analyses conducted automatically during a MAST-ML run. Further, we demonstrate the utility of MAST-ML by highlighting recent materials informatics studies that used MAST-ML to formulate and evaluate various machine learning models for an array of materials applications. Finally, we lay out a vision of how MAST-ML, together with complementary software packages and emerging cyberinfrastructure, can advance the rapidly growing field of materials informatics, with a focus on producing machine learning models easily, reproducibly, and in a manner that facilitates future model evolution and improvement.
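As a point of reference, the following is a minimal sketch of the kind of supervised workflow that MAST-ML automates (feature normalization, model fitting, and cross-validated evaluation), written directly against scikit-learn rather than MAST-ML's own input-file interface; the random data, the choice of RandomForestRegressor, and all parameter values are illustrative assumptions, not the package's actual defaults.

```python
# Illustrative sketch (not the MAST-ML input-file syntax): a normalize -> fit ->
# cross-validate workflow of the type MAST-ML executes automatically.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, KFold

# Hypothetical feature matrix X (e.g., composition descriptors) and target y
# (e.g., formation energy per atom); replace with a real materials dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

pipeline = make_pipeline(StandardScaler(),
                         RandomForestRegressor(n_estimators=200, random_state=0))
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipeline, X, y, cv=cv,
                         scoring="neg_root_mean_squared_error")
print(f"5-fold CV RMSE: {-scores.mean():.3f} +/- {scores.std():.3f}")
```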
The MAterials Simulation Toolkit (MAST) is a workflow manager and post-processing tool for ab initio defect and diffusion workflows. MAST codifies research knowledge and best practices for such workflows, and allows for the generation and management of easily modified and reproducible workflows, where data is stored along with workflow information for data provenance tracking. MAST is open-source and available for download (see PDF for links).
In this work, we present a highly accurate spectral neighbor analysis potential (SNAP) model for molybdenum (Mo) developed through the rigorous application of machine learning techniques on large materials data sets. Despite Mo's importance as a structural metal, existing force fields for Mo based on the embedded atom and modified embedded atom methods still do not provide satisfactory accuracy on many properties. We show that by fitting to the energies, forces, and stress tensors of a large density functional theory (DFT)-computed dataset on a diverse set of Mo structures, a Mo SNAP model can be developed that achieves close to DFT accuracy in the prediction of a broad range of properties, including energies, forces, stresses, elastic constants, melting point, phonon spectra, surface energies, and grain boundary energies. We outline a systematic model development process, which includes a rigorous approach to structural selection based on principal component analysis, as well as a differential evolution algorithm for optimizing the hyperparameters in the model fitting so that both the model error and the property prediction error can be simultaneously lowered. We expect that this newly developed Mo SNAP model will find broad applications in large-scale, long-timescale simulations.
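To make the hyperparameter optimization step concrete, the sketch below uses SciPy's differential evolution routine to minimize a combined objective; the objective function, bounds, and weighting here are hypothetical placeholders standing in for the actual SNAP fitting error and property prediction error described in the work.

```python
# Minimal sketch of hyperparameter optimization by differential evolution.
# The objective is a placeholder: in the actual workflow it would combine the
# SNAP fitting error (energies/forces/stresses) with the error in predicted
# properties (e.g., elastic constants) from the fitted potential.
from scipy.optimize import differential_evolution

def combined_error(params):
    r_cut, w_energy = params
    fit_error = (r_cut - 4.6) ** 2            # stand-in for SNAP fitting error
    property_error = (w_energy - 100.0) ** 2 / 1e4  # stand-in for property error
    return fit_error + property_error          # weighted sum of both terms

bounds = [(3.0, 6.0),      # hypothetical cutoff radius range (Angstrom)
          (1.0, 1000.0)]   # hypothetical relative weight on energies vs. forces
result = differential_evolution(combined_error, bounds, seed=0, tol=1e-6)
print("optimal hyperparameters:", result.x, "objective:", result.fun)
```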
We describe an open-source toolkit for neural machine translation (NMT). The toolkit prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, while maintaining competitive performance and reasonable training requirements. The toolkit consists of modeling and translation support, as well as detailed pedagogical documentation about the underlying techniques.
Existing works, including ELMo and BERT, have revealed the importance of pre-training for NLP tasks. Since no single pre-training model works best in all cases, it is necessary to develop a framework that can deploy various pre-training models efficiently. For this purpose, we propose an assemble-on-demand pre-training toolkit named Universal Encoder Representations (UER). UER is loosely coupled and encapsulates a rich set of modules. By assembling modules on demand, users can either reproduce a state-of-the-art pre-training model or develop a pre-training model that has not yet been explored. With UER, we have built a model zoo containing pre-trained models based on different corpora, encoders, and targets (objectives). With the proper pre-trained models, we achieve new state-of-the-art results on a range of downstream datasets.
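The assemble-on-demand idea can be illustrated with a generic sketch in plain PyTorch (this is not the UER API): an embedding, an encoder, and a target/objective head are chosen independently and composed into a single pre-training model; the module choices and dimensions below are assumptions for illustration only.

```python
# Generic sketch of assembling an embedding + encoder + target on demand
# (plain PyTorch, not the UER toolkit's own interface).
import torch
import torch.nn as nn

class AssembledModel(nn.Module):
    def __init__(self, vocab_size, hidden, encoder, target):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden)
        self.encoder = encoder    # e.g., a Transformer or LSTM encoder
        self.target = target      # e.g., a masked-LM prediction head

    def forward(self, token_ids):
        hidden_states = self.encoder(self.embedding(token_ids))
        return self.target(hidden_states)

# Assemble a Transformer encoder with a masked-LM style head.
vocab_size, hidden = 30000, 256
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
    num_layers=2)
mlm_head = nn.Linear(hidden, vocab_size)
model = AssembledModel(vocab_size, hidden, encoder, mlm_head)
logits = model(torch.randint(0, vocab_size, (8, 32)))  # (batch, seq_len, vocab)
print(logits.shape)
```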
Familia is an open-source toolkit for pragmatic topic modeling in industry. Familia abstracts the utilities of topic modeling in industry into two paradigms: semantic representation and semantic matching. Efficient implementations of the two paradigms are made publicly available for the first time. Furthermore, we provide off-the-shelf topic models trained on large-scale industrial corpora, including Latent Dirichlet Allocation (LDA), SentenceLDA, and Topical Word Embedding (TWE). We also describe typical applications that are successfully powered by topic modeling, in order to ease the confusion and difficulty software engineers face when selecting and applying topic models.
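The two paradigms can be illustrated with a small sketch using gensim as a stand-in (this is not Familia's own API): semantic representation maps a document to a topic distribution, and semantic matching scores the similarity between two such distributions; the toy corpus and model settings are assumptions for illustration.

```python
# Illustrative sketch of the two paradigms using gensim (not the Familia API):
# semantic representation (document -> topic distribution) and semantic
# matching (similarity between two documents' topic distributions).
from gensim import corpora
from gensim.models import LdaModel
from gensim.matutils import cossim

docs = [["machine", "learning", "model", "training"],
        ["topic", "model", "inference", "corpus"],
        ["neural", "network", "training", "gpu"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)

# Semantic representation: a sparse topic distribution for each document.
rep_a = lda[dictionary.doc2bow(["model", "training", "gpu"])]
rep_b = lda[dictionary.doc2bow(["topic", "corpus", "inference"])]

# Semantic matching: cosine similarity between the two topic distributions.
print("similarity:", cossim(rep_a, rep_b))
```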