Machine learning based interatomic potential energy surface (PES) models are revolutionizing the field of molecular modeling. However, although much faster than electronic structure schemes, these models are still less efficient than typical empirical force fields because of the more sophisticated computations involved. Herein, we report a model compression scheme for boosting the performance of the Deep Potential (DP) model, a deep learning based PES model. This scheme, which we call DP Compress, is an efficient post-processing step applied after the training of DP models (DP Train). DP Compress combines several DP-specific compression techniques, which typically speed up DP-based molecular dynamics simulations by an order of magnitude and reduce memory consumption by an order of magnitude. We demonstrate that DP Compress is sufficiently accurate by testing a variety of physical properties of Cu, H2O, and Al-Cu-Mg systems. DP Compress applies to both CPU and GPU machines and is publicly available at https://github.com/deepmodeling/deepmd-kit.
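A common way to realize this kind of compression is to replace the most expensive repeated network evaluations with table lookups plus interpolation. The following sketch (plain NumPy; the "network" is a hypothetical analytic surrogate, not the DeePMD-kit implementation) illustrates the general tabulate-and-interpolate idea:

```python
import numpy as np

# Hypothetical stand-in for an embedding-style network: a smooth scalar map
# applied to every pairwise descriptor value (an analytic surrogate, not a
# trained model).
def embedding_net(s):
    return np.tanh(2.0 * s) * np.exp(-0.5 * s)

# Build a lookup table once, after "training".
grid = np.linspace(0.0, 8.0, 2048)   # descriptor range covered by the table
table = embedding_net(grid)          # precomputed network outputs

def embedding_compressed(s):
    """Piecewise-linear interpolation into the precomputed table.
    Higher-order interpolation can reduce the error further at the same
    table size."""
    return np.interp(s, grid, table)

# Accuracy check on random descriptor values.
s = np.random.uniform(0.0, 8.0, size=100000)
err = np.max(np.abs(embedding_net(s) - embedding_compressed(s)))
print(f"max tabulation error: {err:.2e}")
```

The trade-off in such a scheme is between table resolution (memory) and interpolation error, which is why accuracy has to be re-verified on physical properties after compression.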
In this work, we propose an effective scheme (called DP-Net) for compressing deep neural networks (DNNs). It includes a novel dynamic programming (DP) based algorithm that obtains the optimal solution of weight quantization, together with an optimization process for training a clustering-friendly DNN. Experiments showed that DP-Net allows larger compression than state-of-the-art counterparts while preserving accuracy. A compression ratio of up to 77X on Wide ResNet is achieved by combining DP-Net with other compression techniques. Furthermore, DP-Net is extended to compress a robust DNN model with negligible accuracy loss. Finally, a custom accelerator is designed on FPGA to speed up inference computation with DP-Net.
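For the one-dimensional case, squared-error-optimal weight quantization can indeed be solved exactly by dynamic programming over the sorted weights. The sketch below shows that classic formulation for illustration; it is a generic O(k n^2) DP over prefixes, not the DP-Net algorithm itself:

```python
import numpy as np

def quantize_weights_dp(weights, k):
    """Exact squared-error-optimal quantization of a 1-D weight vector into k
    levels, via dynamic programming over the sorted weights (O(k * n^2))."""
    w = np.sort(np.asarray(weights, dtype=float))
    n = len(w)
    # Prefix sums give O(1) cost of assigning any sorted segment to its mean.
    ps = np.concatenate(([0.0], np.cumsum(w)))
    ps2 = np.concatenate(([0.0], np.cumsum(w * w)))

    def seg_cost(i, j):  # squared error of segment w[i..j], inclusive
        s, s2, m = ps[j + 1] - ps[i], ps2[j + 1] - ps2[i], j - i + 1
        return s2 - s * s / m

    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)   # dp[c, j]: best cost of first j weights in c clusters
    cut = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for c in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(c - 1, j):   # last cluster covers w[i..j-1]
                cost = dp[c - 1, i] + seg_cost(i, j - 1)
                if cost < dp[c, j]:
                    dp[c, j], cut[c, j] = cost, i
    # Recover centroids by walking back through the optimal cut points.
    centroids, j = [], n
    for c in range(k, 0, -1):
        i = cut[c, j]
        centroids.append(w[i:j].mean())
        j = i
    return sorted(centroids), dp[k, n]

centroids, err = quantize_weights_dp(np.random.randn(200), k=4)
print(centroids, err)
```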
In recent years, promising deep learning based interatomic potential energy surface (PES) models have been proposed that could potentially allow us to perform molecular dynamics simulations for large-scale systems with quantum accuracy. However, making these models truly reliable and practically useful is still a highly non-trivial task. A key component of this task is the generation of the datasets used in model training. In this paper, we introduce the Deep Potential GENerator (DP-GEN), an open-source software platform that implements the recently proposed on-the-fly learning procedure [Phys. Rev. Materials 3, 023804] and is capable of generating uniformly accurate deep learning based PES models while minimizing human intervention and the computational cost of data generation and model training. DP-GEN automatically and iteratively performs three steps: exploration, labeling, and training. It supports various popular packages for these steps: LAMMPS for exploration; Quantum Espresso, VASP, CP2K, etc. for labeling; and DeePMD-kit for training. It also handles automatic job submission and result collection on different types of machines, such as high-performance clusters and cloud machines, and adapts to different job management tools, including Slurm, PBS, and LSF. As a concrete example, we illustrate the details of the process of generating a general-purpose PES model for Cu using DP-GEN.
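A minimal, self-contained toy version of such an exploration-labeling-training loop is sketched below; the 1-D analytic "PES", polynomial "models", and ensemble-deviation selection criterion are illustrative stand-ins for LAMMPS, the DFT codes, and DeePMD-kit, not the DP-GEN implementation:

```python
import numpy as np

# Toy illustration of a concurrent-learning loop: explore, select frames where
# the model ensemble disagrees, label them, retrain.  Every piece here is a
# stand-in, not the actual DP-GEN machinery.
rng = np.random.default_rng(0)

def dft_label(x):                       # stand-in for the DFT labeling step
    return np.sin(x) + 0.1 * x**2

def train_model(xs, ys, seed):          # stand-in for PES model training
    rs = np.random.default_rng(seed)
    idx = rs.choice(len(xs), size=len(xs), replace=True)  # bootstrap for ensemble diversity
    return np.polyfit(xs[idx], ys[idx], deg=min(6, len(xs) - 1))

def explore(xs, n_frames=200):          # stand-in for MD exploration
    lo, hi = xs.min() - 1.0, xs.max() + 1.0
    return rng.uniform(lo, hi, size=n_frames)

def model_deviation(models, x):         # ensemble disagreement per frame
    preds = np.array([np.polyval(m, x) for m in models])
    return preds.std(axis=0)

# Initial data and model ensemble.
xs = rng.uniform(-1.0, 1.0, size=8)
ys = dft_label(xs)
models = [train_model(xs, ys, seed=i) for i in range(4)]

trust_lo, trust_hi = 0.02, 1.0
for it in range(10):
    frames = explore(xs)                                          # 1. exploration
    dev = model_deviation(models, frames)
    candidates = frames[(dev >= trust_lo) & (dev <= trust_hi)]    # 2. selection
    if candidates.size == 0:
        print(f"iteration {it}: converged")
        break
    picked = candidates[:10]                                      # label a small batch
    xs = np.concatenate([xs, picked])                             # 3. labeling
    ys = np.concatenate([ys, dft_label(picked)])
    models = [train_model(xs, ys, seed=i) for i in range(4)]      # 4. training
    print(f"iteration {it}: labeled {picked.size} frames, max deviation {dev.max():.3f}")
```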
A comprehensive microscopic understanding of ambient liquid water is a major challenge for $ab$ $initio$ simulations as it simultaneously requires an accurate quantum mechanical description of the underlying potential energy surface (PES) as well as extensive sampling of configuration space. Due to the presence of light atoms (e.g., H or D), nuclear quantum fluctuations lead to observable changes in the structural properties of liquid water (e.g., isotope effects), and therefore provide yet another challenge for $ab$ $initio$ approaches. In this work, we demonstrate that the combination of dispersion-inclusive hybrid density functional theory (DFT), the Feynman discretized path-integral (PI) approach, and machine learning (ML) constitutes a versatile $ab$ $initio$ based framework that enables extensive sampling of both thermal and nuclear quantum fluctuations on a quite accurate underlying PES. In particular, we employ the recently developed deep potential molecular dynamics (DPMD) model---a neural-network representation of the $ab$ $initio$ PES---in conjunction with a PI approach based on the generalized Langevin equation (PIGLET) to investigate how isotope effects influence the structural properties of ambient liquid H$_2$O and D$_2$O. Through a detailed analysis of the interference differential cross sections as well as several radial and angular distribution functions, we demonstrate that this approach can furnish a semi-quantitative prediction of these subtle isotope effects.
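For background, the Feynman discretized path-integral approach maps each quantum nucleus onto a ring polymer of P classical beads; the textbook form of this isomorphism, written here for a single particle of mass m at inverse temperature beta and not quoted from the paper itself, is:

```latex
% Discretized (Trotter) path-integral isomorphism for one particle, with P
% beads and the cyclic condition x_{P+1} = x_1.
Z \;\approx\; \left(\frac{mP}{2\pi\beta\hbar^{2}}\right)^{P/2}
\int \mathrm{d}x_{1}\cdots\mathrm{d}x_{P}\,
\exp\!\left\{-\beta \sum_{j=1}^{P}
  \left[\frac{mP}{2\beta^{2}\hbar^{2}}\,\bigl(x_{j}-x_{j+1}\bigr)^{2}
        + \frac{V(x_{j})}{P}\right]\right\}
```

Sampling this isomorphic classical system (here with PIGLET thermostatting and the machine-learned potential supplying V) is what makes the nuclear quantum fluctuations accessible at an affordable cost.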
The main result of this article is sub-additivity of the dp-rank. We also show that the study of theories of finite dp-rank cannot be reduced to the study of their dp-minimal types, and discuss the possible relations between dp-rank and VC-density.
We propose a fast method for the calculation of short-range interactions in molecular dynamics simulations. The so-called random-batch list method is a stochastic version of the classical neighbor-list method that avoids the construction of a full Verlet list: it introduces a two-level neighbor list for each particle, in which neighboring particles are located in a core region and a shell region, respectively. Interactions in the core region are computed directly. For the shell zone, we employ a random batch of interacting particles to reduce the number of interaction pairs. An error estimate of the algorithm is provided. We investigate the Lennard-Jones fluid by molecular dynamics simulations and show that this method can accelerate the simulations by severalfold without loss of accuracy. The method is simple to implement, can be combined with any linked-cell method to further speed up and scale up simulations, and can be straightforwardly extended to other interactions, such as the short-range part of the Ewald summation; it is therefore promising for large-scale molecular dynamics simulations.
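A schematic sketch of the core/shell splitting with a random-batch estimate of the shell contribution is given below; it is a toy illustration of the general idea (exact summation in the core, an unbiased reweighted random subset in the shell), not the paper's random-batch list implementation:

```python
import numpy as np

# Toy core/shell force evaluation: pairs inside r_core are summed exactly,
# while for each particle only a small random batch of shell neighbors is
# used, reweighted by n_shell / batch so the force estimate stays unbiased.
rng = np.random.default_rng(1)

def lj_force(rij):
    """Lennard-Jones force on particle i from particle j (epsilon = sigma = 1)."""
    r2 = np.dot(rij, rij)
    inv6 = 1.0 / r2**3
    return 24.0 * (2.0 * inv6**2 - inv6) / r2 * rij

def forces_random_batch(pos, r_core=1.5, r_cut=3.0, batch=4):
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        rij = pos[i] - pos                       # vectors from all j to i
        d = np.linalg.norm(rij, axis=1)
        d[i] = np.inf                            # exclude self-interaction
        core = np.where(d < r_core)[0]           # exact summation in the core
        shell = np.where((d >= r_core) & (d < r_cut))[0]
        for j in core:
            f[i] += lj_force(rij[j])
        if len(shell) > 0:                       # random batch in the shell
            p = min(batch, len(shell))
            picked = rng.choice(shell, size=p, replace=False)
            weight = len(shell) / p              # keeps the estimator unbiased
            for j in picked:
                f[i] += weight * lj_force(rij[j])
    return f

pos = rng.uniform(0.0, 6.0, size=(50, 3))
print(forces_random_batch(pos)[:3])
```

In practice the two-level neighbor lists are built once per list update (e.g. together with a linked-cell construction), so the per-step cost is dominated by the reduced number of shell pairs.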