Improving Molecular Force Fields Across Configurational Space by Combining Supervised and Unsupervised Machine Learning


Abstract

The training set of atomic configurations is key to the performance of any Machine Learning Force Field (MLFF); as such, training set selection determines the applicability of the MLFF model for predictive molecular simulations. However, most atomistic reference datasets are inhomogeneously distributed across configurational space (CS), so choosing the training set randomly or according to the probability distribution of the data yields models whose accuracy is dominated by the most common close-to-equilibrium configurations in the reference data. In this work, we combine unsupervised and supervised ML methods to bypass the data's inherent bias toward common configurations, effectively widening the applicability range of the MLFF to the fullest capabilities of the dataset. To achieve this goal, we first cluster the CS into subregions that are similar in terms of geometry and energetics. We then iteratively test the performance of a given MLFF on each subregion and fill the model's training set with representatives of the most inaccurately described parts of the CS. The proposed approach has been applied to a set of small organic molecules and alanine tetrapeptide, demonstrating up to a two-fold decrease in the root-mean-squared errors of force predictions for these molecules. This result holds for both kernel-based methods (the sGDML and GAP/SOAP models) and deep neural networks (the SchNet model). For the latter, the developed approach simultaneously improves both energies and forces, bypassing the compromise required when employing mixed energy/force loss functions.
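The cluster-then-select loop described above can be illustrated compactly. Below is a minimal sketch in Python, assuming scikit-learn's KMeans for the clustering step and KernelRidge as a stand-in for the actual MLFF (sGDML, GAP/SOAP, or SchNet); the descriptors, targets, cluster count, and batch sizes are illustrative placeholders, not values from the paper.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 8))   # hypothetical per-configuration descriptors
    y = np.sin(X).sum(axis=1)        # hypothetical stand-in target (e.g. a force component)

    # Step 1: cluster configurational space into subregions; here only geometry
    # descriptors are used, but an energy column could be appended as a feature.
    n_clusters = 20
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

    # Step 2: start from a small random training set and iteratively grow it
    # with representatives of the worst-described subregion.
    train = list(rng.choice(len(X), size=50, replace=False))
    for it in range(10):
        model = KernelRidge(kernel="rbf", alpha=1e-3).fit(X[train], y[train])

        # Per-cluster RMSE identifies the least accurately described part of CS.
        sq_err = (model.predict(X) - y) ** 2
        rmse = np.array([np.sqrt(sq_err[labels == c].mean())
                         for c in range(n_clusters)])
        worst = int(rmse.argmax())

        # Move a few representatives of the worst cluster into the training set.
        pool = [i for i in np.flatnonzero(labels == worst) if i not in train]
        train += list(rng.choice(pool, size=min(10, len(pool)), replace=False))
        print(f"iter {it}: worst cluster {worst}, "
              f"RMSE {rmse[worst]:.3f}, train size {len(train)}")

In the actual method the per-subregion errors would be computed against reference forces from ab initio calculations, and the loop would terminate on a target accuracy or a training-set budget; the sketch only mirrors the overall structure of the selection procedure.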
