One hidden yet important issue for developing neural network potentials (NNPs) is the choice of training algorithm. Here we compare the performance of two popular training algorithms, the adaptive moment estimation algorithm (Adam) and the extended Kalman filter algorithm (EKF), using the Behler-Parrinello neural network (BPNN) and two publicly accessible datasets of liquid water. It is found that NNPs trained with EKF are more transferable and less sensitive to the value of the learning rate, as compared to Adam. In both cases, error metrics of the test set do not always serve as a good indicator for the actual performance of NNPs. Instead, we show that their performance correlates well with a Fisher information based similarity measure.
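The Adam-versus-EKF comparison above can be illustrated on a toy problem. The sketch below is not the BPNN setup from the abstract; it treats the weights of a simple linear model as the Kalman state (so the "extended" filter reduces to the exact linear one, and the Jacobian H is just the input x) and runs a hand-rolled Adam loop on the same data stream. All names and hyperparameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression: y = w_true . x + noise; the weights are the EKF "state".
n = 3
w_true = np.array([1.5, -2.0, 0.5])
X = rng.normal(size=(200, n))
y = X @ w_true + 0.01 * rng.normal(size=200)

# --- EKF-style sequential update (linear model, so the Jacobian H = x) ---
w_ekf = np.zeros(n)
P = np.eye(n)          # weight covariance estimate
R = 0.01               # assumed observation-noise variance
for x_t, y_t in zip(X, y):
    H = x_t                      # Jacobian of f(w) = w . x w.r.t. w
    S = H @ P @ H + R            # innovation variance (scalar here)
    K = P @ H / S                # Kalman gain
    w_ekf = w_ekf + K * (y_t - w_ekf @ x_t)
    P = P - np.outer(K, H @ P)

# --- Adam on the same data stream, for comparison ---
w_adam = np.zeros(n)
m = np.zeros(n)
v = np.zeros(n)
lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8   # illustrative hyperparameters
for t, (x_t, y_t) in enumerate(zip(X, y), start=1):
    g = -2.0 * (y_t - w_adam @ x_t) * x_t  # gradient of the squared error
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    w_adam -= lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)

print("EKF: ", np.round(w_ekf, 3))
print("Adam:", np.round(w_adam, 3))
```

Even on this toy stream, the EKF update needs no learning rate: its effective step size comes from the covariance P, which hints at the learning-rate insensitivity the abstract reports.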
Artificial neural network modeling requires no explicit mechanistic model: it can map the implicit relationship between inputs and outputs and predict system performance well. At the same time, it has the advantages of self-learning ability…
Distillation is a complex process coupling mass transfer and heat transfer, mainly manifested as follows: the mechanism is complex, changeable, and uncertain; the process is multivariate and strongly coupled; the system is…
Deep learning based methods have been widely applied to predict various kinds of molecular properties in the pharmaceutical industry, with increasing success. Solvation free energy is an important index in the fields of organic synthesis and medici…
The recently published DeePMD model (https://github.com/deepmodeling/deepmd-kit), based on a deep neural network architecture, brings the hope of solving the time-scale issue that often prevents the application of first-principles molecular dynamics…
Deep learning techniques have opened a new avenue for electronic structure theory in recent years. In contrast to traditional methods, deep neural networks provide a much more expressive and flexible wave-function ansatz, resulting in better accuracy an…