
Understanding Machine-learned Density Functionals

Posted by Li Li
Publication date: 2014
Paper language: English





Kernel ridge regression is used to approximate the kinetic energy of non-interacting fermions in a one-dimensional box as a functional of their density. The properties of different kernels and methods of cross-validation are explored, and highly accurate energies are achieved. Accurate constrained optimal densities are found via a modified Euler-Lagrange constrained minimization of the total energy. A projected gradient descent algorithm is derived using local principal component analysis. Additionally, a sparse grid representation of the density can be used without degrading the performance of the methods. The implications for machine-learned density functional approximations are discussed.
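For readers who want to reproduce the flavor of this setup, here is a minimal, self-contained sketch: exact kinetic energies and densities of non-interacting fermions in a 1-D box are generated by diagonalizing a finite-difference Hamiltonian, and an RBF kernel ridge regression with cross-validated hyperparameters maps density to kinetic energy. The Gaussian-well potential family, grid size, training-set size, and hyperparameter grids are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
G, N, L = 100, 2, 1.0                       # grid points, fermions, box length
x = np.linspace(0, L, G)
dx = x[1] - x[0]

def solve(v):
    """Ground-state density and kinetic energy of N fermions in potential v."""
    diag = 1.0 / dx**2 + v                  # finite-difference -(1/2) d^2/dx^2 + v
    off = -0.5 / dx**2 * np.ones(G - 1)
    eps, phi = np.linalg.eigh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))
    phi = phi[:, :N] / np.sqrt(dx)          # grid-normalized occupied orbitals
    n = (phi**2).sum(axis=1)                # density on the grid
    T = eps[:N].sum() - np.sum(n * v) * dx  # kinetic energy = sum(eps) - <v>
    return n, T

def sample():
    """Random sum of three Gaussian wells (an illustrative potential family)."""
    v = np.zeros(G)
    for _ in range(3):
        a, b, c = rng.uniform(1, 10), rng.uniform(0.2, 0.8), rng.uniform(0.03, 0.1)
        v -= a * np.exp(-(x - b)**2 / (2 * c**2))
    return solve(v)

data = [sample() for _ in range(150)]
X = np.array([n for n, _ in data])          # densities
y = np.array([T for _, T in data])          # exact kinetic energies

# RBF kernel ridge regression with cross-validated regularization and length
# scale, mirroring the abstract's kernel and cross-validation study.
krr = GridSearchCV(KernelRidge(kernel="rbf"),
                   {"alpha": np.logspace(-12, -3, 10),
                    "gamma": np.logspace(-2, 2, 9)}, cv=5)
krr.fit(X[:100], y[:100])
print("test MAE:", np.mean(np.abs(krr.predict(X[100:]) - y[100:])))
```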




Read also

Machine learning is used to approximate density functionals. For the model problem of the kinetic energy of non-interacting fermions in 1d, mean absolute errors below 1 kcal/mol on test densities similar to the training set are reached with fewer than 100 training densities. A predictor identifies if a test density is within the interpolation region. Via principal component analysis, a projected functional derivative finds highly accurate self-consistent densities. Challenges for application of our method to real electronic structure problems are discussed.
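The projected functional derivative mentioned above can be sketched compactly. In the sketch below, each descent step on the density is restricted to the span of the leading principal components of the nearest training densities, which keeps the search on the manifold where the ML model is trustworthy. The callable `grad` (the ML model's functional derivative on the grid) and the values of `m`, `k`, and `eta` are hypothetical placeholders.

```python
import numpy as np

def projected_step(n, X_train, grad, m=20, k=5, eta=1e-3):
    """One PCA-projected descent step on a grid-sampled density n."""
    # local PCA: principal directions of the m training densities nearest to n
    d = np.linalg.norm(X_train - n, axis=1)
    nbrs = X_train[np.argsort(d)[:m]]
    V = np.linalg.svd(nbrs - nbrs.mean(axis=0), full_matrices=False)[2][:k]
    g = grad(n)                    # raw ML functional derivative (hypothetical)
    g_proj = V.T @ (V @ g)         # discard components leaving the data manifold
    n_new = n - eta * g_proj
    return n_new * (n.sum() / n_new.sum())  # re-impose particle-number normalization
```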
The homogeneous electron gas (HEG) is a key ingredient in the construction of most exchange-correlation functionals of density-functional theory. Often, the energy of the HEG is parameterized as a function of its spin density $n$, leading to the local density approximation (LDA) for inhomogeneous systems. However, the connection between the electron density and kinetic energy density of the HEG can be used to generalize the LDA by evaluating it on a weighted geometric average of the local spin density and the spin density of a HEG that has the local kinetic energy density of the inhomogeneous system, with a mixing ratio $x$. This leads to a new family of functionals that we term meta-local density approximations (meta-LDAs), which are still exact for the HEG, which are derived only from properties of the HEG, and which form a new rung of Jacob's ladder of density functionals. The first functional of this ladder, the local $\tau$ approximation (LTA) of Ernzerhof and Scuseria that corresponds to $x=1$, is unfortunately not stable enough to be used in self-consistent field calculations, because it leads to divergent potentials as we show in this work. However, a geometric averaging of the LDA and LTA densities with smaller values of $x$ not only leads to numerical stability of the resulting functional, but also yields more accurate exchange energies in atomic calculations than the LDA, the LTA, or the $\tau$LDA functional ($x=1/4$) of Eich and Hellgren. We choose $x=0.50$ as it gives the best total energy in self-consistent exchange-only calculations for the argon atom. Atomization energy benchmarks confirm that the choice $x=0.50$ also yields improved energetics in combination with correlation functionals in molecules, almost eliminating the well-known overbinding of the LDA and reducing its error by two thirds.
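In code, the meta-LDA construction above reduces to a one-line density substitution before the LDA formula is applied. The sketch assumes a spin-unpolarized system, the Thomas-Fermi relation $\tau_{HEG} = C_F n^{5/3}$ for the HEG kinetic-energy density, and exchange only; spin scaling and correlation are omitted.

```python
import numpy as np

C_F = 0.3 * (3 * np.pi**2)**(2/3)   # HEG kinetic-energy density: tau = C_F * n**(5/3)
C_X = -0.75 * (3 / np.pi)**(1/3)    # LDA exchange energy density: e_x = C_X * n**(4/3)

def meta_lda_exchange(n, tau, x=0.5):
    """Exchange energy per volume on a grid of (n, tau) values."""
    n_tau = (tau / C_F)**0.6        # density of the HEG with this kinetic-energy density
    n_eff = n**(1 - x) * n_tau**x   # weighted geometric average with mixing ratio x
    return C_X * n_eff**(4/3)
```

Setting x=0 recovers the LDA and x=1 the LTA; the abstract's preferred x=0.50 sits halfway between.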
Machine learning is a powerful tool to design accurate, highly non-local, exchange-correlation functionals for density functional theory. So far, most of those machine learned functionals are trained for systems with an integer number of particles. As such, they are unable to reproduce some crucial and fundamental aspects, such as the explicit dependency of the functionals on the particle number or the infamous derivative discontinuity at integer particle numbers. Here we propose a solution to these problems by training a neural network as the universal functional of density-functional theory that (i) depends explicitly on the number of particles with a piece-wise linearity between the integer numbers and (ii) reproduces the derivative discontinuity of the exchange-correlation energy. This is achieved by using an ensemble formalism, a training set containing fractional densities, and an explicitly discontinuous formulation.
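The two constraints the network is trained to satisfy can be made concrete with a toy ensemble (an illustration of the exact conditions, not the paper's network): the energy at fractional particle number $N_0 + \omega$ is the linear interpolation $(1-\omega)E(N_0) + \omega E(N_0+1)$, so the slope $dE/dN$, and with it the potential, jumps at integer $N$.

```python
import numpy as np

def ensemble_energy(N, E_int):
    """Piecewise-linear E(N) interpolating integer-particle-number energies."""
    N0 = int(np.floor(N))
    w = N - N0
    return (1 - w) * E_int[N0] + w * E_int[N0 + 1]

E_int = [0.0, -0.5, -0.9, -1.1]   # illustrative energies at N = 0, 1, 2, 3
h = 1e-3
left  = (ensemble_energy(2.0, E_int) - ensemble_energy(2.0 - h, E_int)) / h
right = (ensemble_energy(2.0 + h, E_int) - ensemble_energy(2.0, E_int)) / h
print(left, right)  # -0.4 vs -0.2: the derivative discontinuity at N = 2
```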
Kevin Vu, John Snyder, Li Li (2015)
Accurate approximations to density functionals have recently been obtained via machine learning (ML). By applying ML to a simple function of one variable without any random sampling, we extract the qualitative dependence of errors on hyperparameters. We find universal features of the behavior in extreme limits, including both very small and very large length scales, and the noise-free limit. We show how such features arise in ML models of density functionals.
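A small experiment in the same spirit, with an illustrative function and grid rather than the paper's: kernel ridge regression is fit to a one-variable function on evenly spaced (non-random) points, and the RBF length scale is scanned to expose the extreme-limit behavior, here in the near noise-free limit (tiny `alpha`).

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

x_train = np.linspace(0, 1, 20)[:, None]      # even spacing: no random sampling
x_test = np.linspace(0.01, 0.99, 97)[:, None]
f = lambda x: np.sin(2 * np.pi * x).ravel()   # a simple function of one variable

for scale in (1e-3, 1e-2, 1e-1, 1.0, 10.0):
    krr = KernelRidge(kernel="rbf", gamma=0.5 / scale**2, alpha=1e-12)
    krr.fit(x_train, f(x_train))
    mae = np.abs(krr.predict(x_test) - f(x_test)).mean()
    print(f"length scale {scale:8.3f}   MAE {mae:.2e}")
# Very small scales only interpolate near the training points; very large
# scales oversmooth. The error is best at an intermediate length scale.
```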
Empirical fitting of parameters in approximate density functionals is common. Such fits conflate errors in the self-consistent density with errors in the energy functional, but density-corrected DFT (DC-DFT) separates these two. We illustrate with catastrophic failures of a toy functional applied to $H_2^+$ at varying bond lengths, where the standard fitting procedure misses the exact functional; Grimme's D3 fit to noncovalent interactions, which can be contaminated by large density errors such as in the WATER27 and B30 datasets; and double-hybrids trained on self-consistent densities, which can perform poorly on systems with density-driven errors. In these cases, more accurate results are found at no additional cost, by using Hartree-Fock (HF) densities instead of self-consistent densities. For binding energies of small water clusters, errors are greatly reduced. Range-separated hybrids with 100% HF at large distances suffer much less from this effect.
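Evaluating a functional on the HF density instead of its own self-consistent density is cheap to try. A hedged sketch using PySCF (assumed available; the molecule, basis, and functional are illustrative choices, and one-electron cases like $H_2^+$ would need an open-shell reference):

```python
from pyscf import gto, scf, dft

mol = gto.M(atom="O 0 0 0; H 0 0.76 0.59; H 0 -0.76 0.59", basis="def2-svp")

mf_hf = scf.RHF(mol).run()        # Hartree-Fock, kept only for its density
dm_hf = mf_hf.make_rdm1()

ks = dft.RKS(mol)
ks.xc = "pbe"
e_sc = ks.kernel()                # conventional self-consistent PBE energy
e_dc = ks.energy_tot(dm=dm_hf)    # DC-DFT: PBE functional on the HF density

print("SC-PBE:", e_sc, "  PBE@HF:", e_dc)
```

On systems with density-driven errors, the abstract reports that the second number is the more accurate one, at no extra cost beyond the HF run.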
