
Solving the electronic Schrödinger equation for multiple nuclear geometries with weight-sharing deep neural networks

Posted by Rafael Reisenhofer
Publication date: 2021
Research language: English





Accurate numerical solutions for the Schrödinger equation are of utmost importance in quantum chemistry. However, the computational cost of current high-accuracy methods scales poorly with the number of interacting particles. Combining Monte Carlo methods with unsupervised training of neural networks has recently been proposed as a promising approach to overcome the curse of dimensionality in this setting and to obtain accurate wavefunctions for individual molecules at a moderately scaling computational cost. These methods currently do not exploit the regularity exhibited by wavefunctions with respect to their molecular geometries. Inspired by recent successful applications of deep transfer learning in machine translation and computer vision tasks, we attempt to leverage this regularity by introducing a weight-sharing constraint when optimizing neural network-based models for different molecular geometries. That is, we restrict the optimization process such that up to 95 percent of weights in a neural network model are in fact equal across varying molecular geometries. We find that this technique can accelerate optimization when considering sets of nuclear geometries of the same molecule by an order of magnitude and that it opens a promising route towards pre-trained neural network wavefunctions that yield high accuracy even across different molecules.
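As a rough illustration of the weight-sharing constraint described above, the following JAX sketch (not the authors' code; all names and the surrogate loss are hypothetical stand-ins for the actual variational Monte Carlo objective) reuses one set of shared weights across several geometries while keeping a small per-geometry head, so that gradients from every geometry update the shared parameters:

```python
# Hypothetical sketch of the weight-sharing idea: one set of "shared" network
# weights is reused across all nuclear geometries, while a small
# geometry-specific head (a few percent of the weights) is optimized per geometry.
import jax
import jax.numpy as jnp

def init_params(key, n_geometries, d_in=3, d_hidden=32):
    k1, k2, k3 = jax.random.split(key, 3)
    shared = {
        "w1": jax.random.normal(k1, (d_in, d_hidden)) / jnp.sqrt(d_in),
        "w2": jax.random.normal(k2, (d_hidden, d_hidden)) / jnp.sqrt(d_hidden),
    }
    # Small per-geometry output heads; the bulk of the weights stays shared.
    heads = jax.random.normal(k3, (n_geometries, d_hidden)) / jnp.sqrt(d_hidden)
    return shared, heads

def log_psi(shared, head, x):
    h = jnp.tanh(x @ shared["w1"])
    h = jnp.tanh(h @ shared["w2"])
    return h @ head  # scalar log-amplitude for one electron configuration

def loss(params, xs):
    shared, heads = params
    # Surrogate loss standing in for the variational energy of each geometry.
    per_geom = jax.vmap(lambda head, x: log_psi(shared, head, x) ** 2)(heads, xs)
    return jnp.mean(per_geom)

key = jax.random.PRNGKey(0)
params = init_params(key, n_geometries=4)
xs = jax.random.normal(key, (4, 3))  # one sample configuration per geometry
grads = jax.grad(loss)(params, xs)   # shared weights receive gradients from every geometry
```

Because the shared block appears in every geometry's model, each optimization step pools information from all geometries, which is the mechanism behind the reported order-of-magnitude speedup.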


Read also

There has been a wave of interest in applying machine learning to study dynamical systems. In particular, neural networks have been applied to solve the equations of motion and thereby track the evolution of a system. In contrast to other applications of neural networks and machine learning, dynamical systems possess invariants such as energy, momentum, and angular momentum, depending on their underlying symmetries. Traditional numerical integration methods sometimes violate these conservation laws, propagating errors in time and ultimately reducing the predictability of the method. We present a data-free Hamiltonian neural network that solves the differential equations that govern dynamical systems. This is an equation-driven unsupervised learning method where the optimization process of the network depends solely on the predicted functions, without using any ground-truth data. This unsupervised model learns solutions that satisfy Hamilton's equations identically, up to an arbitrarily small error, and therefore conserve the Hamiltonian invariants. Once the network is optimized, the proposed architecture can be considered a symplectic unit due to the introduction of an efficient parametric form of solutions. In addition, the choice of an appropriate activation function drastically improves the predictability of the network. An error analysis is derived and shows that the numerical errors depend on the overall network performance. The symplectic architecture is then employed to solve the equations for the nonlinear oscillator and the chaotic Hénon-Heiles dynamical system. In both systems, a symplectic Euler integrator requires two orders of magnitude more evaluation points than the Hamiltonian network to achieve the same order of numerical error in the predicted phase-space trajectories.
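A minimal sketch of the data-free training idea, under toy assumptions (harmonic Hamiltonian, illustrative parameter shapes; none of this is the authors' code): the network parametrizes (q(t), p(t)) so that the initial conditions hold exactly, and the loss is the residual of Hamilton's equations, with no ground-truth trajectories involved:

```python
# Hypothetical sketch of an equation-driven (data-free) Hamiltonian neural network:
# the network outputs a trajectory (q(t), p(t)) and the loss penalizes the residual
# of Hamilton's equations, so no ground-truth trajectory data is needed.
import jax
import jax.numpy as jnp

def hamiltonian(q, p):
    return 0.5 * p**2 + 0.5 * q**2  # toy example: harmonic oscillator

def qp(params, t, q0=1.0, p0=0.0):
    w1, b1, w2 = params
    h = jnp.tanh(t * w1 + b1)
    out = h @ w2
    # Parametric form that satisfies the initial conditions exactly at t = 0.
    return q0 + t * out[0], p0 + t * out[1]

def residual(params, t):
    q_fn = lambda s: qp(params, s)[0]
    p_fn = lambda s: qp(params, s)[1]
    dq, dp = jax.grad(q_fn)(t), jax.grad(p_fn)(t)
    q, p = qp(params, t)
    dHdq = jax.grad(hamiltonian, argnums=0)(q, p)
    dHdp = jax.grad(hamiltonian, argnums=1)(q, p)
    # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
    return (dq - dHdp) ** 2 + (dp + dHdq) ** 2

def loss(params, ts):
    return jnp.mean(jax.vmap(lambda t: residual(params, t))(ts))

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (jax.random.normal(k1, (16,)),
          jax.random.normal(k2, (16,)),
          jax.random.normal(k3, (16, 2)))
ts = jnp.linspace(0.0, 2.0, 32)
grads = jax.grad(loss)(params, ts)  # optimize with any gradient-based method
```

Driving the residual to zero at the collocation points is what the abstract means by solutions that satisfy Hamilton's equations "up to an arbitrarily small error".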
The Fermionic Neural Network (FermiNet) is a recently developed neural network architecture that can be used as a wavefunction Ansatz for many-electron systems and has already demonstrated high accuracy on small systems. Here we present several improvements to the FermiNet that allow us to set new records for speed and accuracy on challenging systems. We find that increasing the size of the network is sufficient to reach chemical accuracy on atoms as large as argon. Through a combination of implementing the FermiNet in JAX and simplifying several parts of the network, we are able to reduce the number of GPU hours needed to train the FermiNet on large systems by an order of magnitude. This enables us to run the FermiNet on the challenging transition of bicyclobutane to butadiene and to compare against the PauliNet on the automerization of cyclobutadiene; we achieve results near the state of the art for both.
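The FermiNet itself is not reproduced here; the sketch below only illustrates the variational Monte Carlo kernel that such JAX implementations rest on, computing the local energy E_L = (Hψ)/ψ from log|ψ| by automatic differentiation, with a trivial stand-in Ansatz in place of the real permutation-equivariant network:

```python
# Hypothetical VMC kernel sketch (not the FermiNet): the local energy is
# obtained from log|psi| using the identity
# (laplacian psi)/psi = laplacian(log psi) + |grad(log psi)|^2.
import jax
import jax.numpy as jnp

def log_psi(params, x):
    # Stand-in Ansatz; FermiNet would put a permutation-equivariant network here.
    w = params
    return -jnp.sum((x @ w) ** 2)

def local_energy(params, x, potential):
    grad_log = jax.grad(log_psi, argnums=1)(params, x)
    lap_log = jnp.trace(jax.hessian(log_psi, argnums=1)(params, x))
    kinetic = -0.5 * (lap_log + jnp.sum(grad_log**2))
    return kinetic + potential(x)

harmonic = lambda x: 0.5 * jnp.sum(x**2)  # toy potential for the sketch
key, subkey = jax.random.split(jax.random.PRNGKey(0))
params = jax.random.normal(key, (3, 3))
x = jax.random.normal(subkey, (3,))  # one sampled configuration
print(local_energy(params, x, harmonic))
```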
Coarse graining enables the investigation of molecular dynamics for larger systems and at longer timescales than is possible at atomic resolution. However, a coarse-grained model must be formulated such that the conclusions we draw from it are consistent with the conclusions we would draw from a model at a finer level of detail. It has been proven that a force-matching scheme defines a thermodynamically consistent coarse-grained model for an atomistic system in the variational limit. Wang et al. [ACS Cent. Sci. 5, 755 (2019)] demonstrated that the existence of such a variational limit enables the use of a supervised machine learning framework to generate a coarse-grained force field, which can then be used for simulation in the coarse-grained space. Their framework, however, requires the manual input of molecular features upon which to machine-learn the force field. In the present contribution, we build upon the advance of Wang et al. and introduce a hybrid architecture for the machine learning of coarse-grained force fields that learns its own features via a subnetwork that leverages continuous-filter convolutions on a graph neural network architecture. We demonstrate that this framework succeeds at reproducing the thermodynamics of small biomolecular systems. Since the learned molecular representations are inherently transferable, the architecture presented here sets the stage for the development of machine-learned coarse-grained force fields that are transferable across molecular systems.
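A hedged sketch of the force-matching objective underlying this line of work (the toy energy model below stands in for the graph-network feature learner; it is not the authors' architecture): a network predicts a coarse-grained energy, the CG forces follow as its negative gradient, and the loss matches them to mapped atomistic reference forces:

```python
# Hypothetical force-matching sketch: predicted CG forces = -grad of a learned
# CG energy, trained against (mapped) atomistic reference forces.
import jax
import jax.numpy as jnp

def cg_energy(params, coords):
    # Stand-in for the graph-network energy model; coords: (n_beads, 3).
    w = params
    feats = jnp.sum(coords**2, axis=-1)  # trivially invariant toy features
    return jnp.sum(jnp.tanh(feats) * w)

def predicted_forces(params, coords):
    return -jax.grad(cg_energy, argnums=1)(params, coords)

def force_matching_loss(params, coords_batch, ref_forces_batch):
    pred = jax.vmap(lambda c: predicted_forces(params, c))(coords_batch)
    return jnp.mean((pred - ref_forces_batch) ** 2)

key, k1, k2 = jax.random.split(jax.random.PRNGKey(0), 3)
n_beads = 5
params = jax.random.normal(key, (n_beads,))
coords = jax.random.normal(k1, (8, n_beads, 3))      # 8 trajectory frames
ref_forces = jax.random.normal(k2, (8, n_beads, 3))  # mapped atomistic forces
grads = jax.grad(force_matching_loss)(params, coords, ref_forces)
```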
Suchuan Dong, Naxian Ni (2020)
We present a simple and effective method for representing periodic functions and enforcing exactly the periodic boundary conditions for solving differential equations with deep neural networks (DNN). The method stems from some simple properties of function compositions involving periodic functions. It essentially composes a DNN-represented arbitrary function with a set of independent periodic functions with adjustable (training) parameters. We distinguish two types of periodic conditions: those imposing the periodicity requirement on the function and all its derivatives (to infinite order), and those imposing periodicity on the function and its derivatives up to a finite order $k$ ($k \geqslant 0$). The former will be referred to as $C^{\infty}$ periodic conditions, and the latter $C^{k}$ periodic conditions. We define operations that constitute a $C^{\infty}$ periodic layer and a $C^{k}$ periodic layer (for any $k \geqslant 0$). A deep neural network with a $C^{\infty}$ (or $C^{k}$) periodic layer incorporated as the second layer automatically and exactly satisfies the $C^{\infty}$ (or $C^{k}$) periodic conditions. We present extensive numerical experiments on ordinary and partial differential equations with $C^{\infty}$ and $C^{k}$ periodic boundary conditions to verify and demonstrate that the proposed method indeed enforces the periodicity, to machine accuracy, for the DNN solution and its derivatives.
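The composition trick is concrete enough to sketch. The toy example below (illustrative parameter names, not the authors' code) inserts a layer of smooth periodic functions of the input as the second layer, after which the composed network, and every derivative of it, is exactly $L$-periodic:

```python
# Minimal sketch of a C-infinity periodic layer: each unit is a smooth
# L-periodic function of x, so any network composed on top of it is
# exactly L-periodic to all derivative orders.
import jax
import jax.numpy as jnp

def c_inf_periodic_layer(x, a, phi, L=2.0 * jnp.pi):
    return a * jnp.cos(2.0 * jnp.pi * x / L + phi)

def network(params, x):
    a, phi, w, b, v = params
    u = c_inf_periodic_layer(x, a, phi)   # periodic "second layer"
    h = jnp.tanh(u @ w + b)               # arbitrary downstream layers
    return h @ v

ks = jax.random.split(jax.random.PRNGKey(0), 5)
params = (
    jax.random.normal(ks[0], (8,)),      # amplitudes a
    jax.random.normal(ks[1], (8,)),      # phases phi
    jax.random.normal(ks[2], (8, 16)),   # w
    jax.random.normal(ks[3], (16,)),     # b
    jax.random.normal(ks[4], (16,)),     # v
)
x = 0.3
# Periodicity holds exactly, for the value and for every derivative:
print(network(params, x), network(params, x + 2.0 * jnp.pi))
df = jax.grad(network, argnums=1)
print(df(params, x), df(params, x + 2.0 * jnp.pi))
```

The $C^{k}$ variant would instead use periodic functions with only $k$ matching derivatives at the period boundary; the construction above corresponds to the $C^{\infty}$ case.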
Deep quantum neural networks may provide a promising way to achieve a quantum learning advantage with noisy intermediate-scale quantum devices. Here, we use deep quantum feedforward neural networks capable of universal quantum computation to represent the mixed states of open quantum many-body systems and introduce a variational method with quantum derivatives to solve the master equation for dynamics and stationary states. Owing to the special structure of the quantum networks, this approach enjoys a number of notable features, including the absence of barren plateaus, an efficient quantum analogue of the backpropagation algorithm, resource-saving reuse of hidden qubits, general applicability independent of dimensionality and entanglement properties, as well as the convenient implementation of symmetries. As proof-of-principle demonstrations, we apply this approach to both the one-dimensional transverse-field Ising and two-dimensional $J_1$-$J_2$ models with dissipation, and show that it can efficiently capture their dynamics and stationary states with a desired accuracy.
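For reference, the master equation targeted by such variational methods is, in the standard Lindblad form (conventional notation; the specific Hamiltonians and jump operators used in the paper are not spelled out in the abstract):

```latex
\frac{d\rho}{dt} = -i\,[H, \rho]
  + \sum_k \left( L_k \rho L_k^\dagger
  - \tfrac{1}{2} \left\{ L_k^\dagger L_k, \rho \right\} \right)
```

Here $\rho$ is the density matrix of the open system, $H$ its Hamiltonian, and the $L_k$ are jump operators encoding the dissipation; stationary states are those with $d\rho/dt = 0$.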
