
Predicting ground state configuration of energy landscape ensemble using graph neural network

Added by Seong Ho Pahng
Publication date: 2020
Fields: Physics
Language: English





Many scientific problems seek to find the ground state in a rugged energy landscape, a task that becomes prohibitively difficult for large systems. Within a particular class of problems, however, the short-range correlations within energy minima might be independent of system size. Can these correlations be inferred from small problems with known ground states to accelerate the search for the ground states of larger problems? Here, we demonstrate the strategy on Ising spin glasses, where the interaction matrices are drawn from protein contact maps. We use a graph neural network to learn the mapping from an interaction matrix $J$ to a ground state configuration, yielding guesses for the set of most probable configurations. Given these guesses, we show that ground state configurations can be found much faster than with vanilla simulated annealing. For large problems, a model trained on small $J$ matrices predicts configurations whose energies are much lower than those obtained by simulated annealing, indicating the size generalizability of the strategy.
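The core of the strategy can be illustrated with a short sketch. The following is a minimal message-passing network (the architecture, the name SpinGNN, and all hyperparameters are illustrative assumptions, not the authors' model) that maps an interaction matrix $J$ to per-spin probabilities, from which candidate configurations can be sampled and then refined with short simulated-annealing runs.

```python
# Minimal sketch (not the authors' architecture): a message-passing network
# that maps an interaction matrix J to per-spin probabilities of being +1.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class SpinGNN(nn.Module):
    def __init__(self, hidden_dim=32, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(1, hidden_dim)           # initial node features
        self.layers = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, hidden_dim) for _ in range(n_layers)]
        )
        self.readout = nn.Linear(hidden_dim, 1)         # per-spin logit

    def forward(self, J):
        # J: (n, n) symmetric interaction matrix; rows of J define the graph.
        h = self.embed(J.abs().sum(dim=1, keepdim=True))   # degree-like feature
        for layer in self.layers:
            msg = J @ h                                    # signed neighbor aggregation
            h = torch.relu(layer(torch.cat([h, msg], dim=1)))
        return torch.sigmoid(self.readout(h)).squeeze(-1)  # P(s_i = +1)

# Usage: sample candidate configurations from the predicted marginals, then
# refine the best candidates with (much shorter) simulated annealing.
J = torch.randn(16, 16); J = (J + J.T) / 2
probs = SpinGNN()(J)
guess = torch.where(torch.rand_like(probs) < probs,
                    torch.ones_like(probs), -torch.ones_like(probs))
energy = -0.5 * guess @ J @ guess   # one common Ising sign convention
```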



Related research

Strongly interacting quantum systems described by non-stoquastic Hamiltonians exhibit rich low-temperature physics. Yet, their study poses a formidable challenge, even for state-of-the-art numerical techniques. Here, we investigate systematically the performance of a class of universal variational wave-functions based on artificial neural networks, by considering the frustrated spin-$1/2$ $J_1-J_2$ Heisenberg model on the square lattice. Focusing on neural network architectures without physics-informed input, we argue in favor of using an ansatz consisting of two decoupled real-valued networks, one for the amplitude and the other for the phase of the variational wavefunction. By introducing concrete mitigation strategies against inherent numerical instabilities in the stochastic reconfiguration algorithm we obtain a variational energy comparable to that reported recently with neural networks that incorporate knowledge about the physical system. Through a detailed analysis of the individual components of the algorithm, we conclude that the rugged nature of the energy landscape constitutes the major obstacle in finding a satisfactory approximation to the ground state wavefunction, and prevents learning the correct sign structure. In particular, we show that in the present setup the neural network expressivity and Monte Carlo sampling are not primary limiting factors.
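A minimal sketch of the decoupled-network ansatz described above (layer sizes and names are illustrative assumptions, not the paper's exact architecture): two real-valued networks produce the log-amplitude and the phase, which combine into a complex $\log\psi$.

```python
# Sketch of the two-network ansatz: one real-valued network for the
# log-amplitude and one for the phase of the variational wavefunction.
# Architectures and sizes are assumed for illustration.
import torch
import torch.nn as nn

def mlp(n_spins, hidden=64):
    return nn.Sequential(nn.Linear(n_spins, hidden), nn.Tanh(),
                         nn.Linear(hidden, 1))

class TwoNetworkAnsatz(nn.Module):
    def __init__(self, n_spins):
        super().__init__()
        self.log_amp = mlp(n_spins)   # real-valued: log|psi(sigma)|
        self.phase = mlp(n_spins)     # real-valued: arg psi(sigma)

    def log_psi(self, sigma):
        # sigma: (batch, n_spins) of +/-1; returns complex log psi(sigma)
        return torch.complex(self.log_amp(sigma).squeeze(-1),
                             self.phase(sigma).squeeze(-1))

sigma = torch.randint(0, 2, (4, 36)).float() * 2 - 1   # 6x6 lattice, flattened
print(TwoNetworkAnsatz(36).log_psi(sigma))
```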
146 - Remmy Zen, Long My, Ryan Tan 2019
Neural-network quantum states have shown great potential for the study of many-body quantum systems. In statistical machine learning, transfer learning designates protocols reusing features of a machine learning model trained for a problem to solve a possibly related but different problem. We propose to evaluate the potential of transfer learning to improve the scalability of neural-network quantum states. We devise and present physics-inspired transfer learning protocols, reusing the features of neural-network quantum states learned for the computation of the ground state of a small system for systems of larger sizes. We implement different protocols for restricted Boltzmann machines on general-purpose graphics processing units. This implementation alone yields a speedup over existing implementations on multi-core and distributed central processing units in comparable settings. We empirically and comparatively evaluate the efficiency (time) and effectiveness (accuracy) of different transfer learning protocols as we scale the system size in different models and different quantum phases. Namely, we consider both the transverse field Ising and Heisenberg XXZ models in one dimension, and also in two dimensions for the latter, with system sizes up to 128 and 8 x 8 spins. We empirically demonstrate that some of the transfer learning protocols that we have devised can be far more effective and efficient than starting from neural-network quantum states with randomly initialized parameters.
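One plausible transfer protocol, shown below as an illustration (the paper devises several physics-inspired variants; this tiling scheme is only an assumed example): reuse the trained weight matrix of a small-system restricted Boltzmann machine as blocks in the weight matrix of a twice-larger system before fine-tuning.

```python
# Illustrative transfer protocol (an assumed example, not necessarily the
# paper's): tile the trained weights of an RBM for N spins to initialize an
# RBM for 2N spins, exploiting translation invariance of the model.
import numpy as np

def tile_rbm(W_small, b_small, c_small):
    """W_small: (n_visible, n_hidden) weights; b, c: visible/hidden biases."""
    n_v, n_h = W_small.shape
    W_large = np.zeros((2 * n_v, 2 * n_h))
    W_large[:n_v, :n_h] = W_small          # learned block for the first half
    W_large[n_v:, n_h:] = W_small          # second copy for the new half
    return W_large, np.tile(b_small, 2), np.tile(c_small, 2)

# Usage: train the small RBM to convergence, transfer, then fine-tune the
# large RBM starting from these parameters instead of a random initialization.
W, b, c = np.random.randn(8, 16), np.zeros(8), np.zeros(16)
W2, b2, c2 = tile_rbm(W, b, c)
assert W2.shape == (16, 32)
```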
493 - Zun Wang, Chong Wang, Sibo Zhao 2021
Molecular dynamics is a powerful simulation tool to explore material properties. Most realistic material systems are too large to be simulated with first-principles molecular dynamics. Classical molecular dynamics has lower computational cost but requires accurate force fields to achieve chemical accuracy. In this work, we develop a symmetry-adapted graph neural network framework, named molecular dynamics graph neural networks (MDGNN), to construct force fields automatically for molecular dynamics simulations of both molecules and crystals. This architecture consistently preserves translation, rotation and permutation invariance in the simulations. We propose a new feature engineering method including higher order contributions and show that MDGNN accurately reproduces the results of both classical and first-principles molecular dynamics. We also demonstrate that force fields constructed by the model have good transferability. Therefore, MDGNN provides an efficient and promising option for molecular dynamics simulations of large scale systems with high accuracy.
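The general pattern such force-field models rely on can be sketched as follows (a generic illustration, not the MDGNN code): predict a scalar energy from atomic positions and obtain forces as its negative gradient, which keeps forces consistent with the learned energy and makes invariance easy to enforce through distance-based features.

```python
# Sketch of the standard pattern behind learned force fields (not MDGNN):
# predict a scalar energy E(positions), then obtain forces F = -dE/dpos
# via autograd, so forces stay consistent with the learned energy surface.
import torch

def toy_energy(pos):
    # Placeholder for the graph network: any function of interatomic
    # distances is automatically translation- and rotation-invariant.
    n = pos.shape[0]
    i, j = torch.triu_indices(n, n, offset=1)
    d = (pos[i] - pos[j]).norm(dim=1)              # pairwise distances
    return (d.pow(-12) - d.pow(-6)).sum()          # toy Lennard-Jones-like term

pos = (torch.rand(10, 3) * 5.0).requires_grad_()   # random atomic positions
energy = toy_energy(pos)
forces = -torch.autograd.grad(energy, pos)[0]      # (n, 3) forces for MD
```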
175 - Remmy Zen, Long My, Ryan Tan 2020
Finding the precise location of quantum critical points is of particular importance to characterise quantum many-body systems at zero temperature. However, quantum many-body systems are notoriously hard to study because the dimension of their Hilbert space increases exponentially with their size. Recently, machine learning tools known as neural-network quantum states have been shown to effectively and efficiently simulate quantum many-body systems. We present an approach to finding the quantum critical points of the quantum Ising model using neural-network quantum states, analytically constructed innate restricted Boltzmann machines, transfer learning and unsupervised learning. We validate the approach and evaluate its efficiency and effectiveness in comparison with other traditional approaches.
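As a rough, generic stand-in for the unsupervised step (not necessarily the paper's method), one widely used trick is to scan the control parameter and track a simple unsupervised statistic of sampled configurations, such as the leading principal-component eigenvalue, which typically changes most rapidly near the transition. The sampler below is a toy placeholder for configurations drawn from trained neural-network quantum states.

```python
# Generic stand-in for the unsupervised step (not necessarily the paper's
# method): scan the transverse field h and track the leading PCA eigenvalue
# of sampled configurations, which changes sharply near the critical point.
import numpy as np

def leading_pca_eigenvalue(samples):
    # samples: (n_samples, n_spins) of +/-1 at one value of h
    X = samples - samples.mean(axis=0)
    cov = X.T @ X / len(X)
    return np.linalg.eigvalsh(cov)[-1]     # largest eigenvalue

rng = np.random.default_rng(0)
for h in np.linspace(0.2, 1.8, 9):
    # Toy stand-in sampler, not physics: in practice these configurations
    # would be drawn from the trained neural-network quantum state at h.
    p_up = 1 / (1 + np.exp(-(1 - h)))
    samples = np.where(rng.random((500, 32)) < p_up, 1, -1)
    print(f"h = {h:.2f}, leading PCA eigenvalue = "
          f"{leading_pca_eigenvalue(samples):.3f}")
```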
We propose a minimal generalization of the celebrated Markov-Chain Monte Carlo algorithm which allows for an arbitrary number of configurations to be visited at every Monte Carlo step. This is advantageous when a parallel computing machine is available, or when many biased configurations can be evaluated at little additional computational cost. As an example of the former case, we report a significant reduction of the thermalization time for the paradigmatic Sherrington-Kirkpatrick spin-glass model. For the latter case, we show that, by leveraging the exponential number of biased configurations automatically computed by Diagrammatic Monte Carlo, we can speed up computations in the Fermi-Hubbard model by two orders of magnitude.
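One concrete realization of a many-configurations-per-step kernel is the multiple-try Metropolis scheme sketched below (an assumed, standard variant; the paper's generalization may weight candidates differently): draw several symmetric proposals, select one by Boltzmann weight, and accept it with a ratio of weight sums that preserves detailed balance.

```python
# Sketch of a multiple-try Metropolis step (Liu-Liang-Wong style) as one
# concrete way to visit many configurations per Monte Carlo step; this is
# an assumed standard variant, not the paper's exact generalization.
import numpy as np
from scipy.special import logsumexp

def energy(s, J):
    return -0.5 * s @ J @ s                      # Ising energy, one convention

def flip_one(s, rng):
    t = s.copy(); t[rng.integers(len(s))] *= -1  # symmetric single-spin flip
    return t

def mtm_step(s, J, beta, K, rng):
    trials = [flip_one(s, rng) for _ in range(K)]          # K iid proposals
    logw = np.array([-beta * energy(t, J) for t in trials])
    j = rng.choice(K, p=np.exp(logw - logsumexp(logw)))    # pick by weight
    y = trials[j]
    refs = [flip_one(y, rng) for _ in range(K - 1)] + [s]  # reference set incl. s
    logw_ref = np.array([-beta * energy(t, J) for t in refs])
    if np.log(rng.random()) < logsumexp(logw) - logsumexp(logw_ref):
        return y                                           # accept
    return s                                               # reject

rng = np.random.default_rng(1)
n = 32; J = rng.standard_normal((n, n)); J = (J + J.T) / 2
s = rng.choice([-1, 1], n).astype(float)
for _ in range(1000):
    s = mtm_step(s, J, beta=1.0, K=8, rng=rng)
```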