
Role of stochastic noise and generalization error in the time propagation of neural-network quantum states

Posted by Damian Hofmann
Publication date: 2021
Research field: Physics
Paper language: English





Neural-network quantum states (NQS) have been shown to be a suitable variational ansatz to simulate out-of-equilibrium dynamics in two-dimensional systems using time-dependent variational Monte Carlo (t-VMC). In particular, stable and accurate time propagation over long time scales has been observed in the square-lattice Heisenberg model using the restricted Boltzmann machine architecture. However, achieving similar performance in other systems has proven to be more challenging. In this article, we focus on the two-leg Heisenberg ladder driven out of equilibrium by a pulsed excitation as a benchmark system. We demonstrate that unmitigated noise is strongly amplified by the nonlinear equations of motion for the network parameters, which by itself is sufficient to cause numerical instabilities in the time evolution. As a consequence, the achievable accuracy of the simulated dynamics results from the interplay between network expressiveness and the regularization required to remedy these instabilities. Inspired by machine learning practice, we propose a validation-set-based diagnostic tool to help determine the optimal regularization hyperparameters for t-VMC-based propagation schemes. For our benchmark, we show that stable and accurate time propagation can be achieved in regimes of sufficiently regularized variational dynamics.
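To make concrete how regularization enters the t-VMC equations of motion, the following is a minimal sketch of a single SVD-regularized update step, assuming Monte Carlo estimates of the quantum geometric tensor S and the force vector F are already available. It is not the authors' implementation; the function name, the cutoff parameter, and the synthetic inputs are illustrative. The `svd_cutoff` stands in for the kind of regularization hyperparameter that the proposed validation-set diagnostic is meant to help select.

```python
# Minimal sketch of one regularized t-VMC step: solve S(t) theta_dot = -i F(t)
# with an SVD pseudoinverse that discards small singular values, a common way to
# stabilize the noisy, ill-conditioned Monte Carlo estimate of S (assumed inputs).
import numpy as np

def regularized_tvmc_step(S, F, dt, svd_cutoff=1e-8):
    """Advance the variational parameters by dt.

    S : (P, P) complex array, estimated quantum geometric tensor
    F : (P,)   complex array, estimated energy gradient (forces)
    """
    U, sigma, Vh = np.linalg.svd(S)
    # Zero out directions whose singular values fall below the cutoff; these are
    # the directions most strongly contaminated by sampling noise.
    inv_sigma = np.where(sigma > svd_cutoff * sigma.max(), 1.0 / sigma, 0.0)
    theta_dot = Vh.conj().T @ (inv_sigma * (U.conj().T @ (-1j * F)))
    return dt * theta_dot

# Toy usage with synthetic data for P = 4 parameters.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S = A.conj().T @ A                      # Hermitian positive semi-definite, like a QGT estimate
F = rng.normal(size=4) + 1j * rng.normal(size=4)
print(regularized_tvmc_step(S, F, dt=1e-3))
```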


Read also

Neural networks have been used as variational wave functions for quantum many-particle problems. It has been shown that the correct sign structure is crucial to obtain highly accurate ground-state energies. In this work, we propose a hybrid wave function combining a convolutional neural network (CNN) and projected entangled pair states (PEPS), in which the sign structures are determined by the PEPS and the amplitudes of the wave functions are provided by the CNN. We benchmark the ansatz on the highly frustrated spin-1/2 $J_1$-$J_2$ model. We show that the achieved ground-state energies are competitive with state-of-the-art results.
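As a schematic illustration of the sign/amplitude split described above, the sketch below combines a stand-in sign function with a small convolutional amplitude network. The Marshall sign rule is used purely as a placeholder for the PEPS sign structure, and the toy single-kernel CNN is an assumption, not the architecture used in the paper.

```python
# Hybrid ansatz sketch: psi(sigma) = sign(sigma) * amplitude(sigma), with a
# placeholder sign rule standing in for the PEPS part (illustrative assumptions).
import numpy as np

def placeholder_sign(spins):
    # Marshall sign rule on a bipartite square lattice, used here only as a stand-in
    # for the sign structure that the PEPS would provide.
    n_up_a = np.sum(spins[::2, ::2] == 1) + np.sum(spins[1::2, 1::2] == 1)
    return (-1.0) ** n_up_a

def cnn_amplitude(spins, kernel):
    # Toy single-channel periodic convolution followed by a softplus and a sum,
    # returning a strictly positive amplitude.
    L, k = spins.shape[0], kernel.shape[0]
    acc = 0.0
    for i in range(L):
        for j in range(L):
            patch = np.array([[spins[(i + a) % L, (j + b) % L] for b in range(k)]
                              for a in range(k)])
            acc += np.logaddexp(0.0, np.sum(kernel * patch))   # softplus activation
    return np.exp(acc / L**2)

def hybrid_psi(spins, kernel):
    return placeholder_sign(spins) * cnn_amplitude(spins, kernel)

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(4, 4))
kernel = 0.1 * rng.normal(size=(3, 3))
print(hybrid_psi(spins, kernel))
```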
Variational methods have proven to be excellent tools to approximate ground states of complex many-body Hamiltonians. Generic tools like neural networks are extremely powerful, but their parameters are not necessarily physically motivated. Thus, an efficient parametrization of the wave function can become challenging. In this letter we introduce a neural-network-based variational ansatz that retains the flexibility of these generic methods while allowing for tunability with respect to the relevant correlations governing the physics of the system. We illustrate the success of this approach on topological, long-range correlated, and frustrated models. Additionally, we introduce compatible variational optimization methods for the exploration of low-lying excited states without symmetries that preserve the interpretability of the ansatz.
Remmy Zen, Long My, Ryan Tan (2019)
Neural-network quantum states have shown great potential for the study of many-body quantum systems. In statistical machine learning, transfer learning designates protocols reusing features of a machine learning model trained for a problem to solve a possibly related but different problem. We propose to evaluate the potential of transfer learning to improve the scalability of neural-network quantum states. We devise and present physics-inspired transfer learning protocols, reusing the features of neural-network quantum states learned for the computation of the ground state of a small system for systems of larger sizes. We implement different protocols for restricted Boltzmann machines on general-purpose graphics processing units. This implementation alone yields a speedup over existing implementations on multi-core and distributed central processing units in comparable settings. We empirically and comparatively evaluate the efficiency (time) and effectiveness (accuracy) of different transfer learning protocols as we scale the system size in different models and different quantum phases. Namely, we consider both the transverse field Ising and Heisenberg XXZ models in one dimension, and also in two dimensions for the latter, with system sizes up to 128 and 8 x 8 spins. We empirically demonstrate that some of the transfer learning protocols that we have devised can be far more effective and efficient than starting from neural-network quantum states with randomly initialized parameters.
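One way to picture such a protocol, for a translation-invariant chain, is to reuse the parameters of an RBM trained on a small system as a block-structured initialization for a system twice as large. The tiling below is an illustrative assumption, not one of the specific protocols of the paper.

```python
# Sketch of a physics-inspired transfer-learning initialization: tile the weights of
# an RBM trained on N spins into a block-diagonal RBM for 2N spins (assumed scheme).
import numpy as np

def tile_rbm_parameters(W_small, b_small, c_small):
    """W_small: (N, M) weights, b_small: (N,) visible biases, c_small: (M,) hidden biases."""
    N, M = W_small.shape
    W_large = np.zeros((2 * N, 2 * M))
    W_large[:N, :M] = W_small        # reuse the learned couplings for the original sites
    W_large[N:, M:] = W_small        # ... and copy them for the newly added half of the chain
    return W_large, np.tile(b_small, 2), np.tile(c_small, 2)

rng = np.random.default_rng(2)
W, b, c = rng.normal(size=(8, 16)), rng.normal(size=8), rng.normal(size=16)
W2, b2, c2 = tile_rbm_parameters(W, b, c)
print(W2.shape, b2.shape, c2.shape)   # (16, 32) (16,) (32,)
```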
We initiate the study of neural-network quantum state algorithms for analyzing continuous-variable lattice quantum systems in first quantization. A simple family of continuous-variable trial wavefunctions is introduced which naturally generalizes the restricted Boltzmann machine (RBM) wavefunction introduced for analyzing quantum spin systems. By virtue of its simplicity, the same variational Monte Carlo training algorithms that have been developed for ground-state determination and time evolution of spin systems have natural analogues in the continuum. We offer a proof-of-principle demonstration in the context of ground-state determination of a stoquastic quantum rotor Hamiltonian. Results are compared against those obtained from scalable eigensolvers based on partial differential equations (PDEs). This study serves as a benchmark against which future investigations of continuous-variable neural quantum states can be compared, and points to the need to consider deep network architectures and more sophisticated training algorithms.
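As an illustration of how the binary-visible RBM can be generalized to continuous degrees of freedom, the log-wavefunction below replaces the spin configuration with a periodic feature of the rotor angles. The specific feature map (cosine of each angle) is an assumption made for this sketch, not necessarily the trial family introduced in the paper.

```python
# RBM-style log-wavefunction for continuous rotor angles phi (illustrative assumption:
# visible units are replaced by cos(phi), hidden units are traced out analytically).
import numpy as np

def log_psi(phi, a, W, b):
    """phi: (N,) rotor angles; a: (N,) visible biases; W: (M, N) couplings; b: (M,) hidden biases."""
    v = np.cos(phi)                          # continuous visible features
    hidden = np.log(np.cosh(b + W @ v))      # standard RBM hidden-unit contribution
    return a @ v + hidden.sum()

rng = np.random.default_rng(3)
phi = rng.uniform(0.0, 2.0 * np.pi, size=6)
a = 0.1 * rng.normal(size=6)
W = 0.1 * rng.normal(size=(12, 6))
b = 0.1 * rng.normal(size=12)
print(log_psi(phi, a, W, b))
```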
We consider a monolayer of graphene under uniaxial, tensile strain and simulate Bloch oscillations for different electric field orientations parallel to the plane of the monolayer, using several values of the components of the uniform strain tensor but keeping the Poisson ratio in the range of observable values. We analyze the trajectories of the charge carriers with different initial conditions using an artificial neural network trained to classify the simulated signals according to the strain applied to the membrane. When the electric field is oriented along either the zigzag or the armchair edges, our approach successfully classifies the independent component of the uniform strain tensor with up to 90% accuracy and an error of $\pm 1\%$ in the predicted value. For an arbitrary orientation of the field, the classification is made over the strain tensor component and the Poisson ratio simultaneously, obtaining up to 97% accuracy with an error that ranges from $\pm 5\%$ to $\pm 10\%$ in the strain tensor component and from $\pm 12.5\%$ to $\pm 25\%$ in the Poisson ratio.
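For readers who want to see the shape of such a supervised setup, the sketch below trains a small multilayer perceptron to classify synthetic one-dimensional signals into discrete classes. The network size, the synthetic class-dependent oscillation frequencies, and the use of scikit-learn are all assumptions standing in for the authors' simulated Bloch-oscillation trajectories and network.

```python
# Generic classification sketch: synthetic oscillatory signals with class-dependent
# frequency play the role of the simulated trajectories (illustrative data only).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n_samples, n_times, n_classes = 300, 128, 3
labels = rng.integers(0, n_classes, size=n_samples)
t = np.linspace(0.0, 1.0, n_times)
signals = np.sin(2 * np.pi * (5 + 2 * labels)[:, None] * t[None, :])
signals += 0.1 * rng.normal(size=signals.shape)       # add measurement-like noise

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(signals[:200], labels[:200])                   # train on the first 200 samples
print("held-out accuracy:", clf.score(signals[200:], labels[200:]))
```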