The Anderson Impurity Model (AIM) is a canonical model of quantum many-body physics. Here we investigate whether machine learning models, both neural networks (NN) and kernel ridge regression (KRR), can accurately predict the AIM spectral function in all of its regimes, from empty orbital, to mixed valence, to Kondo. To tackle this question, we construct two large spectral databases containing approximately 410k and 600k spectral functions of the single-channel impurity problem. We show that the NN models can accurately predict the AIM spectral function in all of its regimes, with point-wise mean absolute errors down to 0.003 in normalized units. We find that the trained NN models outperform models based on KRR and enjoy a speedup on the order of $10^5$ over traditional AIM solvers. The required size of the training set of our model can be significantly reduced using furthest point sampling in the AIM parameter space, which is important for generalizing our method to more complicated multi-channel impurity problems of relevance to predicting the properties of real materials.
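The furthest point sampling step mentioned above is a standard greedy procedure: repeatedly add the point farthest from the set already selected. A minimal sketch follows; the four-dimensional parameter vectors are hypothetical placeholders, not the actual AIM databases.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy farthest-point sampling: repeatedly pick the point whose
    Euclidean distance to the already-selected set is largest."""
    rng = np.random.default_rng(seed)
    n = len(points)
    selected = [int(rng.integers(n))]
    # distance of every point to its nearest selected point so far
    dists = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return selected

# hypothetical AIM parameter vectors, e.g. (U, eps_d, Gamma, T), scaled to [0, 1]
params = np.random.default_rng(1).uniform(size=(1000, 4))
subset = farthest_point_sampling(params, 100)
```

Because each new point maximizes the minimum distance to the current subset, the selected points cover the parameter space far more evenly than a uniform random draw of the same size.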
Searching for superconducting hydrides has so far largely focused on finding materials exhibiting the highest possible critical temperatures ($T_c$). This has led to a bias towards materials stabilised at very high pressures, which introduces a number of technical difficulties in experiment. Here we apply machine learning methods in an effort to identify superconducting hydrides which can operate closer to ambient conditions. The output of these models informs structure searches, from which we identify and screen stable candidates before performing electron-phonon calculations to obtain $T_c$. Hydrides of alkali and alkaline earth metals are identified as particularly promising; a $T_c$ of up to 115 K is calculated for RbH$_{12}$ at 50 GPa and a $T_c$ of up to 90 K is calculated for CsH$_7$ at 100 GPa.
Using exact diagonalization and tensor network techniques, we compute the gap of the AKLT Hamiltonian in one and two spatial dimensions. Tensor network methods are used to extract physical properties directly in the thermodynamic limit, and we support these results with finite-size scaling from exact diagonalization. Studying the AKLT Hamiltonian perturbed by an external field, we show how to obtain an accurate value of the gap of the original AKLT Hamiltonian from the critical field value at which the ground-state energy $e_0$ becomes negative, which marks a quantum critical point. Using tensor network renormalization group methods, we provide evidence of a finite gap in the thermodynamic limit for the AKLT models on the 1D chain and on the 2D hexagonal and square lattices. The method applies generally to Hamiltonians with rotational symmetry, and we also show results beyond the AKLT model.
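The exact-diagonalization side of such a calculation can be reproduced on a toy scale. Below is a minimal sketch of a finite-size gap estimate for the periodic spin-1 AKLT chain (a small illustrative system, not the tensor network computation described above); since the AKLT Hamiltonian is a sum of projectors and is frustration-free, the ground-state energy is exactly zero.

```python
import numpy as np

# spin-1 operators in the basis (|1>, |0>, |-1>)
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2) * np.diag([1.0, 1.0], k=1)   # raising operator S+
Sx = (Sp + Sp.T) / 2
Sy = (Sp - Sp.T) / 2j

def two_site(op_a, op_b, i, L):
    """Embed op_a at site i and op_b at site (i+1) mod L of an L-site chain."""
    mats = [np.eye(3)] * L
    mats[i] = op_a
    mats[(i + 1) % L] = op_b
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def aklt_hamiltonian(L):
    """Periodic spin-1 AKLT chain: H = sum_i P_2(i, i+1), where
    P_2 = (S_i.S_j)/2 + (S_i.S_j)^2/6 + 1/3 projects onto total spin 2."""
    dim = 3 ** L
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(L):
        SS = sum(two_site(S, S, i, L) for S in (Sx, Sy, Sz))
        H += SS / 2 + SS @ SS / 6 + np.eye(dim) / 3
    return H

E = np.linalg.eigvalsh(aklt_hamiltonian(4))
gap = E[1] - E[0]   # finite-size estimate of the spectral gap
```

Repeating this for increasing $L$ and extrapolating is the finite-size-scaling check referred to in the abstract; the thermodynamic-limit estimate itself requires the tensor network machinery.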
Predicting the outcome of a chemical reaction with efficient computational models enables high-throughput screening techniques. This can significantly reduce the number of experiments that must be performed across a vast search space, saving time, effort, and expense. Recently, machine learning methods have been bolstering conventional structure-activity relationships used to advance understanding of chemical reactions. We have developed a machine learning model to predict the products of catalytic reactions on the surfaces of oxygen-covered and bare gold. Using experimental data, our model maps reactants to products through a chemical-space representation: it predicts a chemical-space value for the products and then matches this value to a molecular structure chosen from a database. The database was built by applying a set of possible reaction outcomes derived from known reaction mechanisms. Our machine learning approach complements chemical intuition in predicting the outcome of several types of chemical reactions, and in some cases makes correct predictions where chemical intuition fails. We achieve up to 93% prediction accuracy on a small data set of fewer than two hundred reactions.
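The matching step described here — regress a chemical-space value for the products, then snap it to the closest structure in the candidate database — amounts to a nearest-neighbor lookup. A minimal sketch follows; all descriptors and labels are synthetic placeholders, not the paper's representation.

```python
import numpy as np

# hypothetical setup: each candidate product is a point in a
# low-dimensional "chemical space" descriptor vector
rng = np.random.default_rng(0)
database = rng.normal(size=(500, 8))          # candidate product descriptors
names = [f"product_{i}" for i in range(500)]  # placeholder structure labels

def match_product(predicted_descriptor, database, names):
    """Match a regressed chemical-space value to the nearest structure
    in the candidate-product database (Euclidean nearest neighbor)."""
    d = np.linalg.norm(database - predicted_descriptor, axis=1)
    return names[int(np.argmin(d))]

# a prediction that lands near database entry 42 should recover it
pred = database[42] + 0.01 * rng.normal(size=8)
matched = match_product(pred, database, names)
```

Restricting the lookup to structures generated from known reaction mechanisms is what keeps the predicted product chemically plausible even when the regression itself is imperfect.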
Calculating the spectral function of two-dimensional systems is arguably one of the most pressing challenges in modern computational condensed matter physics. While efficient techniques are available in lower dimensions, two-dimensional systems present formidable hurdles, ranging from the sign problem in quantum Monte Carlo to the entanglement area law in tensor-network-based methods. Here we present a variational approach based on a Chebyshev expansion of the spectral function and a neural network representation of the wave functions. The Chebyshev moments are obtained by recursively applying the Hamiltonian and projecting onto the space of variational states using a modified natural gradient descent method. We compare this approach with a modified approximation of the spectral function that uses a Krylov subspace constructed from the Chebyshev wave functions. We present results for the Heisenberg model in one dimension and on the two-dimensional square lattice, and compare with those obtained by other methods in the literature.
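The Chebyshev expansion underlying this approach can be illustrated on a toy problem where exact vectors stand in for the neural network states. The sketch below computes moments $\mu_m = \langle v|T_m(\tilde H)|v\rangle$ by the three-term recursion and reconstructs a broadened spectral density with a Jackson damping kernel; the matrix and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in: a small dense Hermitian "Hamiltonian" (the paper's setting
# would apply the recursion to variational neural-network states instead)
n = 50
H = rng.normal(size=(n, n))
H = (H + H.T) / 2

# rescale the spectrum into (-1, 1) so Chebyshev polynomials are defined
evals = np.linalg.eigvalsh(H)
a = (evals[-1] - evals[0]) / (2 * 0.95)
b = (evals[-1] + evals[0]) / 2
Ht = (H - b * np.eye(n)) / a

# Chebyshev moments mu_m = <v| T_m(Ht) |v> via the three-term recursion
v = rng.normal(size=n)
v /= np.linalg.norm(v)
M = 64
t_prev, t_cur = v.copy(), Ht @ v
mu = [v @ t_prev, v @ t_cur]
for _ in range(2, M):
    t_prev, t_cur = t_cur, 2 * Ht @ t_cur - t_prev
    mu.append(v @ t_cur)
mu = np.array(mu)

# reconstruct the (rescaled) spectral density with Jackson damping factors
m = np.arange(M)
g = ((M - m + 1) * np.cos(np.pi * m / (M + 1))
     + np.sin(np.pi * m / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)
w = np.linspace(-0.99, 0.99, 400)
T = np.cos(np.outer(np.arccos(w), m))               # T_m(w)
A = (g[0] * mu[0] + 2 * (T[:, 1:] * (g[1:] * mu[1:])).sum(axis=1)) \
    / (np.pi * np.sqrt(1 - w ** 2))
```

In the variational method of the abstract, the exact matrix-vector products above are replaced by projections onto the space of neural-network states, which is where the modified natural gradient descent enters.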
We employ variational autoencoders to extract physical insight from a dataset of one-particle Anderson impurity model spectral functions. Autoencoders are trained to find a low-dimensional, latent space representation that faithfully characterizes each element of the training set, as measured by a reconstruction error. Variational autoencoders, a probabilistic generalization of standard autoencoders, further condition the learned latent space to promote highly interpretable features. In our study, we find that the learned latent space components strongly correlate with well-known but nontrivial parameters that characterize emergent behaviors in the Anderson impurity model. In particular, one latent space component correlates with particle-hole asymmetry, while another is in near one-to-one correspondence with the Kondo temperature, a dynamically generated low-energy scale in the impurity model. With symbolic regression, we model this component as a function of bare physical input parameters and rediscover the non-perturbative formula for the Kondo temperature. The machine learning pipeline we develop opens opportunities to discover new domain knowledge in other physical systems.
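For context, the non-perturbative scale referred to here is conventionally estimated by Haldane's formula. A minimal sketch follows; the parameter values are illustrative, and the expression is the textbook estimate rather than the paper's regressed result.

```python
import numpy as np

def kondo_temperature(U, eps_d, Gamma):
    """Haldane's non-perturbative estimate of the Kondo temperature,
    T_K ~ sqrt(U * Gamma / 2) * exp(pi * eps_d * (eps_d + U) / (2 * U * Gamma)),
    valid in the local-moment regime (-U < eps_d < 0, Gamma << U)."""
    return np.sqrt(U * Gamma / 2) * np.exp(
        np.pi * eps_d * (eps_d + U) / (2 * U * Gamma))

# illustrative values at particle-hole symmetry (eps_d = -U/2):
# T_K is exponentially suppressed as U/Gamma grows
tk_weak = kondo_temperature(4.0, -2.0, 0.5)
tk_strong = kondo_temperature(8.0, -4.0, 0.5)
```

The essential feature symbolic regression must recover is the exponential dependence on the bare parameters, which no finite-order perturbative expansion reproduces.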