Machine learning (ML) techniques applied to quantum many-body physics have emerged as a new research field. While the numerical power of this approach is undeniable, the most expressive ML algorithms, such as neural networks, are black boxes: the user knows neither the logic behind the model predictions nor their uncertainty. In this work, we present a toolbox for interpretability and reliability that is agnostic of the model architecture. In particular, it provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an extrapolation score for the model predictions. Such a toolbox requires only a single computation of the Hessian of the training loss function. Our work opens the way to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
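To make the role of the Hessian concrete, here is a minimal sketch of an influence-function-style score of the kind described above: the influence of a training point on the loss at a test point is obtained from loss gradients and the inverse Hessian of the training loss. The helper names (`grad_loss`, `hessian_loss`) and the damping term are assumptions made for illustration; this is a generic sketch, not the authors' implementation.

```python
import numpy as np

def influence_scores(grad_loss, hessian_loss, theta, train_set,
                     x_test, y_test, damping=1e-3):
    """Influence-function-style scores (a generic sketch, not the paper's code).

    For each training point z_i = (x_i, y_i), returns
        I(z_i, z_test) = -grad L(z_test)^T H^{-1} grad L(z_i),
    where H is the Hessian of the total training loss at the trained
    parameters `theta` and `grad_loss`/`hessian_loss` are user-supplied
    per-sample derivatives (assumed helpers).
    """
    # Hessian of the training loss at theta, computed once and reused.
    H = sum(hessian_loss(theta, x, y) for x, y in train_set)
    H = H + damping * np.eye(len(theta))   # damping keeps H well-conditioned
    # Solve H v = grad L(z_test) once per test point.
    v = np.linalg.solve(H, grad_loss(theta, x_test, y_test))
    # Inner product with each training gradient gives the influence score.
    return np.array([-v @ grad_loss(theta, x, y) for x, y in train_set])
```

The point of the sketch is that, once the Hessian is computed, each additional query costs only gradient evaluations and a linear solve, which is the sense in which a single Hessian computation can serve several interpretability and reliability quantities.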