
Harnessing Interpretable and Unsupervised Machine Learning to Address Big Data from Modern X-ray Diffraction

Publication date: 2020
Field: Physics
Language: English





The information content of crystalline materials becomes astronomical when collective electronic behavior and their fluctuations are taken into account. In the past decade, improvements in source brightness and detector technology at modern x-ray facilities have allowed a dramatically increased fraction of this information to be captured. Now, the primary challenge is to understand and discover scientific principles from big data sets when a comprehensive analysis is beyond human reach. We report the development of a novel unsupervised machine learning approach, XRD Temperature Clustering (X-TEC), that can automatically extract charge density wave (CDW) order parameters and detect intra-unit cell (IUC) ordering and its fluctuations from a series of high-volume X-ray diffraction (XRD) measurements taken at multiple temperatures. We apply X-TEC to XRD data on a quasi-skutterudite family of materials, (Ca$_x$Sr$_{1-x}$)$_3$Rh$_4$Sn$_{13}$, where a quantum critical point arising from charge order is observed as a function of Ca concentration. We further apply X-TEC to XRD data on the pyrochlore metal, Cd$_2$Re$_2$O$_7$, to investigate its two much debated structural phase transitions and uncover the Goldstone mode accompanying them. We demonstrate how unprecedented atomic scale knowledge can be gained when human researchers connect the X-TEC results to physical principles. Specifically, we extract from the X-TEC-revealed selection rule that the Cd and Re displacements are approximately equal in amplitude, but out of phase. This discovery reveals a previously unknown involvement of $5d^2$ Re, supporting the idea of an electronic origin to the structural order. Our approach can radically transform XRD experiments by allowing in-operando data analysis and enabling researchers to refine experiments by discovering interesting regions of phase space on-the-fly.
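The core idea behind X-TEC, as described above, is to cluster the temperature dependence of many diffraction intensities so that order-parameter-like peaks separate automatically from featureless background. The following is a minimal sketch of that idea, not the authors' implementation: it uses a deterministic two-cluster k-means on normalized synthetic temperature trajectories in place of the paper's Gaussian-mixture clustering, and all data and names are illustrative.

```python
# Hedged sketch of the X-TEC clustering idea: group pixels/peaks by the
# shape of their intensity-vs-temperature trajectory. Synthetic data:
# "order" peaks grow below a transition temperature; "background"
# pixels drift weakly with temperature.

def normalize(traj):
    # rescale each trajectory to [0, 1] so only its shape matters
    lo, hi = min(traj), max(traj)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in traj]

def two_means(points, iters=20):
    # deterministic 2-means: seed with the first point and the point
    # farthest from it, then alternate assignment / update steps
    d2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    c0 = points[0]
    c1 = max(points, key=lambda p: d2(p, c0))
    cents = [list(c0), list(c1)]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if d2(p, cents[0]) <= d2(p, cents[1]) else 1
                  for p in points]
        for c in (0, 1):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                cents[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

temps = list(range(10, 101, 10))      # 10 K .. 100 K
tc = 55.0
# order-parameter-like peaks: intensity rises below the transition
order = [[max(0.0, tc - T) + 0.1 * j for T in temps] for j in range(3)]
# background pixels: weak, monotonic temperature drift
background = [[1.0 + 0.01 * T + 0.1 * j for T in temps] for j in range(3)]

trajectories = [normalize(t) for t in order + background]
labels = two_means(trajectories)
```

After clustering, the two groups of trajectories land in different clusters, which is the separation of order-parameter signal from background that X-TEC automates at scale.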



Related Research

Lattice Monte Carlo calculations of interacting systems on non-bipartite lattices exhibit an oscillatory imaginary phase known as the phase or sign problem, even at zero chemical potential. One method to alleviate the sign problem is to analytically continue the integration region of the state variables into the complex plane via holomorphic flow equations. For asymptotically large flow times the state variables approach manifolds of constant imaginary phase known as Lefschetz thimbles. However, flowing such variables and calculating the ensuing Jacobian is a computationally demanding procedure. In this paper we demonstrate that neural networks can be trained to parameterize suitable manifolds for this class of sign problem and drastically reduce the computational cost. We apply our method to the Hubbard model on the triangle and tetrahedron, both of which are non-bipartite. At strong interaction strengths and modest temperatures the tetrahedron suffers from a severe sign problem that cannot be overcome with standard reweighting techniques, while it quickly yields to our method. We benchmark our results with exact calculations and comment on future directions of this work.
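Why standard reweighting fails at strong coupling, as the abstract states for the tetrahedron, can be seen in a toy one-variable "path integral" (our construction, not the paper's model): with a Gaussian real action and an imaginary phase exp(i*lam*x), the average phase is exp(-lam**2/2) and decays rapidly with the coupling lam, destroying the statistical signal.

```python
# Toy demonstration of the sign problem: the reweighting denominator
# <exp(i*S_I)> = <cos(lam*x)> under the Gaussian weight exp(-x^2/2)
# equals exp(-lam^2/2) exactly; we approximate it on a grid.
import math

def average_phase(lam, half_width=8.0, n=4000):
    dx = 2 * half_width / n
    num = re_w = 0.0
    for k in range(n + 1):
        x = -half_width + k * dx
        w = math.exp(-0.5 * x * x)        # real part of the weight
        num += w * math.cos(lam * x)      # oscillatory phase factor
        re_w += w
    return num / re_w

# the average phase collapses as the "coupling" grows
phases = [average_phase(lam) for lam in (0.5, 1.0, 2.0, 4.0)]
```

Once the average phase is exponentially small, the relative statistical error of any reweighted observable blows up correspondingly, which is the regime where deformed integration manifolds (Lefschetz thimbles and their neural-network parameterizations) become necessary.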
Machine learning (ML) techniques applied to quantum many-body physics have emerged as a new research field. While the numerical power of this approach is undeniable, the most expressive ML algorithms, such as neural networks, are black boxes: the user knows neither the logic behind the model's predictions nor their uncertainty. In this work, we present a toolbox for interpretability and reliability, agnostic of the model architecture. In particular, it provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an extrapolation score for the model predictions. Such a toolbox requires only a single computation of the Hessian of the training loss function. Our work opens the way to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
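The Hessian-based influence notion mentioned above can be illustrated on the simplest possible model. The sketch below is our own toy, not the paper's toolbox: for one-parameter least-squares regression, the influence of training point i on a test point is -g_test * H^{-1} * g_i, where g_i is the gradient of that point's training loss and H the (here scalar) Hessian of the total training loss; all function and variable names are ours.

```python
# Hedged sketch of Hessian-based influence scores for y ~ w*x.

def fit(xs, ys):
    # closed-form least squares for the single parameter w
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def influences(xs, ys, x_test, y_test):
    w = fit(xs, ys)
    hessian = sum(x * x for x in xs)           # d^2 L / dw^2
    g_test = (w * x_test - y_test) * x_test    # test-loss gradient
    # influence of upweighting each training point on the test loss
    return [-g_test * (w * x - y) * x / hessian for x, y in zip(xs, ys)]

xs, ys = [1.0, 2.0, 3.0], [1.0, 2.0, 4.0]
scores = influences(xs, ys, x_test=2.5, y_test=3.0)
most_influential = max(range(len(scores)), key=lambda i: abs(scores[i]))
```

The same single Hessian computation reused across test points is what keeps the full toolbox cheap; for a neural network the scalar H becomes a matrix and the products become Hessian-vector products.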
Complex behavior poses challenges in extracting models from experiment. An example is spin liquid formation in frustrated magnets like Dy$_2$Ti$_2$O$_7$. Understanding has been hindered by issues including disorder, glass formation, and interpretation of scattering data. Here, we use a novel automated capability to extract model Hamiltonians from data, and to identify different magnetic regimes. This involves training an autoencoder to learn a compressed representation of three-dimensional diffuse scattering, over a wide range of spin Hamiltonians. The autoencoder finds optimal matches according to scattering and heat capacity data and provides confidence intervals. Validation tests indicate that our optimal Hamiltonian accurately predicts temperature and field dependence of both magnetic structure and magnetization, as well as glass formation and irreversibility in Dy$_2$Ti$_2$O$_7$. The autoencoder can also categorize different magnetic behaviors and eliminate background noise and artifacts in raw data. Our methodology is readily applicable to other materials and types of scattering problems.
Machine learning models are a powerful theoretical tool for analyzing data from quantum simulators, in which results of experiments are sets of snapshots of many-body states. Recently, they have been successfully applied to distinguish between snapshots that cannot be identified using traditional one- and two-point correlation functions. Thus far, the complexity of these models has inhibited new physical insights from this approach. Here, using a novel set of nonlinearities we develop a network architecture that discovers features in the data which are directly interpretable in terms of physical observables. In particular, our network can be understood as uncovering high-order correlators which significantly differ between the data studied. We demonstrate this new architecture on sets of simulated snapshots produced by two candidate theories approximating the doped Fermi-Hubbard model, which is realized in state-of-the-art quantum gas microscopy experiments. From the trained networks, we uncover that the key distinguishing features are fourth-order spin-charge correlators, providing a means to compare experimental data to theoretical predictions. Our approach lends itself well to the construction of simple, end-to-end interpretable architectures and is applicable to arbitrary lattice data, thus paving the way for new physical insights from machine learning studies of experimental as well as numerical data.
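A fourth-order spin-charge correlator of the kind the trained networks are reported to uncover can be written down directly. The toy below is our own construction (a 1-D chain with periodic boundaries, hand-picked snapshots), not the paper's architecture or data; it only illustrates what such a correlator measures.

```python
# Illustrative fourth-order spin-charge correlator on 1-D snapshots:
# C4 = < s_i * s_{i+1} * n_{i+2} * n_{i+3} >, averaged over sites i.

def spin_charge_c4(spins, charges):
    n = len(spins)
    terms = [spins[i] * spins[(i + 1) % n]
             * charges[(i + 2) % n] * charges[(i + 3) % n]
             for i in range(n)]
    return sum(terms) / n

charges = [1, 1, 0, 1, 1, 1]                      # one hole on site 2
c_afm = spin_charge_c4([1, -1, 1, -1, 1, -1], charges)  # antiferromagnetic
c_fm = spin_charge_c4([1, 1, 1, 1, 1, 1], charges)      # ferromagnetic
```

The two snapshot types give correlators of opposite sign, so a classifier whose learned features reduce to such correlators separates them even though simple one- and two-point averages of the hole density are identical.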
This paper reviews some of the challenges posed by the huge growth of experimental data generated by the new generation of large-scale experiments at UK national facilities at the Rutherford Appleton Laboratory site at Harwell near Oxford. Such Big Scientific Data comes from the Diamond Light Source and Electron Microscopy Facilities, the ISIS Neutron and Muon Facility, and the UK's Central Laser Facility. Increasingly, scientists now need to use advanced machine learning and other AI technologies both to automate parts of the data pipeline and also to help find new scientific discoveries in the analysis of their data. For commercially important applications, such as object recognition, natural language processing and automatic translation, deep learning has made dramatic breakthroughs. Google's DeepMind has now also used deep learning technology to develop its AlphaFold tool to make predictions for protein folding. Remarkably, they have been able to achieve some spectacular results for this specific scientific problem. Can deep learning be similarly transformative for other scientific problems? After a brief review of some initial applications of machine learning at the Rutherford Appleton Laboratory, we focus on challenges and opportunities for AI in advancing materials science. Finally, we discuss the importance of developing some realistic machine learning benchmarks using Big Scientific Data coming from a number of different scientific domains. We conclude with some initial examples of our SciML benchmark suite and of the research challenges these benchmarks will enable.