Deep learning (DL) is an emerging analysis tool across the sciences and engineering. Encouraged by the successes of DL in revealing quantitative trends in massive imaging data, we applied this approach to nano-scale, deeply sub-diffractional images of propagating polaritonic waves in complex materials. We developed a practical protocol for the rapid regression of images that quantifies the wavelength and quality factor of polaritonic waves using a convolutional neural network (CNN). Trained on simulated near-field images, the CNN simultaneously extracts polaritonic characteristics and materials parameters on a timescale at least three orders of magnitude faster than common fitting/processing procedures. The CNN-based analysis was validated against experimental near-field images of charge-transfer plasmon polaritons at graphene/$\alpha$-RuCl$_3$ interfaces. Our work provides a general framework for extracting quantitative information from images generated with a variety of scanning probe methods.
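As an illustration of how simulated training data of this kind might be generated, the following minimal numpy sketch produces 1D line profiles of damped polariton waves labeled by wavelength and quality factor, which would then be fed to a CNN regressor. The functional form, parameter ranges, and the name `simulate_profiles` are assumptions for illustration, not the authors' actual protocol.

```python
import numpy as np

def simulate_profiles(n, npts=256, length=1.0, rng=None):
    """Simulate 1D near-field line profiles of a damped polariton wave.

    Each profile is Re[exp(i*q*x)] with complex wavevector
    q = (2*pi/lam) * (1 + i/(2*Q)); the labels are (lam, Q).
    All parameter ranges are illustrative assumptions.
    """
    rng = rng or np.random.default_rng(0)
    x = np.linspace(0.0, length, npts)
    lam = rng.uniform(0.05, 0.3, size=n)   # polariton wavelength (arb. units)
    Q = rng.uniform(2.0, 40.0, size=n)     # quality factor
    q = (2 * np.pi / lam)[:, None] * (1 + 1j / (2 * Q)[:, None])
    profiles = np.real(np.exp(1j * q * x[None, :]))
    profiles += rng.normal(0.0, 0.02, size=profiles.shape)  # detector noise
    return profiles.astype(np.float32), np.stack([lam, Q], axis=1)

# A labeled training batch: inputs X (profiles) and regression targets y.
X, y = simulate_profiles(512)
```

In the paper's setting the inputs are 2D near-field images rather than line cuts, but the principle is the same: because the labels are known exactly by construction, arbitrarily large supervised training sets can be generated cheaply.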
In machine learning (ML), it is generally challenging to explain in detail how a trained model arrives at its prediction. We are thus usually left with a black box, which is not satisfactory from a scientific standpoint. Even though numerous methods have been proposed recently to interpret ML models, interpretability in ML is, somewhat surprisingly, far from a settled concept, with diverse and sometimes contrasting motivations behind it. Reasonable candidate properties of interpretable models are model transparency (i.e., how does the model work?) and post hoc explanations (i.e., what else can the model tell me?). Here, I review the current debate on ML interpretability and identify key challenges specific to ML applied to materials science.
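The notion of a post hoc explanation can be made concrete with permutation importance: the increase in a model's error when a single feature is randomly shuffled, which breaks that feature's relation to the target. The numpy-only sketch below uses a hypothetical toy dataset and an ordinary least-squares fit standing in for the black-box model; it is an illustration of the concept, not a method from the review.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "materials" dataset: the target depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2 (by construction).
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, size=200)

# Stand-in "black-box" model: ordinary least squares with an intercept.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(200)], y, rcond=None)
predict = lambda A: np.c_[A, np.ones(len(A))] @ w
mse = lambda A: np.mean((predict(A) - y) ** 2)

# Post hoc explanation: error increase when one feature is shuffled.
base = mse(X)
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(Xp) - base)
```

Note that permutation importance explains the model, not the underlying physics: a feature can score highly because the model relies on it, even when it is only a proxy for the true causal variable, which is one of the materials-science-specific pitfalls discussed in the review.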
Orchestrating parametric fitting of multicomponent spectra at scale is an essential yet underappreciated task in the high-throughput quantification of materials and chemical composition. To automate the annotation of spectroscopic and diffraction data collected in batches of hundreds to thousands, we present a systematic approach compatible with high-performance computing infrastructures, using the MapReduce model and task-based parallelization. We implement the approach in software and demonstrate linear computational scaling with respect to the number of spectral components, using multidimensional experimental materials characterization datasets from photoemission spectroscopy and powder electron diffraction as benchmarks. Our approach enables efficient generation of high-quality data annotations and online spectral analysis, and it is applicable to a variety of analytical techniques in materials science and chemistry as a building block for closed-loop experimental systems.
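The MapReduce pattern described above can be sketched as follows: a fitting kernel is mapped independently over many spectra, and the per-spectrum parameters are reduced into one table. In this hedged numpy sketch, a toy moment-based single-peak estimator stands in for a real least-squares fit, and the builtin `map` stands in for a task-parallel scheduler on an HPC system; all names and parameters are illustrative.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 201)  # shared energy/angle axis

def make_spectrum(center, width, rng):
    """Synthetic single-peak spectrum: Gaussian plus detector noise."""
    return np.exp(-0.5 * ((x - center) / width) ** 2) + rng.normal(0, 0.01, x.size)

def fit_one(spec):
    """Moment-based peak estimate (stand-in for a least-squares kernel)."""
    w = np.clip(spec, 0, None)
    w /= w.sum()
    center = np.sum(w * x)
    width = np.sqrt(np.sum(w * (x - center) ** 2))
    return center, width

rng = np.random.default_rng(2)
truth = [(c, 0.5) for c in np.linspace(-2.0, 2.0, 50)]
spectra = [make_spectrum(c, s, rng) for c, s in truth]

# Map: fit each spectrum independently (swap `map` for a process pool or
# HPC task scheduler at scale).  Reduce: gather into one parameter table.
params = np.array(list(map(fit_one, spectra)))
```

Because each spectrum is fit independently, the map stage parallelizes trivially, which is what yields the near-linear scaling the paper reports when the kernel is distributed over many workers.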
In this paper, we present a critical overview of statistical fiber bundle models. We discuss relevant aspects, such as the assumptions and consequences of models in the literature, and propose new ones. This is accomplished by concentrating on both the physical and statistical aspects of a specific load-sharing example: the breakdown (BD) of circuits of capacitors and related dielectrics. For series and parallel/series circuits (series/parallel reliability systems) of ordinary capacitors, the load-sharing rules are derived from the electrical laws. Together with the BD formalism, these rules are then used to obtain the BD distribution of the circuit. The BD distribution and Gibbs measure are given for a series circuit, and size effects are illustrated through simulations of series and parallel/series circuits. This is related to the finite weakest-link adjustments to the BD distribution that arise, via extreme value approximations, in large series/parallel reliability load-sharing systems such as dielectric BD. An elementary but in-depth discussion of the physical aspects of SiO$_2$ and HfO$_2$ dielectrics and of cell models is given. This is used to study a load-sharing cell model for the BD of HfO$_2$ dielectrics and the BD formalism. The latter study is based on an analysis of Kim and Lee's (2004) data for such dielectrics. Several BD distributions are compared in the analysis, and proportional hazards regression models are used to study the BD formalism. In addition, some areas of open research are discussed.
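The size effect in weakest-link breakdown can be illustrated with a small Monte Carlo: a series system fails at the minimum of its cells' BD strengths, so the median BD strength decreases as the number of cells grows. In this sketch the cell strengths are Weibull distributed; the shape and scale values are illustrative assumptions, not parameters fitted to the dielectric data analyzed in the paper, and load redistribution after a cell failure is deliberately omitted.

```python
import numpy as np

def series_bd(n_cells, n_sims, shape=8.0, scale=1.0, rng=None):
    """Weakest-link BD: a series system fails at its minimum cell strength.

    Cell strengths are i.i.d. Weibull(shape, scale); parameters are
    illustrative only.  Returns one system BD strength per simulation.
    """
    rng = rng or np.random.default_rng(3)
    strengths = scale * rng.weibull(shape, size=(n_sims, n_cells))
    return strengths.min(axis=1)

# Size effect: the median system BD strength drops as the system grows,
# consistent with the extreme value (weakest-link) approximation.
med_small = np.median(series_bd(10, 5000))
med_large = np.median(series_bd(1000, 5000))
```

For i.i.d. Weibull cells the minimum is again Weibull with the scale reduced by a factor $n^{-1/\text{shape}}$, which is the extreme value approximation that the finite weakest-link adjustments in the paper refine for large series/parallel systems.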
We propose a novel data-driven approach, based on machine learning algorithms, for analyzing synchrotron Laue X-ray microdiffraction scans. The basic architecture and major components of the method are formulated mathematically. We demonstrate it on typical examples including polycrystalline BaTiO$_3$, multiphase transforming alloys, and finely twinned martensite. The computational pipeline is implemented for beamline 12.3.2 at the Advanced Light Source, Lawrence Berkeley National Laboratory. The conventional analytical pathway for X-ray diffraction scans is a slow, pattern-by-pattern crystal indexing process. This work provides a new way of analyzing 2D X-ray diffraction patterns, independent of the indexing process, and motivates further study of X-ray diffraction patterns from the machine learning perspective, toward the development of suitable feature extraction, clustering, and labeling algorithms.
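A minimal sketch of such an indexing-free pathway is feature extraction followed by unsupervised clustering of the 2D patterns. Below, toy "diffraction patterns" (a Gaussian spot whose position encodes the phase) are reduced by PCA and grouped by a small k-means loop; the pattern model, feature choice, and all parameters are illustrative assumptions rather than the pipeline deployed at beamline 12.3.2.

```python
import numpy as np

rng = np.random.default_rng(4)

def pattern(phase):
    """Toy 2D 'diffraction pattern': one Gaussian spot whose position
    depends on the phase label, plus noise (illustrative only)."""
    yy, xx = np.mgrid[0:32, 0:32]
    cx, cy = (10, 10) if phase == 0 else (22, 22)
    img = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 8.0)
    return img + rng.normal(0, 0.05, img.shape)

# 40 patterns alternating between two "phases", flattened to vectors.
X = np.stack([pattern(i % 2).ravel() for i in range(40)])

# Feature extraction: project onto the top two principal components.
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
feats = Xc @ Vt[:2].T

# Minimal k-means (k = 2) on the extracted features.
centers = feats[rng.choice(len(feats), 2, replace=False)]
for _ in range(20):
    d = np.linalg.norm(feats[:, None] - centers[None], axis=2)
    labels = d.argmin(1)
    centers = np.array([feats[labels == k].mean(0) if np.any(labels == k)
                        else centers[k] for k in range(2)])
```

No crystallographic indexing is performed at any point: the clusters recover the phase map directly from pattern similarity, which is the kind of result the paper obtains on scans of polycrystals and twinned martensite.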
Machine learning was utilized to efficiently accelerate the development of soft magnetic materials. The design process includes building a database of published experimental results, applying machine learning methods to that database, identifying trends in the magnetic properties of soft magnetic materials, and accelerating the design of next-generation soft magnetic nanocrystalline materials through numerical optimization. Machine learning regression models were trained to predict magnetic saturation ($B_S$), coercivity ($H_C$), and magnetostriction ($\lambda$), with a stochastic optimization framework used to further optimize the corresponding magnetic properties. To verify the feasibility of the machine learning models, several optimized soft magnetic materials -- specified in terms of composition and thermomechanical treatment -- were predicted, then prepared and tested; the good agreement between predictions and experiments confirms the reliability of the designed models. Two rounds of optimization-testing iterations were conducted to search for improved properties.
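The regression-then-optimization loop can be sketched as follows: fit a surrogate model to a property database, then search candidate compositions stochastically for the predicted optimum, which is sent for synthesis and testing. In this hedged numpy sketch, the "database" is synthetic, a quadratic least-squares model stands in for the trained regressors, and plain random search stands in for the stochastic optimization framework; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hidden toy property surface over two composition fractions; this
# stands in for the database of published experimental results.
def true_property(x):
    return 1.6 - (x[..., 0] - 0.7) ** 2 - 2.0 * (x[..., 1] - 0.1) ** 2

X = rng.uniform(0, 1, size=(80, 2))                 # known compositions
y = true_property(X) + rng.normal(0, 0.02, 80)      # measured property

# Surrogate regression model: quadratic features + least squares.
def features(x):
    return np.c_[np.ones(len(x)), x, x ** 2, x[:, :1] * x[:, 1:]]

w, *_ = np.linalg.lstsq(features(X), y, rcond=None)
predict = lambda x: features(x) @ w

# Stochastic optimization: random search over candidate compositions;
# the best candidate would be prepared and tested experimentally.
cand = rng.uniform(0, 1, size=(5000, 2))
best = cand[np.argmax(predict(cand))]
```

Appending each tested candidate and its measured properties to the database and refitting closes the loop, which corresponds to the two optimization-testing iterations reported in the paper.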