
Interpretability of machine-learning models in physical sciences

Added by Luca Ghiringhelli
Publication date: 2021
Field: Physics
Language: English





In machine learning (ML), it is generally challenging to provide a detailed explanation of how a trained model arrives at its prediction. We are thus usually left with a black box, which from a scientific standpoint is not satisfactory. Even though numerous methods have recently been proposed to interpret ML models, somewhat surprisingly, interpretability in ML is far from being a consensual concept, with diverse and sometimes contrasting motivations for it. Reasonable candidate properties of interpretable models are model transparency (i.e., how does the model work?) and post hoc explanations (i.e., what else can the model tell me?). Here, I review the current debate on ML interpretability and identify key challenges that are specific to ML applied to materials science.
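As a concrete illustration of the "post hoc explanations" category, permutation feature importance asks how much a model's error grows when one input column is scrambled. The sketch below is a minimal NumPy-only version, with an ordinary least-squares model standing in for a black-box learner; all data and coefficients here are synthetic, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit an ordinary least-squares model (our stand-in "black box").
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
baseline_mse = np.mean((X @ coef - y) ** 2)

def permutation_importance(X, y, coef, j, n_repeats=20):
    """Mean increase in MSE when column j is randomly shuffled."""
    increases = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        increases.append(np.mean((Xp @ coef - y) ** 2) - baseline_mse)
    return float(np.mean(increases))

importances = [permutation_importance(X, y, coef, j) for j in range(3)]
```

The ranking of `importances` recovers the relative influence of the three inputs without inspecting the model's internals, which is exactly what makes such explanations "post hoc".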




Machine learning was utilized to efficiently accelerate the development of soft magnetic materials. The design process includes building a database of published experimental results, applying machine learning methods to the database, identifying trends in the magnetic properties of soft magnetic materials, and accelerating the design of next-generation soft magnetic nanocrystalline materials through numerical optimization. Machine learning regression models were trained to predict magnetic saturation ($B_S$), coercivity ($H_C$), and magnetostriction ($\lambda$), with a stochastic optimization framework being used to further optimize the corresponding magnetic properties. To verify the feasibility of the machine learning model, several optimized soft magnetic materials -- specified in terms of compositions and thermomechanical treatments -- were predicted, then prepared and tested. The good agreement between predictions and experiments confirms the reliability of the designed model. Two rounds of optimization-testing iterations were conducted to search for better properties.
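The surrogate-plus-optimizer loop described above can be sketched in a few lines. In this hypothetical example, a quadratic fit stands in for the paper's trained regression models and plain random search for its stochastic optimization framework; the composition variable and property values are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: one composition variable x versus a measured
# property (think saturation B_s), with an optimum near x = 0.7.
x_train = rng.uniform(0.0, 1.0, size=200)
b_train = 1.6 - 4.0 * (x_train - 0.7) ** 2 + rng.normal(scale=0.02, size=200)

# Surrogate model: a quadratic fit standing in for the ML regressors.
surrogate = np.poly1d(np.polyfit(x_train, b_train, deg=2))

# Stochastic optimization: random search over candidate compositions,
# keeping the one with the best predicted property.
candidates = rng.uniform(0.0, 1.0, size=5000)
best = float(candidates[np.argmax(surrogate(candidates))])
```

The candidate `best` would then be synthesized and measured, and the measurement fed back into the training set, which is the optimization-testing iteration the abstract describes.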
We discuss two research projects in material science in which the results cannot be stated with an estimation of the error: a spectroscopic ellipsometry study aimed at determining the orientation of DNA molecules on diamond and a scanning tunneling microscopy study of platinum-induced nanowires on germanium. To investigate the reliability of the results, we apply ideas from the philosophy of models in science. Even if the studies had reported an error value, the trustworthiness of the result would not depend on that value alone.
Deep learning (DL) is an emerging analysis tool across sciences and engineering. Encouraged by the successes of DL in revealing quantitative trends in massive imaging data, we applied this approach to nano-scale deeply sub-diffractional images of propagating polaritonic waves in complex materials. We developed a practical protocol for the rapid regression of images that quantifies the wavelength and the quality factor of polaritonic waves using a convolutional neural network (CNN). Using simulated near-field images as training data, the CNN can be made to simultaneously extract polaritonic characteristics and materials parameters on a timescale that is at least three orders of magnitude faster than common fitting/processing procedures. The CNN-based analysis was validated by examining the experimental near-field images of charge-transfer plasmon polaritons at graphene/$\alpha$-RuCl$_3$ interfaces. Our work provides a general framework for extracting quantitative information from images generated with a variety of scanning probe methods.
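The workflow of training a regressor on simulated near-field data and then reading off (wavelength, quality factor) can be illustrated without a deep-learning stack. In the sketch below, a toy damped-wave parameterization replaces the paper's simulations and a nearest-neighbour lookup replaces the CNN; none of the functional forms or numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 128)

def profile(wavelength, q):
    """Simulated near-field line profile: a damped propagating wave.
    (Toy parameterization, not the paper's simulation model.)"""
    return np.cos(2 * np.pi * x / wavelength) * np.exp(-np.pi * x / (q * wavelength))

# "Training data": a library of profiles over a (wavelength, quality factor) grid.
params = [(w, q) for w in np.linspace(0.05, 0.30, 26) for q in np.linspace(2.0, 20.0, 19)]
library = np.array([profile(w, q) for w, q in params])

def regress(signal):
    """Nearest-neighbour lookup standing in for the trained CNN regressor."""
    idx = int(np.argmin(np.sum((library - signal) ** 2, axis=1)))
    return params[idx]

# Query: a noisy profile with known ground truth.
w_true, q_true = 0.12, 8.0
w_est, q_est = regress(profile(w_true, q_true) + rng.normal(scale=0.01, size=128))
```

The point being illustrated is the inference pattern: all the expensive work goes into building the model from simulated data once, after which each new image is analyzed by a single fast forward pass.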
Progress in functional materials discovery has been accelerated by advances in high throughput materials synthesis and by the development of high-throughput computation. However, a complementary robust and high throughput structural characterization framework is still lacking. New methods and tools in the field of machine learning suggest that a highly automated high-throughput structural characterization framework based on atomic-level imaging can establish the crucial statistical link between structure and macroscopic properties. Here we develop a machine learning framework towards this goal. Our framework captures local structural features in images with Zernike polynomials, which is demonstrably noise-robust, flexible, and accurate. These features are then classified into readily interpretable structural motifs with a hierarchical active learning scheme powered by a novel unsupervised two-stage relaxed clustering scheme. We have successfully demonstrated the accuracy and efficiency of the proposed methodology by mapping a full spectrum of structural defects, including point defects, line defects, and planar defects in scanning transmission electron microscopy (STEM) images of various 2D materials, with greatly improved separability over existing methods. Our techniques can be easily and flexibly applied to other types of microscopy data with complex features, providing a solid foundation for automatic, multiscale feature analysis with high veracity.
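The feature-then-cluster pipeline above can be sketched compactly. In this hypothetical example, rotation-invariant radial intensity moments stand in for the Zernike-polynomial features, and a minimal 2-means loop stands in for the hierarchical active-learning clustering scheme; the two synthetic "motifs" (a bright dot versus a bright ring) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, size = 40, 15
yy, xx = np.mgrid[:size, :size]
r = np.hypot(xx - size // 2, yy - size // 2)

def patch(kind):
    """Two synthetic motif types: a bright central dot (0) vs. a bright ring (1)."""
    base = np.exp(-((r - (0.0 if kind == 0 else 4.0)) ** 2) / 4.0)
    return base + rng.normal(scale=0.05, size=base.shape)

patches = [patch(i % 2) for i in range(n)]
labels_true = np.array([i % 2 for i in range(n)])

def features(p):
    """Rotation-invariant radial moments (a lightweight stand-in for
    the Zernike-polynomial features used in the paper)."""
    return np.array([p[r < 2].mean(), p[(r >= 2) & (r < 5)].mean()])

F = np.array([features(p) for p in patches])

# Minimal 2-means clustering of the feature vectors into structural motifs.
centers = F[:2].copy()
for _ in range(20):
    assign = np.argmin(((F[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([F[assign == k].mean(axis=0) for k in (0, 1)])
```

Because the features are rotation-invariant, rotated copies of the same motif land in the same cluster, which is the property that makes the resulting motif classes physically interpretable.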
Enrico Camporeale (2019)
The numerous recent breakthroughs in machine learning (ML) make it imperative to carefully consider how the scientific community can benefit from a technology that, although not necessarily new, is today living its golden age. This Grand Challenge review paper is focused on the present and future role of machine learning in space weather. The purpose is twofold. On one hand, we discuss previous works that use ML for space weather forecasting, focusing in particular on the few areas that have seen the most activity: the forecasting of geomagnetic indices, of relativistic electrons at geosynchronous orbits, of solar flare occurrence, of coronal mass ejection propagation time, and of solar wind speed. On the other hand, this paper serves as a gentle introduction to the field of machine learning tailored to the space weather community and as a pointer to a number of open challenges that we believe the community should undertake in the next decade. The recurring themes throughout the review are the need to shift our forecasting paradigm to a probabilistic approach focused on the reliable assessment of uncertainties, and the combination of physics-based and machine learning approaches, known as the gray-box approach.
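The probabilistic-forecasting point can be made concrete with the Brier score, a standard proper scoring rule for binary event forecasts. The sketch below uses synthetic data (not space-weather observations) to show that a well-calibrated probabilistic forecaster scores better than one that converts its beliefs into overconfident hard yes/no calls.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical binary events (think "flare tomorrow?") with known
# true occurrence probabilities.
p_true = rng.uniform(0.0, 1.0, size=20000)
outcome = (rng.uniform(size=20000) < p_true).astype(float)

def brier(forecast, outcome):
    """Brier score: mean squared error of probabilistic forecasts (lower is better)."""
    return float(np.mean((forecast - outcome) ** 2))

# A calibrated forecaster issues the true probability; an overconfident
# one rounds every forecast to a hard 0/1 call.
calibrated = brier(p_true, outcome)
overconfident = brier((p_true > 0.5).astype(float), outcome)
```

For uniformly distributed probabilities the calibrated score tends to 1/6 while the hard-call score tends to 1/4, illustrating why reliable uncertainty assessment, not just hit/miss accuracy, is the right target for the community.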
