
AtomAI: A Deep Learning Framework for Analysis of Image and Spectroscopy Data in (Scanning) Transmission Electron Microscopy and Beyond

Posted by Maxim Ziatdinov
Publication date: 2021
Research field: Physics
Paper language: English





AtomAI is an open-source software package bridging instrument-specific Python libraries, deep learning, and simulation tools into a single ecosystem. AtomAI allows direct application of deep convolutional neural networks to atomic and mesoscopic image segmentation, converting image and spectroscopy data into class-based local descriptors for downstream tasks such as statistical and graph analysis. For atomically resolved imaging data, the output is the types and positions of atomic species, with an option for subsequent refinement. AtomAI further allows the implementation of a broad range of image and spectrum analysis functions, including invariant variational autoencoders (VAEs). The latter consist of VAEs with rotational and (optionally) translational invariance for unsupervised and class-conditioned disentanglement of categorical and continuous data representations. In addition, AtomAI provides utilities for mapping structure-property relationships via im2spec- and spec2im-type encoder-decoder models. Finally, AtomAI allows seamless connection to first-principles modeling through a Python interface, including molecular dynamics and density functional theory calculations on the inferred atomic positions. While the majority of applications to date have been based on atomically resolved electron microscopy, the flexibility of AtomAI allows straightforward extension towards the analysis of mesoscopic imaging data once the labels and feature identification workflows are established/available. The source code and example notebooks are available at https://github.com/pycroscopy/atomai.
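
As a concrete illustration of the segmentation-to-coordinates path described above, a minimal usage sketch is shown below. The Segmentor / fit / predict pattern follows the examples in the AtomAI repository, but argument names and defaults may differ between releases, and the random arrays are stand-ins for real training images and ground-truth masks; treat this as an assumption-laden sketch rather than canonical API.

import numpy as np
import atomai as aoi

# Stand-in training data: image stack and pixel-wise masks (1 = atom, 0 = background)
images = np.random.rand(64, 256, 256).astype(np.float32)
labels = (np.random.rand(64, 256, 256) > 0.95).astype(np.float32)

# Fully convolutional network for single-class atom finding
model = aoi.models.Segmentor(nb_classes=1)
model.fit(images[:48], labels[:48], images[48:], labels[48:],
          training_cycles=300, compute_accuracy=True)

# Apply to an experimental image: returns the decoded (softmax) output
# and a dictionary of atomic coordinates per input image
exp_image = np.random.rand(1, 256, 256).astype(np.float32)
decoded, coords = model.predict(exp_image)
print(coords[0][:5])  # first few (row, column, class) entries

The invariant VAEs and im2spec/spec2im models mentioned above are exposed through a broadly similar fit/predict interface in the repository's example notebooks, though their exact signatures should be checked there.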


Read also

Dynamics of protein self-assembly on an inorganic surface and the resultant geometric patterns are visualized using high-speed atomic force microscopy. The time dynamics of classical macroscopic descriptors such as 2D Fast Fourier Transforms (FFT), correlation functions, and pair distribution functions are explored using unsupervised linear unmixing, demonstrating the presence of static ordered and dynamic disordered phases and establishing their time dynamics. A deep learning (DL)-based workflow is developed to analyze detailed particle dynamics on a particle-by-particle level. Beyond the macroscopic descriptors, we utilize the knowledge of local particle geometries and configurations to explore the evolution of local geometries and reconstruct the interaction potential between the particles. Finally, we use machine learning-based feature extraction to define particle neighborhoods free of physics constraints. This approach allowed us to separate the possible classes of particle behavior, identify the associated transition probabilities, and further extend this analysis to identify slow modes and associated configurations, allowing for systematic exploration and predictive modeling of the time dynamics of the system. Overall, this work establishes a DL-based workflow for the analysis of self-organization processes in complex systems from observational data and provides insight into the fundamental mechanisms.
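
The classical macroscopic descriptors named above are straightforward to reproduce; a minimal numpy sketch for the 2D FFT power spectrum of a frame and an isotropic pair distribution function g(r) from detected particle coordinates follows, with array names, bin widths, and the random stand-in data chosen purely for illustration.

import numpy as np

def radial_pdf(coords, box_size, r_max, dr):
    """Isotropic pair distribution function g(r) from 2D particle coordinates."""
    n = len(coords)
    diff = coords[:, None, :] - coords[None, :, :]          # pairwise displacement vectors
    dist = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]
    counts, edges = np.histogram(dist, bins=np.arange(0.0, r_max + dr, dr))
    r = 0.5 * (edges[1:] + edges[:-1])
    density = n / box_size ** 2
    shell_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    return r, counts / (0.5 * n * density * shell_area)     # normalize by ideal-gas expectation

coords = np.random.rand(400, 2) * 500.0                     # stand-in particle positions (nm)
r, g_r = radial_pdf(coords, box_size=500.0, r_max=100.0, dr=1.0)

frame = np.random.rand(512, 512)                            # stand-in image frame
power = np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2    # 2D FFT power spectrum
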
Scanning transmission electron microscopy (STEM) is now the primary tool for exploring functional materials on the atomic level. Often, features of interest are highly localized in specific regions of the material, such as ferroelectric domain walls, extended defects, or second-phase inclusions. Selecting regions to image for structural and chemical discovery via atomically resolved imaging has traditionally proceeded via human operators making semi-informed judgements on sampling locations and parameters. Recent efforts at automation for structural and physical discovery have pointed towards the use of active learning methods that utilize Bayesian optimization with surrogate models to quickly find relevant regions of interest. Yet despite the potential importance of this direction, there is a general lack of certainty in selecting relevant control algorithms and in how to balance a priori knowledge of the material system with knowledge derived during experimentation. Here we address this gap by developing automated experiment workflows with several combinations of these choices, both to illustrate their effects and to demonstrate the tradeoffs associated with each in terms of accuracy, robustness, and susceptibility to hyperparameters for structural discovery. We discuss possible methods to build descriptors using the raw image data and deep learning-based semantic segmentation, as well as the implementation of variational autoencoder-based representations. Furthermore, each workflow is applied to a range of feature sizes, including NiO pillars within a La:SrMnO$_3$ matrix, ferroelectric domains in BiFeO$_3$, and topological defects in graphene. The code developed in this manuscript is open sourced and will be released at github.com/creangnc/AE_Workflows.
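
To make the surrogate-model idea concrete, here is a generic Gaussian-process plus expected-improvement loop over precomputed patch descriptors. This is not the authors' workflow: the descriptors and responses arrays below are hypothetical stand-ins for features (e.g. from semantic segmentation or a VAE latent space) and for the measured property at sampled locations.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 4))               # one feature vector per candidate location
measured_idx = list(rng.choice(500, 10, replace=False))
responses = rng.normal(size=len(measured_idx))        # stand-in measured property values

for step in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(descriptors[measured_idx], responses)
    mu, sigma = gp.predict(descriptors, return_std=True)
    best = responses.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    ei[measured_idx] = -np.inf                            # never revisit measured locations
    next_idx = int(np.argmax(ei))
    measured_idx.append(next_idx)
    responses = np.append(responses, rng.normal())        # placeholder for a new measurement

In a real experiment the placeholder measurement would be replaced by acquiring and analyzing data at the selected location; the choice of kernel and acquisition function is exactly the kind of hyperparameter the paper's comparison addresses.
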
Physics-driven discovery in an autonomous experiment has emerged as a dream application of machine learning in physical sciences. Here we develop and experimentally implement a deep kernel learning workflow combining the correlative prediction of the target functional response and its uncertainty from the structure with physics-based selection of an acquisition function guiding the navigation of the image space. Compared to classical Bayesian optimization methods, this approach allows capturing the complex spatial features present in images of realistic materials and dynamically learning structure-property relationships towards physical discovery. Here, this approach is illustrated for nanoplasmonic studies of nanoparticles and experimentally implemented for bulk- and edge-plasmon discovery in MnPS3, a lesser-known beam-sensitive layered 2D material. This approach is universal and is expected to be applicable to probe-based microscopic techniques, including other STEM modalities and Scanning Probe Microscopies.
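
A conceptual deep kernel learning sketch, written with GPyTorch (one common implementation choice; the paper's own code may differ): a small neural network maps structural inputs into a latent space on which a GP kernel operates, so the model returns both a predicted response and its uncertainty for use in an acquisition function. All shapes, layer sizes, and data below are assumptions.

import torch
import gpytorch

class DeepKernelGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        # neural feature extractor: flattened structural patch -> 2D embedding
        self.extractor = torch.nn.Sequential(
            torch.nn.Linear(train_x.shape[-1], 32), torch.nn.ReLU(),
            torch.nn.Linear(32, 2))
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        z = self.extractor(x)                 # GP kernel acts on learned features
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z))

train_x = torch.randn(100, 64)                # stand-in image patches (flattened)
train_y = torch.randn(100)                    # stand-in scalar spectral response

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DeepKernelGP(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

model.train(); likelihood.train()
for _ in range(200):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

model.eval(); likelihood.eval()
with torch.no_grad():
    pred = likelihood(model(torch.randn(10, 64)))
    mean, std = pred.mean, pred.variance.sqrt()   # inputs to a physics-based acquisition function
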
Recent advances in (scanning) transmission electron microscopy have enabled routine generation of large volumes of high-veracity structural data on 2D and 3D materials, naturally offering the challenge of using these as starting inputs for atomistic simulations. In this fashion, theory will address experimentally emerging structures, as opposed to the full range of theoretically possible atomic configurations. However, this challenge is highly non-trivial due to the extreme disparity between the intrinsic time scales accessible to modern simulations and microscopy, as well as the latencies of microscopy and simulations per se. Addressing this issue requires, as a first step, bridging the instrumental data flow and a physics-based simulation environment, to enable the selection of regions of interest and their exploration using physical simulations. Here we report the development of a machine learning workflow that directly bridges the instrument data stream into Python-based molecular dynamics and density functional theory environments, using pre-trained neural networks to convert imaging data to physical descriptors. The pathways to ensure structural stability and compensate for the observational biases universally present in the data are identified in the workflow. This approach is used for a graphene system to reconstruct the optimized geometry and simulate temperature-dependent dynamics, including adsorption of Cr as an adatom and graphene healing effects. However, it is universal and can be used for other material systems.
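
A minimal sketch of the hand-off from inferred atomic positions to an atomistic simulation environment, here using ASE to assemble and export a structure; the pixel calibration, element assignment, and cell dimensions are assumptions, and in the actual workflow the coordinates would come from a pre-trained network rather than the random stand-in below.

import numpy as np
from ase import Atoms
from ase.io import write

pixel_coords = np.random.rand(50, 2) * 256    # stand-in (row, col) output of an atom finder
angstrom_per_pixel = 0.1                      # assumed image calibration
positions = np.zeros((len(pixel_coords), 3))
positions[:, :2] = pixel_coords * angstrom_per_pixel

# Build a carbon-only structure (graphene-like field of view) and export it
atoms = Atoms("C" * len(positions), positions=positions,
              cell=[25.6, 25.6, 15.0], pbc=[True, True, False])
write("inferred_structure.xyz", atoms)        # hand off to MD or DFT codes

From here the structure can be relaxed and propagated with any calculator attached to the Atoms object, which is where the stability checks and bias compensation described above would enter.
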
ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, ROOT offers packages for complex data modeling and fitting, as well as multivariate classification based on machine learning techniques. A central piece of these analysis tools is the set of histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like PostScript and PDF or in bitmap formats like JPG or GIF. The result can also be stored into ROOT macros that allow a full recreation and rework of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once the development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks - e.g. data mining in HEP - by using PROOF, which will take care of optimally distributing the work over the available resources in a transparent way.
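
For comparison with the Python-centric workflows above, a minimal PyROOT sketch of the fill-fit-plot cycle described in this abstract; it assumes a ROOT installation with Python bindings and uses toy Gaussian data in place of a real TTree analysis.

import ROOT

rnd = ROOT.TRandom3(42)
h = ROOT.TH1F("h_toy", "Toy distribution;x;entries", 100, -5.0, 5.0)
for _ in range(10000):
    h.Fill(rnd.Gaus(0.0, 1.0))       # fill the histogram with Gaussian toy data

h.Fit("gaus")                         # built-in Gaussian fit
c = ROOT.TCanvas("c", "c", 800, 600)
h.Draw()
c.SaveAs("toy_fit.pdf")               # save in a high-quality graphical format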