We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of $\sim$0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute-force method while maintaining accuracy. The approach is thus suitable for automatically processing large 4D STEM datasets. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least-squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction.
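As a point of reference for the calibration step described above, a minimal brute-force baseline for locating the zero-order disk can be sketched with plain thresholding: binarize the pattern at a fraction of its maximum, then take the centroid and equivalent-area radius of the bright region. This is an illustrative sketch, not the authors' network; the function name, threshold fraction, and synthetic test pattern are assumptions.

```python
import numpy as np

def estimate_disk(pattern, frac=0.5):
    """Estimate the zero-order disk center and radius of a PACBED-like
    pattern by thresholding at `frac` of the maximum intensity, then
    taking the centroid and equivalent-area radius of the bright region."""
    mask = pattern > frac * pattern.max()
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    radius = np.sqrt(mask.sum() / np.pi)  # disk area = pi * r^2
    return cy, cx, radius

# Synthetic test pattern: a uniform disk of radius 20 centered at (64, 70).
yy, xx = np.mgrid[0:128, 0:128]
disk = ((yy - 64) ** 2 + (xx - 70) ** 2 <= 20 ** 2).astype(float)
cy, cx, r = estimate_disk(disk)
```

A per-pattern loop over such a routine is exactly the kind of slow, hand-tuned processing the neural network replaces at $\sim$0.1 s/pattern.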
Convergent beam electron diffraction (CBED) is routinely applied for studying deformation and local strain in thick crystals by matching the crystal structure to the observed intensity distributions. Recently, it has been demonstrated that CBED can be applied for imaging two-dimensional (2D) crystals, where a direct reconstruction is possible and three-dimensional crystal deformations can be retrieved at nanometre resolution. Here, we demonstrate that second-order effects allow further information to be obtained regarding the stacking arrangements between the crystals. Such effects are especially pronounced in samples consisting of multiple layers of 2D crystals. We show, using simulations and experiments, that twisted multilayer samples exhibit extra modulations of interference fringes in CBED patterns, i.e., a CBED moiré. A simple and robust method for evaluating the composition and the number of layers from a single-shot CBED pattern is demonstrated.
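The length scale of the moiré modulations discussed above follows the standard geometric relation for two identical lattices of period $a$ twisted by an angle $\theta$: the moiré period is $\lambda_m = a / (2 \sin(\theta/2))$. A minimal sketch of this relation (the function name is an assumption; the formula itself is the standard twisted-bilayer result, not specific to this paper):

```python
import numpy as np

def moire_period(a, theta):
    """Moire superlattice period for two identical lattices of period `a`
    twisted by `theta` radians: lambda_m = a / (2 * sin(theta / 2))."""
    return a / (2.0 * np.sin(theta / 2.0))

# Example: graphene (a = 0.246 nm) twisted by 1.1 degrees gives a moire
# period of roughly 12.8 nm.
period_nm = moire_period(0.246, np.deg2rad(1.1))
```

The rapid growth of $\lambda_m$ at small twist angles is what makes the extra fringe modulations so prominent in few-degree twisted multilayers.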
X-ray diffraction (XRD) data acquisition and analysis is among the most time-consuming steps in the development cycle of novel thin-film materials. We propose a machine-learning-enabled approach to predict crystallographic dimensionality and space group from a limited number of thin-film XRD patterns. We overcome the scarce-data problem intrinsic to novel materials development by coupling a supervised machine learning approach with a model-agnostic, physics-informed data augmentation strategy using simulated data from the Inorganic Crystal Structure Database (ICSD) and experimental data. As a test case, 115 thin-film metal halides spanning 3 dimensionalities and 7 space groups are synthesized and classified. After testing various algorithms, we develop and implement an all-convolutional neural network, with cross-validated accuracies for dimensionality and space-group classification of 93% and 89%, respectively. We propose average class activation maps, computed from a global average pooling layer, to allow high model interpretability by human experimentalists, elucidating the root causes of misclassification. Finally, we systematically evaluate the maximum XRD pattern step size (data acquisition rate) before loss of predictive accuracy occurs, and determine it to be 0.16°, which enables an XRD pattern to be obtained and classified in 5.5 minutes or less.
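The physics-informed augmentation idea can be sketched in a few lines: perturb a simulated 1D pattern with transformations that mimic real experimental non-idealities, e.g. a small rigid shift in 2θ (strain or sample-height error) and random intensity rescaling (preferred orientation). This is an illustrative sketch under assumed parameter names, not the paper's actual augmentation pipeline.

```python
import numpy as np

def augment_xrd(pattern, rng, max_shift=3, scale_sigma=0.3):
    """Physics-motivated augmentation of a 1D XRD pattern (illustrative):
    - a small rigid shift in 2-theta mimics strain / sample-height error,
    - random log-normal intensity scaling mimics preferred orientation."""
    shift = rng.integers(-max_shift, max_shift + 1)
    shifted = np.roll(pattern, shift)
    scale = np.exp(rng.normal(0.0, scale_sigma, size=pattern.shape))
    return shifted * scale

rng = np.random.default_rng(0)
base = np.zeros(100)
base[50] = 1.0                       # a single idealized Bragg peak
augmented = augment_xrd(base, rng)
```

Applying many such random perturbations to each ICSD-simulated pattern multiplies the effective training-set size, which is how the scarce-data problem is sidestepped.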
The convergent beam electron diffraction (CBED) patterns of twisted bilayer samples exhibit interference patterns in their CBED spots. Such interference patterns can be treated as off-axis holograms, and the phase of the scattered waves, and hence the interlayer distance, can be reconstructed. A detailed protocol of the reconstruction procedure is provided in this study. In addition, we derive an exact formula for reconstructing the interlayer distance from the recovered phase distribution, which takes into account the different chemical compositions of the individual monolayers. It is shown that one interference fringe in a CBED spot is sufficient to reconstruct the distance between the layers, which can be practical for imaging samples with a relatively small twist angle or when probing small sample regions. The quality of the reconstructed interlayer distance is studied as a function of the twist angle. At smaller twist angles, the reconstructed interlayer distance distribution is more precise and artefact-free. At larger twist angles, artefacts due to the moiré structure appear in the reconstruction. A method for the reconstruction of the average interlayer distance is presented. As for resolution, the interlayer distance can be reconstructed by the holographic approach with an accuracy of 0.5 Å, which is a few hundred times better than the intrinsic diffraction-limited z-resolution, as expressed through the spread of the measured k-values. Moreover, we show that holographic CBED imaging can detect variations as small as 0.1 Å in the interlayer distance, though the quantitative reconstruction of such variations suffers from large errors.
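The core of any off-axis holographic reconstruction is sideband filtering: Fourier-transform the fringe pattern, isolate one sideband, and read the phase off the inverse transform. A minimal 1D analogue (all names and parameters are illustrative; the paper's actual protocol operates on 2D CBED spots):

```python
import numpy as np

# Illustrative 1D analogue of off-axis holographic phase retrieval:
# isolate the positive-frequency sideband of the fringe spectrum,
# inverse-transform, and read the phase off the complex result.
N, k0, phi0 = 512, 8, 0.7            # samples, carrier cycles, true phase offset
x = np.arange(N) / N
fringes = 1.0 + np.cos(2 * np.pi * k0 * x + phi0)

F = np.fft.fft(fringes)
F[0] = 0.0                            # remove the DC term
F[N // 2:] = 0.0                      # keep only the positive-frequency sideband
analytic = np.fft.ifft(F)             # ~ 0.5 * exp(i * (2*pi*k0*x + phi0))

phase = np.unwrap(np.angle(analytic))
offset = (phase - 2 * np.pi * k0 * x).mean()   # recovered phase offset
```

Because the phase is recovered rather than the fringe spacing itself, the accuracy is set by the phase noise, not by the diffraction-limited spread of k-values, which is the origin of the few-hundred-fold gain in z-accuracy quoted above.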
Two-dimensional (2D) peak finding is a common practice in data analysis for physics experiments, typically achieved by computing local derivatives. However, this method is inherently unstable when the local landscape is complicated or the signal-to-noise ratio of the data is low. In this work, we propose a new method in which the peak tracking task is formulated as an inverse problem and can thus be solved with a convolutional neural network (CNN). In addition, we show that the underlying physics principle of the experiments can be used to generate the training data. By applying the trained neural network to real experimental data, we show that the CNN method can achieve comparable or better results than traditional derivative-based methods. This approach can be further generalized to other physics experiments when the physical process is known.
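The idea of generating training data from the known physics can be sketched as follows: simulate a noisy image containing a peak with a known location, and pair it with an ideal noise-free heatmap as the regression target. This is a minimal sketch with assumed names and parameters, not the paper's actual data generator.

```python
import numpy as np

def make_training_pair(rng, size=64, sigma=2.0, noise=0.3):
    """Generate one (noisy image, target heatmap) pair for CNN peak
    finding. The peak location is drawn at random, so labels come for
    free from the simulation rather than from manual annotation."""
    cy, cx = rng.uniform(10, size - 10, size=2)
    yy, xx = np.mgrid[0:size, 0:size]
    peak = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    image = peak + rng.normal(0.0, noise, size=(size, size))
    target = peak / peak.max()       # ideal, noise-free heatmap as the label
    return image, target

rng = np.random.default_rng(1)
image, target = make_training_pair(rng)
```

A CNN trained on many such pairs learns the inverse map from noisy data to peak location, which is why it degrades more gracefully at low signal-to-noise than local derivatives.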
Self-mixing interferometry is a well-established interferometric measurement technique. In spite of the robustness and simplicity of the concept, interpreting the self-mixing signal is often complicated in practice, which is detrimental to measurement availability. Here we discuss the use of a convolutional neural network to reconstruct the displacement of a target from the self-mixing signal in a semiconductor laser. The network, once trained on periodic displacement patterns, can reconstruct arbitrarily complex displacements in different alignment conditions and setups. The approach validated here is amenable to generalization to modulated schemes or even to entirely different self-mixing sensing tasks.
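The forward model that such a network learns to invert is simple in the weak-feedback limit: each half-wavelength of target displacement produces one interferometric fringe, so the optical power is modulated as $P(t) \propto \cos(4\pi d(t)/\lambda)$. A minimal sketch of generating one synthetic training pair under this assumed weak-feedback model (function name and parameters are illustrative):

```python
import numpy as np

def smi_signal(displacement, wavelength=1.55e-6):
    """Weak-feedback self-mixing power modulation (illustrative model):
    one fringe per half-wavelength of displacement,
    P(t) ~ cos(4 * pi * d(t) / lambda)."""
    return np.cos(4 * np.pi * displacement / wavelength)

# One synthetic training pair: a sinusoidal target displacement and the
# self-mixing signal a network would learn to invert back to d(t).
t = np.linspace(0.0, 1.0, 2000)
displacement = 3e-6 * np.sin(2 * np.pi * 5 * t)   # 3 um peak, 5 Hz
signal = smi_signal(displacement)
```

The inverse map from the fringe signal back to $d(t)$ is ambiguous fringe by fringe (direction and sub-fringe phase must be inferred from the waveform shape), which is why a learned reconstruction is attractive compared with hand-crafted fringe counting.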