Self-mixing interferometry is a well-established interferometric measurement technique. In spite of the robustness and simplicity of the concept, interpreting the self-mixing signal is often complicated in practice, which is detrimental to measurement availability. Here we discuss the use of a convolutional neural network to reconstruct the displacement of a target from the self-mixing signal in a semiconductor laser. The network, once trained on periodic displacement patterns, can reconstruct arbitrarily complex displacements in different alignment conditions and setups. The approach validated here is amenable to generalization to modulated schemes or even to entirely different self-mixing sensing tasks.
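The training signals described above can be synthesized from the physics of the effect. Below is a minimal sketch of generating a self-mixing power trace from a displacement trace, assuming the standard excess-phase model x_F = x_0 − C·sin(x_F + arctan(α)); the feedback level C, linewidth-enhancement factor α, and modulation depth m are illustrative values, and the fixed-point iteration is only adequate in the weak-feedback regime (C < 1).

```python
import math

def self_mixing_signal(displacement, wavelength=785e-9, C=0.7, alpha=4.0, m=0.1):
    """Synthesize a self-mixing power trace P/P0 = 1 + m*cos(x_F) from a
    target displacement trace, using the excess-phase equation
    x_F = x_0 - C*sin(x_F + atan(alpha)) solved by fixed-point iteration."""
    theta = math.atan(alpha)
    x_F = 0.0
    signal = []
    for L in displacement:
        x0 = 4.0 * math.pi * L / wavelength   # round-trip phase without feedback
        for _ in range(200):                  # warm-started fixed-point iteration
            x_F = x0 - C * math.sin(x_F + theta)
        signal.append(1.0 + m * math.cos(x_F))
    return signal

# example: sinusoidal target displacement, amplitude of two wavelengths
disp = [2 * 785e-9 * math.sin(2 * math.pi * t / 500) for t in range(500)]
sig = self_mixing_signal(disp)
```

A network can then be trained on many such (signal, displacement) pairs with randomized amplitude, C, and noise.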
Two-dimensional (2D) peak finding is a common practice in data analysis for physics experiments and is typically achieved by computing local derivatives. However, this method is inherently unstable when the local landscape is complicated or the signal-to-noise ratio of the data is low. In this work, we propose a new method in which the peak-tracking task is formalized as an inverse problem and thus can be solved with a convolutional neural network (CNN). In addition, we show that the underlying physics of the experiment can be used to generate the training data. By applying the trained neural network to real experimental data, we show that the CNN method can achieve comparable or better results than traditional derivative-based methods. This approach can be further generalized to other physics experiments when the physical process is known.
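For context, the derivative-based baseline the abstract contrasts against amounts to local-maximum detection: a pixel is a peak if it dominates its neighborhood. A minimal sketch (thresholds and sizes are illustrative) shows both why it is simple and why it is fragile, since any noise spike above threshold is also a "peak":

```python
import numpy as np

def find_peaks_2d(img, threshold=0.5):
    """Naive local-maximum peak finder: a pixel is a peak if it exceeds
    `threshold` and strictly dominates its 8 neighbours. This is the
    derivative-style baseline that becomes unstable at low SNR."""
    peaks = []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            if (img[i, j] > threshold and img[i, j] == patch.max()
                    and (patch == img[i, j]).sum() == 1):
                peaks.append((i, j))
    return peaks

# clean synthetic data: one Gaussian blob -> exactly one detected peak
y, x = np.mgrid[0:32, 0:32]
img = np.exp(-((x - 20.0) ** 2 + (y - 12.0) ** 2) / 8.0)
print(find_peaks_2d(img))  # [(12, 20)]
```

Adding noise with amplitude comparable to the threshold produces spurious detections, which is the failure mode the CNN formulation is designed to avoid.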
We establish a series of deep convolutional neural networks to automatically analyze position-averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of $\sim$0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute-force method while maintaining accuracy. The approach is thus suitable for automatically processing large 4D STEM datasets. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least-squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction.
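As a point of reference for the zero-order disk calibration step, a simple classical estimate (not the authors' method, just a hedged illustration of the task) thresholds the pattern and takes the center of mass and area of the bright-field disk:

```python
import numpy as np

def disk_center_and_radius(pattern, frac=0.5):
    """Estimate the zero-order (bright-field) disk centre and radius by
    thresholding at `frac` of the maximum intensity, taking the centre of
    mass of the mask, and inferring the radius from the masked area
    (the disk is assumed approximately circular)."""
    mask = pattern > frac * pattern.max()
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    radius = np.sqrt(mask.sum() / np.pi)
    return cy, cx, radius

# synthetic flat disk of radius 10 centred at (row 24, col 40)
y, x = np.mgrid[0:64, 0:64]
disk = ((y - 24) ** 2 + (x - 40) ** 2 <= 10 ** 2).astype(float)
cy, cx, r = disk_center_and_radius(disk)
print(cy, cx, r)
```

Such heuristics break down for noisy or overlapping disks, which motivates the learned calibration described above.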
Selection of the correct convergence angle is essential for achieving the highest-resolution imaging in scanning transmission electron microscopy (STEM). Use of poor heuristics, such as Rayleigh's quarter-phase rule, to assess probe quality, together with uncertainties in measurement of the aberration function, results in incorrect selection of convergence angles and lower resolution. Here, we show that the Strehl ratio provides an accurate and efficiently calculable criterion for evaluating probe size in STEM. A convolutional neural network trained on the Strehl ratio is shown to outperform experienced microscopists at selecting a convergence angle from a single electron Ronchigram using simulated datasets. Generating tens of thousands of simulated Ronchigram examples, the network is trained to select convergence angles yielding probes on average 85% nearer to optimal size at millisecond speeds (0.02% of human assessment time). Qualitative assessment on experimental Ronchigrams with intentionally introduced aberrations suggests that trends in the optimal convergence angle size are well modeled, but high accuracy requires extensive training datasets. This near-immediate assessment of Ronchigrams using the Strehl ratio and machine learning highlights a viable path toward rapid, automated alignment of aberration-corrected electron microscopes.
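The Strehl ratio itself is cheap to compute: it is the peak intensity of the aberrated probe divided by the peak intensity of the diffraction-limited probe formed by the same aperture. A minimal sketch, with an illustrative circular aperture and a quadratic (defocus-like) aberration phase:

```python
import numpy as np

def strehl_ratio(chi, aperture):
    """Strehl ratio: peak probe intensity with aberration phase `chi` (rad)
    divided by the diffraction-limited peak (same aperture, chi = 0).
    The probe is the Fourier transform of the aperture function."""
    def peak(phase):
        psf = np.abs(np.fft.fft2(aperture * np.exp(-1j * phase))) ** 2
        return psf.max()
    return peak(chi) / peak(np.zeros_like(chi))

# circular aperture (sets the convergence semi-angle) on a normalised k-grid
n = 128
ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] / (n // 2)
k2 = kx ** 2 + ky ** 2
aperture = (k2 <= 0.25).astype(float)   # cutoff radius 0.5 in normalised units
chi = 4.0 * k2 * aperture               # quadratic phase, ~1 rad at the edge
print(strehl_ratio(chi, aperture))      # < 1: the aberration lowers the peak
```

Enlarging the aperture admits more current but also more aberrated phase, so the Strehl ratio as a function of aperture size has a maximum, which is the convergence angle the network learns to select.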
We explore the application of a Convolutional Neural Network (CNN) to image the shear modulus field of an almost incompressible, isotropic, linear elastic medium in plane strain using displacement or strain field data. This problem is important in medicine because the shear modulus of suspicious and potentially cancerous growths in soft tissue is elevated by about an order of magnitude compared to the background of normal tissue. Imaging the shear modulus field can therefore yield high-contrast medical images. Our imaging problem is: given a displacement or strain field (or its components), predict the corresponding shear modulus field. Our CNN is trained using 6000 training examples, each consisting of a displacement or strain field and a corresponding shear modulus field. We observe encouraging results that warrant further research and show the promise of this methodology.
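The structure of such (field, modulus) training pairs can be illustrated with a one-dimensional analogue (an illustration only, not the authors' plane-strain data generation): in a bar of layers in series under unit traction, equilibrium forces the stress to be constant, so the strain is simply the reciprocal of the modulus, and a stiff inclusion appears as a low-strain region.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(n=64):
    """1-D analogue of training-pair generation: a layered bar in series
    under unit traction has constant stress sigma = 1, so equilibrium
    (sigma = mu * eps) gives strain = 1 / modulus pointwise. A stiff
    inclusion (~10x background, as in the lesion contrast above) is
    inserted at a random position."""
    mu = np.ones(n)                    # background modulus (normalised)
    c = int(rng.integers(8, n - 8))    # inclusion centre
    mu[c - 4:c + 4] = 10.0             # order-of-magnitude stiffer inclusion
    strain = 1.0 / mu                  # measured field (CNN input)
    return strain, mu                  # (input, label) pair

strain, mu = make_training_pair()
print(strain.min(), strain.max())  # 0.1 inside the inclusion, 1.0 outside
```

The actual 2D plane-strain problem requires solving the elasticity PDE for each sampled modulus field, but the inverse mapping the CNN learns has the same (field in, modulus out) shape.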
During a tokamak discharge, the plasma can vary between different confinement regimes: Low (L), High (H) and, in some cases, a temporary intermediate state called Dithering (D). In addition, while the plasma is in H mode, Edge Localized Modes (ELMs) can occur. The automatic detection of changes between these states, and of ELMs, is important for tokamak operation. Motivated by this, and by recent developments in Deep Learning (DL), we developed and compared two methods for automatic detection of the occurrence of L-D-H transitions and ELMs, applied to data from the TCV tokamak. These methods consist of a Convolutional Neural Network (CNN) and a Convolutional Long Short-Term Memory Neural Network (Conv-LSTM). We evaluated ELM detection using ROC curves and Youden's score index, and state detection using Cohen's Kappa index.
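Both evaluation metrics named above are straightforward to compute. Youden's index scores a binary detector as sensitivity + specificity − 1 (its maximum over thresholds picks the best ROC operating point), while Cohen's kappa scores multi-class state labels against a reference, corrected for chance agreement. A self-contained sketch with illustrative numbers:

```python
def youden_j(tp, fn, fp, tn):
    """Youden's J = sensitivity + specificity - 1, from the confusion
    matrix of a binary detector (here: ELM / no-ELM at one threshold)."""
    return tp / (tp + fn) + tn / (tn + fp) - 1

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa between two labelings (e.g. predicted vs manual
    L/D/H states): (observed agreement - chance agreement) / (1 - chance)."""
    n = len(labels_a)
    states = sorted(set(labels_a) | set(labels_b))
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    pe = sum((labels_a.count(s) / n) * (labels_b.count(s) / n) for s in states)
    return (po - pe) / (1 - pe)

print(youden_j(tp=90, fn=10, fp=20, tn=80))   # sensitivity 0.9 + specificity 0.8 - 1 = 0.7
print(cohen_kappa(list("LLHHDD"), list("LLHHDH")))  # 5/6 agreement, 1/3 by chance -> 0.75
```

Kappa is the natural choice for the L/D/H task because the three states are heavily imbalanced within a discharge, so raw accuracy would be dominated by the majority state.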