Two-dimensional (2D) peak finding is a common task in data analysis for physics experiments, typically carried out by computing local derivatives. However, this method is inherently unstable when the local landscape is complicated or the signal-to-noise ratio of the data is low. In this work, we propose a new method in which the peak-tracking task is formulated as an inverse problem and can therefore be solved with a convolutional neural network (CNN). In addition, we show that the underlying physics of the experiment can be used to generate the training data. Applying the trained network to real experimental data, we show that the CNN method achieves results comparable to or better than traditional derivative-based methods. The approach can be further generalized to other physics experiments in which the physical process is known.
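The central idea above, using the known physical process to generate labeled training pairs for the inverse problem, can be sketched as follows. This is a minimal illustration only: the Gaussian peak model, image size, and function name are assumptions for demonstration, not the authors' actual setup.

```python
import numpy as np

def make_training_pair(size=64, sigma=3.0, noise=0.1, rng=None):
    """Simulate one noisy 2D map with a Gaussian peak at a random position
    (the assumed physical model) and return it with its peak-location label."""
    rng = rng or np.random.default_rng()
    cy, cx = rng.uniform(8, size - 8, 2)          # random true peak position
    y, x = np.mgrid[0:size, 0:size]
    img = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    img += rng.normal(0.0, noise, img.shape)      # additive measurement noise
    return img.astype(np.float32), np.array([cy, cx], dtype=np.float32)

# A CNN regressor would then be trained to map img -> (cy, cx).
img, label = make_training_pair(rng=np.random.default_rng(0))
```

Because the labels come from the simulator rather than manual annotation, arbitrarily large training sets can be produced at low noise levels of choice.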
Self-mixing interferometry is a well-established interferometric measurement technique. Despite the robustness and simplicity of the concept, interpreting the self-mixing signal is often complicated in practice, which is detrimental to measurement availability. Here we discuss the use of a convolutional neural network to reconstruct the displacement of a target from the self-mixing signal of a semiconductor laser. The network, once trained on periodic displacement patterns, can reconstruct arbitrarily complex displacements under different alignment conditions and setups. The approach validated here is amenable to generalization to modulated schemes or even to entirely different self-mixing sensing tasks.
With the development of the super-resolution convolutional neural network (SRCNN), deep learning techniques have been widely applied to image super-resolution. Previous works mainly focus on optimizing the structure of SRCNN and have achieved good performance in both speed and restoration quality. However, most of these approaches consider only a single scale during training, ignoring the relationship between images at different scales. Motivated by this concern, we propose a cascaded convolutional neural network for image super-resolution (CSRCNN), which comprises three cascaded Fast SRCNNs, each processing images at a specific scale. Images of different scales can be trained simultaneously, and the learned network can make full use of the information residing across scales. Extensive experiments show that our network achieves good performance for image SR.
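The cascaded control flow described above, where each stage handles one upscaling step and feeds the next so that intermediate scales are shared, can be illustrated schematically. The stage function here is a trivial nearest-neighbor stand-in for a trained Fast SRCNN, and the x2/x4/x8 schedule is an assumption, not the paper's exact configuration.

```python
import numpy as np

def stage_up2(img):
    """Stand-in for one trained Fast SRCNN stage; here just a 2x
    nearest-neighbor upscale so the cascade logic is runnable."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def cascaded_sr(img, scale):
    """Apply successive x2 stages until the requested scale is reached,
    so lower-scale outputs are reused by the higher-scale stages."""
    n_stages = {2: 1, 4: 2, 8: 3}[scale]
    out = img
    for _ in range(n_stages):
        out = stage_up2(out)
    return out
```

In the real network each stage would be a learned model, and training all scales jointly lets gradients from every scale shape the shared early stages.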
We establish a series of deep convolutional neural networks to automatically analyze position-averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need to pre-process the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the networks is explored as a function of several variables, including thickness, tilt, and dose. A methodology to probe the response of the neural network to various pattern features is also presented. Processing patterns at a rate of $\sim$0.1 s/pattern, the network is orders of magnitude faster than a brute-force method while maintaining accuracy. The approach is thus suitable for automatically processing large 4D-STEM datasets. We also discuss the generality of the method to other materials and orientations, as well as a hybrid approach that combines the neural network with least-squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction.
Computed tomography (CT) imaging is widely used in geological exploration, medical diagnosis, and other fields. In practice, however, the resolution of CT images is usually limited by the scanning devices and by cost. Super-resolution (SR) methods based on deep learning have achieved impressive performance on two-dimensional (2D) images, but few effective SR algorithms exist for three-dimensional (3D) images. In this paper, we propose a novel network, the three-dimensional super-resolution convolutional neural network (3DSRCNN), to realize voxel super-resolution for CT images. To address practical problems in training, such as slow convergence and insufficient memory, we employ an adjustable learning rate, residual learning, gradient clipping, and momentum stochastic gradient descent (SGD) to optimize the training procedure. We also provide empirical guidelines for setting an appropriate number of network layers and for using the residual-learning strategy. Moreover, whereas previous learning-based algorithms must be trained separately for each scale factor, our single model handles multi-scale SR. Finally, our method outperforms conventional methods in terms of PSNR, SSIM, and efficiency.
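Two of the stabilization strategies named above, gradient clipping and momentum SGD, combine into a single update rule that can be written compactly. This is a generic textbook sketch of those techniques, not the 3DSRCNN implementation; the parameter values are illustrative.

```python
import numpy as np

def sgd_step(w, grad, velocity, lr, momentum=0.9, clip=1.0):
    """One momentum-SGD update with gradient-norm clipping.
    Clipping caps the gradient norm at `clip` to prevent unstable,
    oversized steps; momentum accumulates a velocity across steps."""
    norm = np.linalg.norm(grad)
    if norm > clip:                      # rescale, preserving direction
        grad = grad * (clip / norm)
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# One step on a parameter with an exploding gradient:
w, v = np.array([1.0]), np.zeros(1)
w, v = sgd_step(w, np.array([10.0]), v, lr=0.1)   # grad clipped 10 -> 1
```

An adjustable learning rate, also mentioned in the abstract, would simply vary `lr` over epochs, e.g. decaying it when the validation loss plateaus.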
Selection of the correct convergence angle is essential for achieving the highest-resolution imaging in scanning transmission electron microscopy (STEM). Poor heuristics, such as Rayleigh's quarter-phase rule for assessing probe quality, and uncertainties in the measurement of the aberration function lead to incorrect convergence-angle selection and lower resolution. Here, we show that the Strehl ratio provides an accurate and efficiently computable criterion for evaluating probe size in STEM. A convolutional neural network trained on the Strehl ratio is shown to outperform experienced microscopists at selecting a convergence angle from a single electron Ronchigram on simulated datasets. Trained on tens of thousands of simulated Ronchigram examples, the network selects convergence angles yielding probes on average 85% nearer to the optimal size, at millisecond speeds (0.02% of the human assessment time). Qualitative assessment on experimental Ronchigrams with intentionally introduced aberrations suggests that trends in the optimal convergence angle are well modeled, but high accuracy requires extensive training datasets. This near-immediate assessment of Ronchigrams using the Strehl ratio and machine learning highlights a viable path toward rapid, automated alignment of aberration-corrected electron microscopes.
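The Strehl ratio used as the training target above has a standard closed form: the ratio of the aberrated probe's peak intensity to the aberration-free peak, computed from the aberration phase over the aperture. The sketch below assumes a sampled phase map `chi` (in radians) and a binary aperture mask; the discretization is illustrative, not the authors' code.

```python
import numpy as np

def strehl_ratio(chi, aperture):
    """Strehl ratio for an aberration phase chi (radians) sampled on a
    grid, inside a binary aperture mask: |sum(A * exp(i*chi))|^2 divided
    by the aberration-free peak |sum(A)|^2. Equals 1 for a perfect probe."""
    field = aperture * np.exp(1j * chi)
    return np.abs(field.sum()) ** 2 / aperture.sum() ** 2
```

Because this reduces to one complex sum over the aperture, it is cheap enough to label tens of thousands of simulated Ronchigrams for training.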