Quantitative phase imaging (QPI) has been widely applied in characterizing cells and tissues. Spatial light interference microscopy (SLIM) is a highly sensitive QPI method, owing to its partially coherent illumination and common-path interferometric geometry. However, its acquisition rate is limited by the four-frame phase-shifting scheme. On the other hand, off-axis methods such as diffraction phase microscopy (DPM) allow for single-shot QPI. However, the laser-based DPM system is plagued by spatial noise due to speckles and multiple reflections. In a parallel development, deep learning has proven valuable in the field of bioimaging, especially because of its ability to translate one form of contrast into another. Here, we propose using deep learning to produce synthetic, SLIM-quality, high-sensitivity phase maps from single-shot DPM images as input. We used an inverted microscope with its two ports connected to the DPM and SLIM modules, so that we have access to the two types of images over the same field of view. We constructed a deep learning model based on U-Net and trained it on over 1,000 pairs of DPM and SLIM images. The model learned to remove the speckles in laser DPM and overcame the background phase noise in both the test set and new data. Furthermore, we implemented the neural network inference into the live acquisition software, which now allows a DPM user to observe an extremely low-noise phase image in real time. We demonstrated this principle of computational interference microscopy (CIM) imaging using blood smears, as they contain both erythrocytes and leukocytes, in static and dynamic conditions.
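A minimal sketch in PyTorch of the kind of paired image-to-image training this abstract describes: a network (here a toy stand-in for the full U-Net) learns to map DPM phase maps to co-registered SLIM phase maps with a pixel-wise loss. The architecture, loss, and the random tensors standing in for the image pairs are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Toy stand-in for a full U-Net: one downsampling and one upsampling stage.
class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.up(self.down(x))

model = TinyUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # pixel-wise loss between predicted and SLIM phase maps

# Dummy batch standing in for co-registered DPM (input) / SLIM (target) phase maps.
dpm_phase = torch.randn(4, 1, 256, 256)
slim_phase = torch.randn(4, 1, 256, 256)

for step in range(100):
    pred = model(dpm_phase)          # synthetic, SLIM-like phase map from a DPM frame
    loss = loss_fn(pred, slim_phase)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```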
Localization microscopy is an imaging technique in which the positions of individual nanoscale point emitters (e.g. fluorescent molecules) are determined at high precision from their images. This is the key ingredient in single- and multiple-particle tracking and in several super-resolution microscopy approaches. Localization in three dimensions (3D) can be performed by modifying the image that a point source creates on the camera, namely, the point-spread function (PSF). The PSF is engineered using additional optical elements to vary distinctively with the depth of the point source. However, localizing multiple adjacent emitters in 3D poses a significant algorithmic challenge, due to the lateral overlap of their PSFs. Here, we train a neural network to receive an image containing densely overlapping PSFs of multiple emitters over a large axial range and to output a list of their 3D positions. We then use the network to design the optimal PSF for the multi-emitter case. We demonstrate our approach numerically as well as experimentally by 3D STORM imaging of mitochondria, and by volumetric imaging of dozens of fluorescently labeled telomeres occupying a mammalian nucleus in a single snapshot.
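A toy PyTorch sketch of the dense-localization step: a convolutional network maps a 2D frame of overlapping PSFs to a discretized 3D occupancy volume, from which candidate emitter positions are read out by thresholding. The network depth, grid resolution, and threshold are illustrative assumptions, and the readout omits the sub-voxel refinement a real pipeline would add.

```python
import torch
import torch.nn as nn

DEPTH_BINS = 32  # discretized axial range

# Illustrative CNN: a 2D camera frame in, a per-pixel probability over axial bins out.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, DEPTH_BINS, 3, padding=1),  # one output channel per depth bin
)

image = torch.randn(1, 1, 64, 64)          # frame with overlapping engineered PSFs
volume = torch.sigmoid(net(image))[0]      # (DEPTH_BINS, 64, 64) occupancy map

# Read out candidate 3D positions as voxels above a confidence threshold.
z, y, x = torch.nonzero(volume > 0.9, as_tuple=True)
positions = torch.stack([x, y, z], dim=1)  # list of (x, y, depth-bin) indices
print(positions.shape)
```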
Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily because of the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we introduce DeepTrack 2.0, a software package to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.0 can be easily customized for user-specific applications, and, thanks to its open-source, object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
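The simulate-then-train workflow that such software automates can be illustrated with a generic sketch (deliberately not using DeepTrack's own API, which should be consulted directly): synthetic microscopy-like images with known ground truth are generated on the fly, and a small network is fitted to them, here for single-particle position regression.

```python
import numpy as np
import torch
import torch.nn as nn

# Generic illustration of the simulate-then-train workflow; all names and
# parameters are assumptions, not DeepTrack's API.

def simulate_particle(size=32):
    """One noisy image of a single Gaussian blob at a random position."""
    y0, x0 = np.random.uniform(8, size - 8, 2)
    yy, xx = np.mgrid[:size, :size]
    img = np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * 2.0 ** 2))
    img += 0.05 * np.random.randn(size, size)           # camera noise
    return img.astype(np.float32), np.float32([y0, x0])

net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    imgs, targets = zip(*(simulate_particle() for _ in range(16)))
    imgs = torch.tensor(np.stack(imgs)).unsqueeze(1)     # (16, 1, 32, 32)
    targets = torch.tensor(np.stack(targets))            # (16, 2) ground-truth (y, x)
    loss = nn.functional.mse_loss(net(imgs), targets)    # regress particle position
    opt.zero_grad()
    loss.backward()
    opt.step()
```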
Deep neural networks have emerged as effective tools for computational imaging, including quantitative phase microscopy of transparent samples. To reconstruct phase from intensity, current approaches rely on supervised learning with training examples; consequently, their performance is sensitive to how well the training and imaging settings match. Here we propose a new approach to phase microscopy that uses an untrained deep neural network to model the measurement formation, encapsulating both the image prior and the imaging physics. Our approach does not require any training data and simultaneously reconstructs the sought phase and the pupil-plane aberrations by fitting the weights of the network to the captured images. To demonstrate the method experimentally, we reconstruct quantitative phase from through-focus images blindly (i.e. with no explicit knowledge of the aberrations).
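A compact PyTorch sketch of the untrained-network idea: no training data are used; instead, the weights of a randomly initialized network are optimized so that a physics-based forward model applied to its predicted phase reproduces the measured through-focus intensities. The Fresnel-defocus kernel, wavelength, pixel size, and defocus distances below are placeholder assumptions, and the joint pupil-aberration estimation of the actual method is omitted for brevity.

```python
import torch
import torch.nn as nn

H, W = 64, 64
defocus_distances = [-5e-6, 0.0, 5e-6]                # metres, assumed
measured = torch.rand(len(defocus_distances), H, W)   # stand-in for captured images

fy = torch.fft.fftfreq(H).reshape(-1, 1)
fx = torch.fft.fftfreq(W).reshape(1, -1)
freq_sq = fx ** 2 + fy ** 2

def forward_model(phase, dz, wavelength=5e-7, pixel=1e-7):
    """Propagate a unit-amplitude field with the given phase by dz, return intensity."""
    field = torch.exp(1j * phase)
    kernel = torch.exp(-1j * torch.pi * wavelength * dz * freq_sq / pixel ** 2)
    out = torch.fft.ifft2(torch.fft.fft2(field) * kernel)
    return out.abs() ** 2

# Untrained CNN mapping a fixed random input to the sought phase map.
net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 1, 3, padding=1))
z = torch.randn(1, 1, H, W)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    phase = net(z)[0, 0]
    pred = torch.stack([forward_model(phase, dz) for dz in defocus_distances])
    loss = nn.functional.mse_loss(pred, measured)     # fit weights to the measurements
    opt.zero_grad()
    loss.backward()
    opt.step()
```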
Cryogenic electron microscopy (cryo-EM) provides images from different copies of the same biomolecule in arbitrary orientations. Here, we present an end-to-end unsupervised approach that learns individual particle orientations from cryo-EM data while reconstructing the average 3D map of the biomolecule, starting from a random initialization. The approach relies on an auto-encoder architecture in which the latent space is explicitly interpreted as orientations used by the decoder to form an image according to the linear projection model. We evaluate our method on simulated data and show that it is able to reconstruct 3D particle maps from noisy, CTF-corrupted 2D projection images with unknown particle orientations.
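A toy PyTorch sketch of that auto-encoder structure: the encoder maps each projection image to an orientation, and the decoder rotates a learnable 3D map by that orientation and integrates it along the beam axis (the linear projection model). To stay brief, the orientation is reduced to a single rotation angle about one axis and the CTF is omitted; both simplifications are assumptions, not part of the paper's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 32  # volume side length

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(D * D, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, img):
        return self.net(img)             # predicted rotation angle per image

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.volume = nn.Parameter(0.01 * torch.randn(1, 1, D, D, D))  # average 3D map

    def forward(self, angle):
        c, s = torch.cos(angle), torch.sin(angle)
        zeros, ones = torch.zeros_like(c), torch.ones_like(c)
        # Rotation about the y-axis, written as a batch of 3x4 affine matrices.
        theta = torch.stack([
            torch.cat([c, zeros, s, zeros], dim=-1),
            torch.cat([zeros, ones, zeros, zeros], dim=-1),
            torch.cat([-s, zeros, c, zeros], dim=-1),
        ], dim=1)
        grid = F.affine_grid(theta, [angle.shape[0], 1, D, D, D], align_corners=False)
        rotated = F.grid_sample(self.volume.expand(angle.shape[0], -1, -1, -1, -1),
                                grid, align_corners=False)
        return rotated.sum(dim=2)[:, 0]  # integrate along depth -> projection image

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

images = torch.rand(8, D, D)             # stand-in for noisy 2D particle projections
for step in range(100):
    angles = encoder(images)             # latent space interpreted as orientations
    recon = decoder(angles)
    loss = F.mse_loss(recon, images)     # end-to-end, unsupervised reconstruction
    opt.zero_grad()
    loss.backward()
    opt.step()
```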
Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is degraded by the noise inherent in reconstructing images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
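A brief Python sketch of both stages: the conventional correlation-based CGI reconstruction from random patterns and bucket (single-pixel) measurements, followed by a small convolutional denoiser of the kind the study trains on pairs of noise-contaminated CGI reconstructions and their ground-truth images. The pattern count, toy object, and network size are illustrative assumptions.

```python
import torch
import torch.nn as nn

N, H, W = 2000, 32, 32
obj = torch.zeros(H, W)
obj[10:22, 10:22] = 1.0                      # toy transmissive object

patterns = torch.rand(N, H, W)               # known random illumination patterns
bucket = (patterns * obj).sum(dim=(1, 2))    # total transmitted intensity per pattern

# Conventional CGI estimate: correlation of the bucket signal with the patterns.
cgi = ((bucket - bucket.mean()).view(N, 1, 1) * patterns).mean(dim=0)

# Denoising CNN; a real training set would contain many (noisy CGI, ground truth) pairs.
denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for step in range(200):
    loss = nn.functional.mse_loss(denoiser(cgi[None, None]), obj[None, None])
    opt.zero_grad()
    loss.backward()
    opt.step()

clean_estimate = denoiser(cgi[None, None])   # low-noise prediction for a CGI image
```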