Phase-retrieval approaches based on deep learning (DL) provide a framework for obtaining phase information from an intensity hologram or diffraction pattern robustly and in real time. However, current DL architectures applied to the phase problem have two limitations: i) they rely on paired datasets, i.e., they are only applicable when a satisfactory solution of the phase problem has already been found, and ii) most of them ignore the physics of the imaging process. Here, we present PhaseGAN, a new DL approach based on generative adversarial networks, which allows the use of unpaired datasets and includes the physics of image formation. Including the image-formation physics enhances the performance of our approach and provides phase reconstructions where conventional phase-retrieval algorithms fail, such as in ultra-fast experiments. Thus, PhaseGAN offers the opportunity to address the phase problem when no phase reconstructions are available, but good simulations of the object or data from other experiments are, enabling results that were not attainable before.
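A minimal sketch of the key ingredient described above: a physics operator (here an angular-spectrum free-space propagator acting on a pure-phase object) embedded in a cycle-consistency loss. `G_phase` is a placeholder for the learned intensity-to-phase generator, and all wavelengths, distances, and pixel sizes are illustrative assumptions, not PhaseGAN's actual parameters:

```python
import numpy as np

def propagate(field, dist, wavelength, pixel):
    """Angular-spectrum free-space propagation (the physics operator)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(2j * np.pi * dist / wavelength * np.sqrt(np.maximum(arg, 0)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def forward(phase, dist=0.01, wavelength=1e-9, pixel=1e-6):
    """Map a phase map to the measured intensity hologram (pure-phase object assumed)."""
    field = np.exp(1j * phase)
    return np.abs(propagate(field, dist, wavelength, pixel)) ** 2

def cycle_loss(G_phase, measured_intensity):
    """Cycle consistency: intensity -> phase (generator) -> intensity (physics)."""
    est_phase = G_phase(measured_intensity)  # learned generator (placeholder callable)
    return np.mean((forward(est_phase) - measured_intensity) ** 2)
```

Because the physics operator closes the cycle, the generator can be trained on unpaired data: the loss only compares measured intensities with re-simulated ones, never requiring ground-truth phases.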
While characterization of coherent wavefields is essential to laser, x-ray, and electron imaging, sensors measure the squared magnitude of the field rather than the field itself, so holography or phase retrieval must be used to characterize the field. The need for a reference severely restricts the utility of holography. Phase retrieval, in contrast, is theoretically consistent with sensors that directly measure coherent or partially coherent fields with no prior assumptions. Unfortunately, phase retrieval has not yet been successfully implemented for large-scale fields. Here we show that both holography and phase retrieval are capable of quantum-limited coherent signal estimation, and we describe phase retrieval strategies that approach the quantum limit for >1 megapixel fields. These strategies rely on group testing using networks of interferometers, such as might be constructed using emerging integrated photonic, plasmonic, and/or metamaterial devices. Phase-sensitive sensor planes using such devices could eliminate the need for both lenses and reference signals, creating a path to large-aperture, diffraction-limited laser imaging.
The astounding success of artificial intelligence (AI) in healthcare and other fields proves that AI can achieve human-like performance. However, success always comes with challenges. Deep learning algorithms are data-dependent and require large datasets for training. The lack of data in the medical imaging field creates a bottleneck for the application of deep learning to medical image analysis. Medical image acquisition, annotation, and analysis are costly, and their usage is constrained by ethical restrictions. They also require many resources, such as human expertise and funding. This makes it difficult for non-medical researchers to access useful, large medical datasets. Thus, this paper provides a collection, as comprehensive as possible, of medical image datasets with their associated challenges for deep learning research. We have collected information on around three hundred datasets and challenges, mainly reported between 2013 and 2020, and categorized them into four categories: head & neck, chest & abdomen, pathology & blood, and "others". Our paper has three purposes: 1) to provide the most up-to-date and complete list that can be used as a universal reference to easily find datasets for clinical image analysis, 2) to guide researchers on the methodology to test and evaluate their methods' performance and robustness on relevant datasets, and 3) to provide a "route" to relevant algorithms and challenge leaderboards for the relevant medical topics.
In order to increase the signal-to-noise ratio of measurements, most imaging detectors sacrifice resolution by increasing pixel size within a confined sensor area. Although the pixel super-resolution (PSR) technique enables resolution enhancement in applications such as digital holographic imaging, it suffers from unsatisfactory reconstruction quality. In this work, we report a high-fidelity plug-and-play optimization method for PSR phase retrieval, termed PNP-PSR. It decomposes PSR reconstruction into independent sub-problems based on the generalized alternating-projection framework. An alternating-projection operator and an enhancing neural network are derived to handle measurement fidelity and statistical prior regularization, respectively. In this way, PNP-PSR combines the advantages of the individual operators, achieving both high efficiency and noise robustness. We compare PNP-PSR with existing PSR phase-retrieval algorithms in a series of simulations and experiments, where PNP-PSR outperforms the existing algorithms by as much as 11 dB in PSNR. The enhanced imaging fidelity enables an order of magnitude higher cell-counting precision.
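A toy sketch of the alternating structure described above: a data-fidelity projection that enforces the binned (large-pixel) intensity measurement, alternated with a prior step. A simple box blur stands in for the enhancing neural network, and the block-sum binning and magnitude-rescaling projection are illustrative assumptions, not the published PNP-PSR operators:

```python
import numpy as np

def downsample(img, s):
    """Simulate a large-pixel detector by summing s x s blocks (pixel binning)."""
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).sum(axis=(1, 3))

def project_measurement(field, low_res, s):
    """Fidelity step: rescale magnitudes so the binned intensity matches the measurement."""
    binned = downsample(np.abs(field) ** 2, s)
    scale = np.sqrt(low_res / np.maximum(binned, 1e-12))
    return field * np.kron(scale, np.ones((s, s)))

def denoise(field):
    """Prior step: 3x3 box blur on the magnitude, a crude stand-in for the network."""
    mag = np.abs(field)
    pad = np.pad(mag, 1, mode='edge')
    h, w = mag.shape
    sm = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9
    return sm * np.exp(1j * np.angle(field))

def pnp_psr(low_res, s, iters=20):
    """Alternate fidelity and prior operators on an upsampled field estimate."""
    h, w = low_res.shape
    field = np.ones((h * s, w * s), complex)
    for _ in range(iters):
        field = project_measurement(field, low_res, s)
        field = denoise(field)
    return field
```

In the actual method, the prior step would be a trained enhancing network and the fidelity step would include the holographic propagation model; the plug-and-play structure lets the two be developed and swapped independently.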
Signal recovery from nonlinear measurements involves solving an iterative optimization problem. In this paper, we present a framework for optimizing the sensing parameters to improve the quality of the signal recovered by a given iterative method. In particular, we learn illumination patterns to recover signals from coded diffraction patterns using a fixed-cost, alternating-minimization-based phase retrieval method. Coded diffraction phase retrieval is a physically realistic system in which the signal is first modulated by a sequence of codes before the sensor records its Fourier amplitude. We represent the phase retrieval method as an unrolled network with a fixed number of layers and minimize the recovery error by optimizing over the measurement parameters. Since the number of iterations/layers is fixed, the recovery incurs a fixed cost. We present extensive simulation results on a variety of datasets under different conditions, along with a comparison with existing methods. Our results demonstrate that the proposed method provides near-perfect reconstruction using patterns learned from a small number of training images, and significant improvements over existing methods in both accuracy and speed.
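The coded-diffraction measurement model and a fixed-iteration alternating-projection recovery loop can be sketched as follows. Here the codes are fixed random ±1 masks rather than learned illumination patterns, and the update is a generic magnitude-projection step, not the paper's unrolled network; it only illustrates the fixed-cost structure being optimized over:

```python
import numpy as np

def measure(x, codes):
    """Coded diffraction: modulate the signal by each code, record Fourier magnitudes."""
    return [np.abs(np.fft.fft2(c * x)) for c in codes]

def retrieve(mags, codes, iters=50):
    """Fixed-cost alternating projections, one 'layer' per iteration."""
    shape = codes[0].shape
    x = np.random.default_rng(0).standard_normal(shape)  # random initialization
    for _ in range(iters):
        est = np.zeros(shape, complex)
        for m, c in zip(mags, codes):
            F = np.fft.fft2(c * x)
            F = m * np.exp(1j * np.angle(F))      # enforce the measured magnitudes
            est += np.conj(c) * np.fft.ifft2(F)   # undo the code (|c| = 1 here)
        x = (est / len(codes)).real               # average estimates; signal assumed real
    return x
```

Because `iters` is fixed, the whole of `retrieve` can be viewed as a feed-forward network whose "layers" are these projection steps, which is what makes the recovery cost fixed and the codes learnable by backpropagating through it.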
Deep neural networks have emerged as effective tools for computational imaging, including quantitative phase microscopy of transparent samples. To reconstruct phase from intensity, current approaches rely on supervised learning with training examples; consequently, their performance is sensitive to a mismatch between training and imaging settings. Here we propose a new approach to phase microscopy that uses an untrained deep neural network to model the measurement formation, encapsulating both the image prior and the imaging physics. Our approach does not require any training data and simultaneously reconstructs the sought phase and pupil-plane aberrations by fitting the weights of the network to the captured images. As an experimental demonstration, we reconstruct quantitative phase from through-focus intensity images blindly (i.e., with no explicit knowledge of the aberrations).
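To make the fitting idea concrete, here is a toy version of matching captured through-focus intensities through a physics model. A direct pixel-wise phase map optimized by greedy finite-difference descent stands in for the untrained network and its weight fitting, and the quadratic-phase defocus kernel is a simplified assumption (the actual method also fits pupil aberrations):

```python
import numpy as np

def defocus(field, dist):
    """Toy defocus: quadratic phase in Fourier space (simplified physics model)."""
    n = field.shape[0]
    f = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(f, f)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(-1j * np.pi * dist * (FX**2 + FY**2)))

def loss(phase, stacks, dists):
    """Misfit between predicted and captured through-focus intensity images."""
    field = np.exp(1j * phase)
    return sum(np.mean((np.abs(defocus(field, d)) ** 2 - I) ** 2)
               for I, d in zip(stacks, dists))

def fit(stacks, dists, shape, steps=150, eps=1e-3, lr=0.5, seed=0):
    """Greedy finite-difference descent on the phase map itself; in the paper the
    phase is instead parameterized by an untrained network and its weights are fit."""
    rng = np.random.default_rng(seed)
    phase = np.zeros(shape)
    cur = loss(phase, stacks, dists)
    for _ in range(steps):
        i, j = rng.integers(shape[0]), rng.integers(shape[1])
        probe = phase.copy()
        probe[i, j] += eps
        g = (loss(probe, stacks, dists) - cur) / eps  # finite-difference gradient
        cand = phase.copy()
        cand[i, j] -= lr * g
        new = loss(cand, stacks, dists)
        if new < cur:                                  # keep only improving steps
            phase, cur = cand, new
    return phase
```

The essential point the sketch preserves is that nothing is trained in advance: the only supervision is the captured intensity stack itself, routed through the imaging physics.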