
Learning to Synthesize: Robust Phase Retrieval at Low Photon Counts

Posted by Mo Deng
Publication date: 2019
Research language: English





The quality of inverse problem solutions obtained through deep learning [Barbastathis et al., 2019] is limited by the nature of the priors learned from examples presented during the training phase. In the case of quantitative phase retrieval [Sinha et al., 2017; Goy et al., 2019], in particular, spatial frequencies that are underrepresented in the training database, most often at the high band, tend to be suppressed in the reconstruction. Ad hoc solutions have been proposed, such as pre-amplifying the high spatial frequencies in the examples [Li et al., 2018]; however, while that strategy improves resolution, it also leads to high-frequency artifacts as well as low-frequency distortions in the reconstructions. Here, we present a new approach that learns separately how to handle the two frequency bands, low and high, and also learns how to synthesize these two bands into the full-band reconstruction. We show that this learning-to-synthesize (LS) method yields phase reconstructions that are artifact-free and of high spatial resolution, and that it is also resilient to high-noise conditions, e.g. in the case of very low photon flux. Beyond quantitative phase retrieval, the LS method is applicable, in principle, to any inverse problem whose forward operator treats different frequency bands unevenly, i.e. is ill-posed.
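Below is a minimal, hedged sketch (PyTorch) of the general two-band idea described in the abstract, not the authors' actual architecture: one network is trained toward a low-pass-filtered version of the ground-truth phase, a second toward the high-pass residual, and a small synthesizer network fuses the two estimates into the full-band reconstruction. The network sizes, the ideal-filter cutoff, and the equal loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn


def band_split(phase, cutoff=0.1):
    """Split a (B, 1, H, W) phase map into low- and high-frequency targets
    using an ideal circular low-pass filter (cutoff in cycles per pixel)."""
    _, _, h, w = phase.shape
    fy = torch.fft.fftfreq(h).view(-1, 1)
    fx = torch.fft.fftfreq(w).view(1, -1)
    mask = ((fx ** 2 + fy ** 2).sqrt() <= cutoff).to(phase.dtype)
    low = torch.fft.ifft2(torch.fft.fft2(phase) * mask).real
    return low, phase - low


def small_cnn():
    """Tiny placeholder for the encoder-decoder networks used in practice."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )


class LearningToSynthesize(nn.Module):
    def __init__(self):
        super().__init__()
        self.low_net = small_cnn()    # specializes in the low spatial-frequency band
        self.high_net = small_cnn()   # specializes in the high spatial-frequency band
        self.synth = nn.Sequential(   # fuses the two band estimates into a full-band phase
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, intensity):
        low = self.low_net(intensity)
        high = self.high_net(intensity)
        full = self.synth(torch.cat([low, high], dim=1))
        return low, high, full


# One illustrative training step on random stand-in data.
model = LearningToSynthesize()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
intensity = torch.rand(4, 1, 64, 64)   # placeholder for noisy intensity measurements
phase_gt = torch.rand(4, 1, 64, 64)    # placeholder for ground-truth phase
low_gt, high_gt = band_split(phase_gt)

opt.zero_grad()
low, high, full = model(intensity)
loss = (nn.functional.mse_loss(low, low_gt)
        + nn.functional.mse_loss(high, high_gt)
        + nn.functional.mse_loss(full, phase_gt))
loss.backward()
opt.step()
```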




Read also

Zikui Cai, Rakib Hyder, 2020
Signal recovery from nonlinear measurements involves solving an iterative optimization problem. In this paper, we present a framework to optimize the sensing parameters to improve the quality of the signal recovered by the given iterative method. In particular, we learn illumination patterns to recover signals from coded diffraction patterns using a fixed-cost alternating minimization-based phase retrieval method. Coded diffraction phase retrieval is a physically realistic system in which the signal is first modulated by a sequence of codes before the sensor records its Fourier amplitude. We represent the phase retrieval method as an unrolled network with a fixed number of layers and minimize the recovery error by optimizing over the measurement parameters. Since the number of iterations/layers is fixed, the recovery incurs a fixed cost. We present extensive simulation results on a variety of datasets under different conditions and a comparison with existing methods. Our results demonstrate that the proposed method provides near-perfect reconstruction using patterns learned with a small number of training images. Our proposed method provides significant improvements over existing methods both in terms of accuracy and speed.
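The following sketch illustrates the unrolling idea under simplifying assumptions: it substitutes a plain gradient-descent solver for the alternating-minimization method named in the abstract, uses real-valued codes, and treats image size, step size, and iteration count as arbitrary placeholders. The point is only to show how the recovery error can be back-propagated through a fixed number of solver iterations to the illumination codes.

```python
import torch

n = 32            # the signal is an n x n real image (assumption)
num_codes = 4     # number of illumination patterns to learn
num_iters = 10    # fixed number of unrolled recovery iterations
step = 0.05       # gradient step size inside the unrolled solver

# Learnable illumination codes (real-valued masks here, for simplicity).
codes = torch.randn(num_codes, n, n, requires_grad=True)
meta_opt = torch.optim.Adam([codes], lr=1e-2)


def measure(x, codes):
    """Coded diffraction measurements: Fourier amplitudes of the modulated signal."""
    spec = torch.fft.fft2(codes * x)
    return torch.sqrt(spec.real ** 2 + spec.imag ** 2 + 1e-12)


def unrolled_recover(y, codes):
    """Fixed-cost recovery: a few gradient steps on the amplitude-matching loss,
    kept on the autograd graph so the error can be back-propagated to the codes."""
    x = torch.full((n, n), 0.5, requires_grad=True)   # crude initial guess
    for _ in range(num_iters):
        fit = ((measure(x, codes) - y) ** 2).mean()
        (g,) = torch.autograd.grad(fit, x, create_graph=True)
        x = x - step * g
    return x


# One meta-training step on a random stand-in image.
x_true = torch.rand(n, n)
y = measure(x_true, codes)              # measurements depend on the current codes
x_hat = unrolled_recover(y, codes)
recovery_error = ((x_hat - x_true) ** 2).mean()

meta_opt.zero_grad()
recovery_error.backward()               # gradient flows through all unrolled iterations
meta_opt.step()
```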
While characterization of coherent wavefields is essential to laser, x-ray and electron imaging, sensors measure the squared magnitude of the field, rather than the field itself. Holography or phase retrieval must be used to characterize the field. The need for a reference severely restricts the utility of holography. Phase retrieval, in contrast, is theoretically consistent with sensors that directly measure coherent or partially coherent fields with no prior assumptions. Unfortunately, phase retrieval has not yet been successfully implemented for large-scale fields. Here we show that both holography and phase retrieval are capable of quantum-limited coherent signal estimation, and we describe phase retrieval strategies that approach the quantum limit for >1 megapixel fields. These strategies rely on group testing using networks of interferometers, such as might be constructed using emerging integrated photonic, plasmonic and/or metamaterial devices. Phase-sensitive sensor planes using such devices could eliminate the need both for lenses and reference signals, creating a path to large-aperture diffraction-limited laser imaging.
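As a toy illustration of how a phase-sensitive sensor plane can work, the numpy sketch below interferes every pixel of an unknown coherent field with a single internal reference pixel and recovers the complex field, up to a global phase, from intensity readings alone. This is a much simpler scheme than the group-testing interferometer networks the abstract describes; the field size and the particular set of measurements are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                                            # number of pixels
field = rng.standard_normal(n) + 1j * rng.standard_normal(n)    # unknown coherent field
ref = field[0]                                                   # internal reference pixel

# Intensity-only readings an interferometer network could record.
I_pix = np.abs(field) ** 2                 # each pixel alone
I_sum = np.abs(ref + field) ** 2           # pixel interfered with the reference
I_quad = np.abs(ref + 1j * field) ** 2     # same, with a 90-degree phase shift

# Closed-form recovery of conj(ref) * field from the three intensity readings,
# then of the field itself up to the (unknown) global phase of the reference.
re_part = (I_sum - I_pix[0] - I_pix) / 2
im_part = (I_pix[0] + I_pix - I_quad) / 2
estimate = (re_part + 1j * im_part) / np.sqrt(I_pix[0])

# Check against ground truth after removing the global phase ambiguity.
aligned = estimate * np.exp(1j * np.angle(field[0]))
print(np.allclose(aligned, field))         # True
```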
Wu Shi, Tak-Wai Hui, Ziwei Liu, 2019
Existing unconditional generative models mainly focus on modeling general objects, such as faces and indoor scenes. Fashion textures, another important type of visual element around us, have not been extensively studied. In this work, we propose an effective generative model for fashion textures and also comprehensively investigate the key components involved: internal representation, latent space sampling and the generator architecture. We use the Gram matrix as a suitable internal representation for modeling realistic fashion textures, and further design two dedicated modules for mapping the Gram matrix into a low-dimensional vector. Since fashion textures are scale-dependent, we propose a recursive auto-encoder to capture the dependency between multiple granularity levels of texture features. Another important observation is that fashion textures are multi-modal. We fit and sample from a Gaussian mixture model in the latent space to improve the diversity of the generated textures. Extensive experiments demonstrate that our approach is capable of synthesizing more realistic and diverse fashion textures than other state-of-the-art methods.
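The Gram-matrix representation mentioned above can be sketched in a few lines: the Gram matrix of deep feature maps captures channel-wise correlations and is a standard stand-in for texture statistics. The small random CNN below is only a placeholder for the feature extractor used in practice, and the input sizes are assumptions.

```python
import torch
import torch.nn as nn


def gram_matrix(features):
    """features: (B, C, H, W) -> normalized Gram matrices of shape (B, C, C)."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)


# Placeholder feature extractor; in practice a pretrained backbone would be used.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)

texture_crops = torch.rand(2, 3, 128, 128)              # placeholder texture images
gram = gram_matrix(feature_extractor(texture_crops))
print(gram.shape)                                       # torch.Size([2, 64, 64])
```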
Phase retrieval approaches based on DL provide a framework to obtain phase information from an intensity hologram or diffraction pattern in a robust manner and in real time. However, current DL architectures applied to the phase problem rely (i) on paired datasets, i.e., they are only applicable when a satisfactory solution of the phase problem has already been found, and (ii) most of them ignore the physics of the imaging process. Here, we present PhaseGAN, a new DL approach based on Generative Adversarial Networks, which allows the use of unpaired datasets and includes the physics of image formation. The performance of our approach is enhanced by including the physics of image formation, and it provides phase reconstructions when conventional phase retrieval algorithms fail, such as in ultra-fast experiments. Thus, PhaseGAN offers the opportunity to address the phase problem when no phase reconstructions are available, but good simulations of the object or data from other experiments are, enabling us to obtain results not possible before.
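The sketch below is not the published PhaseGAN architecture; it only illustrates the two ingredients the abstract emphasizes, namely an adversarial loss computed on unpaired data and a consistency term built from a known forward model of image formation. The toy angular-spectrum propagator, network sizes, and loss weight are assumptions.

```python
import torch
import torch.nn as nn


def forward_model(phase, distance=0.05, wavelength=1e-6, pixel=1e-6):
    """Toy image-formation physics: phase object -> propagated intensity
    (angular-spectrum free-space propagation of a pure phase object)."""
    field = torch.exp(1j * phase)
    n = phase.shape[-1]
    f = torch.fft.fftfreq(n, d=pixel)
    fx, fy = torch.meshgrid(f, f, indexing="ij")
    k = 2 * torch.pi / wavelength
    kz = torch.sqrt(torch.clamp(k ** 2 - (2 * torch.pi) ** 2 * (fx ** 2 + fy ** 2), min=0.0))
    propagated = torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * kz * distance))
    return propagated.abs() ** 2


generator = nn.Sequential(           # measured intensity -> phase estimate
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
discriminator = nn.Sequential(       # judges whether a phase map looks realistic
    nn.Conv2d(1, 32, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, padding=1),
)

measured = torch.rand(2, 1, 64, 64)          # measured intensities (no matching phases)
unpaired_phase = torch.rand(2, 1, 64, 64)    # phase maps from a different, unpaired set

phase_hat = generator(measured)
resimulated = forward_model(phase_hat.squeeze(1)).unsqueeze(1)

# Generator objective: fool the critic while staying consistent with the physics.
adv_loss = -discriminator(phase_hat).mean()
physics_loss = nn.functional.l1_loss(resimulated, measured)
gen_loss = adv_loss + 10.0 * physics_loss    # the weight 10.0 is an arbitrary assumption

# Critic objective: separate generated phases from unpaired real-looking phase maps.
disc_loss = discriminator(phase_hat.detach()).mean() - discriminator(unpaired_phase).mean()
print(float(gen_loss), float(disc_loss))
```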
Zihui Wu, Yu Sun, Jiaming Liu, 2019
Regularization by denoising (RED) is a powerful framework for solving imaging inverse problems. Most RED algorithms are iterative batch procedures, which limits their applicability to very large datasets. In this paper, we address this limitation by introducing a novel online RED (On-RED) algorithm, which processes a small subset of the data at a time. We establish the theoretical convergence of On-RED in convex settings and empirically discuss its effectiveness in non-convex ones by illustrating its applicability to phase retrieval. Our results suggest that On-RED is an effective alternative to the traditional RED algorithms when dealing with large datasets.
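A minimal numerical sketch of an online RED-style iteration (not necessarily the authors' exact On-RED algorithm) is given below: each step forms the data-fidelity gradient from a small random subset of linear measurements and adds the regularization-by-denoising term x − D(x), with a box-filter denoiser standing in for a learned one. The sensing model and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, batch = 256, 1024, 64           # signal size, total measurements, minibatch size
gamma, tau, iters = 0.2, 0.5, 300     # step size, RED weight, iteration count

x_true = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")  # smooth signal
A = rng.standard_normal((m, n)) / np.sqrt(m)        # random linear sensing matrix
y = A @ x_true + 0.01 * rng.standard_normal(m)      # noisy measurements


def denoise(x, width=5):
    """Stand-in denoiser D(x): a simple moving-average (box) filter."""
    return np.convolve(x, np.ones(width) / width, mode="same")


x = np.zeros(n)
for _ in range(iters):
    idx = rng.choice(m, size=batch, replace=False)             # online: random minibatch
    grad_fit = (m / batch) * A[idx].T @ (A[idx] @ x - y[idx])  # data-fidelity gradient
    grad_red = tau * (x - denoise(x))                          # regularization by denoising
    x = x - gamma * (grad_fit + grad_red)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```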
