
Non-Convex Structured Phase Retrieval

Added by Namrata Vaswani
Publication date: 2020
Language: English





Phase retrieval (PR), also sometimes referred to as quadratic sensing, is a problem that occurs in numerous signal and image acquisition domains, including optics, X-ray crystallography, Fourier ptychography, sub-diffraction imaging, and astronomy. In each of these domains, the physics of the acquisition system dictates that only the magnitude (intensity) of certain linear projections of the signal or image can be measured. Without any assumptions on the unknown signal, accurate recovery necessarily requires an over-complete set of measurements. The only way to reduce the measurement/sample complexity is to place extra assumptions on the unknown signal/image. A simple and practically valid set of assumptions is obtained by exploiting the structure inherently present in many natural signals or sequences of signals. Two commonly used structural assumptions are (i) sparsity of a given signal/image and (ii) a low-rank model on the matrix formed by a set, e.g., a time sequence, of signals/images. Both have been explored for solving the PR problem in a sample-efficient fashion. This article describes this work, with a focus on non-convex approaches that come with sample complexity guarantees under simple assumptions. We also briefly describe other types of structural assumptions that have been used in the recent literature.
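To make the structured measurement model concrete, here is a minimal sketch of sparse phase retrieval with real-valued Gaussian measurements: only intensities y_i = |a_i^T x|^2 are observed, and recovery alternates gradient steps on the intensity loss with hard thresholding as the sparsity projection. This is an illustrative toy under those assumptions, not the specific algorithm of any paper surveyed here; names like `sparse_pr` are ours.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def sparse_pr(A, y, s, n_iter=500):
    """Thresholded gradient descent on the intensity loss
    f(x) = (1/4m) * sum_i (|a_i^T x|^2 - y_i)^2 (illustrative sketch)."""
    m, n = A.shape
    # Spectral-style initialization: top eigenvector of (1/m) sum_i y_i a_i a_i^T,
    # scaled to the measured energy, then hard-thresholded.
    Y = (A * y[:, None]).T @ A / m
    _, V = np.linalg.eigh(Y)
    x = hard_threshold(np.sqrt(y.mean()) * V[:, -1], s)
    step = 0.1 / y.mean()                  # heuristic stepsize
    for _ in range(n_iter):
        r = (A @ x) ** 2 - y               # intensity residual
        g = A.T @ (r * (A @ x)) / m        # gradient of the quartic loss
        x = hard_threshold(x - step * g, s)
    return x

rng = np.random.default_rng(0)
n, m, s = 100, 400, 5
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2
x_hat = sparse_pr(A, y, s)
# Recovery is only possible up to a global sign, so compare both.
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
```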



Related research

Efficiently representing real-world data in a succinct and parsimonious manner is of central importance in many fields. We present a generalized greedy pursuit framework, allowing us to efficiently solve structured matrix factorization problems, where the factors are allowed to be from arbitrary sets of structured vectors. Such structure may include sparsity, non-negativity, order, or a combination thereof. The algorithm approximates a given matrix by a linear combination of a few rank-1 matrices, each factorized into an outer product of two vector atoms of the desired structure. For the non-convex subproblems of obtaining good rank-1 structured matrix atoms, we employ and analyze a general atomic power method. In addition to the above applications, we prove linear convergence for generalized pursuit variants in Hilbert spaces, for the task of approximation over the linear span of arbitrary dictionaries; this generalizes OMP and is useful beyond matrix problems. Our experiments on real datasets confirm both the efficiency and the broad applicability of our framework in practice.
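As an illustration of the greedy rank-1 idea (not the authors' code), the sketch below fits one structured rank-1 atom to the current residual per greedy step using a projected power method, with non-negative unit-norm atoms as the example structure; swapping the projection map would give sparsity or other atom sets. All function names are illustrative.

```python
import numpy as np

def project_nonneg_unit(v):
    """Project onto unit-norm non-negative vectors (one example atom set)."""
    v = np.maximum(v, 0.0)
    nrm = np.linalg.norm(v)
    return v / nrm if nrm > 0 else np.ones_like(v) / np.sqrt(v.size)

def rank1_atom(R, n_power=50):
    """Projected power-method updates for a structured rank-1 atom u v^T
    approximating the residual R."""
    u = project_nonneg_unit(np.abs(R).sum(axis=1))
    for _ in range(n_power):
        v = project_nonneg_unit(R.T @ u)
        u = project_nonneg_unit(R @ v)
    sigma = u @ R @ v                       # optimal scale for fixed u, v
    return sigma, u, v

def greedy_pursuit(M, rank=5):
    """Approximate M by a linear combination of `rank` structured atoms."""
    approx = np.zeros_like(M)
    for _ in range(rank):
        sigma, u, v = rank1_atom(M - approx)
        approx += sigma * np.outer(u, v)    # add the new rank-1 atom
    return approx

rng = np.random.default_rng(1)
M = rng.random((30, 20)) @ rng.random((20, 20))   # non-negative test matrix
M_hat = greedy_pursuit(M, rank=5)
```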
Phase retrieval (PR) is an important component in modern computational imaging systems. Many algorithms have been developed over the past half century. Recent advances in deep learning have opened up a new possibility for robust and fast PR. An emerging technique, called deep unfolding, provides a systematic connection between conventional model-based iterative algorithms and modern data-based deep learning. Unfolded algorithms, powered by data learning, have shown remarkable performance and convergence speed improvements over the original algorithms. Despite their potential, most existing unfolded algorithms are strictly confined to a fixed number of iterations when employing layer-dependent parameters. In this study, we develop a novel framework for deep unfolding to overcome this limitation. Although our framework can be applied widely to general inverse problems, we take PR as an example in this paper. Our development is based on an unfolded generalized expectation consistent signal recovery (GEC-SR) algorithm, wherein damping factors are left for data-driven learning. In particular, we introduce a hypernetwork to generate the damping factors for GEC-SR. Instead of directly learning a set of optimal damping factors, the hypernetwork learns how to generate the optimal damping factors according to the clinical settings, thus ensuring its adaptivity to different scenarios. To make the hypernetwork adapt to a varying number of layers, we use a recurrent architecture to develop a dynamic hypernetwork, which generates a damping factor that can vary online across layers. We also exploit a self-attention mechanism to enhance the robustness of the hypernetwork. Extensive experiments show that the proposed algorithm outperforms existing ones in convergence speed and accuracy, and still works well under very harsh settings in which many classical PR algorithms become unstable or even fail.
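A minimal PyTorch sketch of the recurrent-hypernetwork idea, with plain damped gradient steps standing in for the paper's actual GEC-SR iterations: a GRU cell consumes a per-layer residual statistic and emits a damping factor in (0, 1), so the same network serves any number of unfolded layers. `DampingHyperNet` and `unfold` are names we made up, and the network is left untrained here; in practice it would be fit across many problem instances.

```python
import torch
import torch.nn as nn

class DampingHyperNet(nn.Module):
    """GRU-based hypernetwork: maps a per-layer residual statistic to a
    damping factor, carrying recurrent state so depth can vary at run time."""
    def __init__(self, hidden=16):
        super().__init__()
        self.cell = nn.GRUCell(input_size=1, hidden_size=hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, stat, h):
        h = self.cell(stat, h)               # recurrent state across layers
        beta = torch.sigmoid(self.head(h))   # damping factor in (0, 1)
        return beta, h

def unfold(A, y, net, n_layers=20):
    """Unfolded damped gradient iterations for the intensity loss
    sum_i ((a_i^T x)^2 - y_i)^2; damping is generated online per layer."""
    m, n = A.shape
    x = 0.1 * torch.randn(n, 1)              # crude init (spectral in practice)
    h = torch.zeros(1, net.cell.hidden_size)
    for _ in range(n_layers):
        r = (A @ x) ** 2 - y                  # intensity residual
        g = A.t() @ (r * (A @ x)) / m         # gradient of the quartic loss
        beta, h = net(r.norm().view(1, 1), h) # damping from the hypernetwork
        x = x - beta * g / y.mean()           # damped update
    return x

torch.manual_seed(0)
A = torch.randn(200, 20)
x_true = torch.randn(20, 1)
y = (A @ x_true) ** 2
net = DampingHyperNet()                       # untrained; illustrative only
x_hat = unfold(A, y, net)
```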
Phase retrieval deals with the estimation of complex-valued signals solely from the magnitudes of linear measurements. While there has been a recent explosion in the development of phase retrieval algorithms, the lack of a common interface has made it difficult to compare new methods against the state-of-the-art. The purpose of PhasePack is to create a common software interface for a wide range of phase retrieval algorithms and to provide a common testbed using both synthetic data and empirical imaging datasets. PhasePack is able to benchmark a large number of recent phase retrieval methods against one another to generate comparisons using a range of different performance metrics. The software package handles single method testing as well as multiple method comparisons. The algorithm implementations in PhasePack differ slightly from their original descriptions in the literature in order to achieve faster speed and improved robustness. In particular, PhasePack uses adaptive stepsizes, line-search methods, and fast eigensolvers to speed up and automate convergence.
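PhasePack itself is a MATLAB package; the sketch below only illustrates the common-interface idea in Python: every solver exposes the same signature, so one benchmark loop can compare methods on the same data with the same metric. All names here are hypothetical, not PhasePack's API.

```python
import numpy as np

def solve_gd(A, y, n_iter=300):
    """Baseline solver: gradient descent on the intensity loss (illustrative)."""
    m, n = A.shape
    x = 0.1 * np.random.default_rng(0).standard_normal(n)
    for _ in range(n_iter):
        x -= (0.1 / y.mean()) * A.T @ (((A @ x) ** 2 - y) * (A @ x)) / m
    return x

def benchmark(solvers, A, y, x_true):
    """Run each solver on the same (A, y) and report relative error,
    accounting for the inherent global sign ambiguity."""
    for name, solver in solvers.items():
        x_hat = solver(A, y)
        err = min(np.linalg.norm(x_hat - x_true),
                  np.linalg.norm(x_hat + x_true)) / np.linalg.norm(x_true)
        print(f"{name}: relative error {err:.3f}")

rng = np.random.default_rng(2)
n, m = 50, 400
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
benchmark({"gradient_descent": solve_gd}, A, (A @ x_true) ** 2, x_true)
```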
Shelby Kimmel, Yi-Kai Liu (2015)
We consider a variant of the phase retrieval problem, where vectors are replaced by unitary matrices, i.e., the unknown signal is a unitary matrix U, and the measurements consist of squared inner products |Tr(C*U)|^2 with unitary matrices C that are chosen by the observer. This problem has applications to quantum process tomography, when the unknown process is a unitary operation. We show that PhaseLift, a convex programming algorithm for phase retrieval, can be adapted to this matrix setting, using measurements that are sampled from unitary 4- and 2-designs. In the case of unitary 4-design measurements, we show that PhaseLift can reconstruct all unitary matrices, using a near-optimal number of measurements. This extends previous work on PhaseLift using spherical 4-designs. In the case of unitary 2-design measurements, we show that PhaseLift still performs well on average: it recovers almost all signals, up to a constant additive error, using a near-optimal number of measurements. These 2-design measurements are convenient for quantum process tomography, as they can be implemented via randomized benchmarking techniques. This is the first positive result on PhaseLift using 2-designs.
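The measurement model and the lifting step behind PhaseLift can be checked numerically. The sketch below uses Haar-random unitaries from scipy as a stand-in for the 4-/2-design ensembles analyzed in the paper, and verifies the identity |Tr(C*U)|^2 = c^H (u u^H) c with u = vec(U) and c = vec(C), which is what makes each measurement linear in the lifted matrix u u^H.

```python
import numpy as np
from scipy.stats import unitary_group

d = 4
U = unitary_group.rvs(d, random_state=0)       # unknown unitary "signal"
C = unitary_group.rvs(d, random_state=1)       # observer-chosen measurement

def measurement(C, U):
    """Squared inner product |Tr(C^H U)|^2."""
    return np.abs(np.trace(C.conj().T @ U)) ** 2

y = measurement(C, U)

# Same value through the lift: linear in X = vec(U) vec(U)^H.
u = U.flatten(order="F")
c = C.flatten(order="F")
X = np.outer(u, u.conj())
y_lifted = np.real(c.conj() @ X @ c)
assert np.isclose(y, y_lifted)
# PhaseLift then relaxes the rank-1 constraint on X to a semidefinite program.
```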
Recent years have seen a flurry of activity in designing provably efficient nonconvex procedures for solving statistical estimation problems. Due to the highly nonconvex nature of the empirical loss, state-of-the-art procedures often require proper regularization (e.g., trimming, regularized cost, projection) in order to guarantee fast convergence. For vanilla procedures such as gradient descent, however, prior theory either recommends highly conservative learning rates to avoid overshooting, or completely lacks performance guarantees. This paper uncovers a striking phenomenon in nonconvex optimization: even in the absence of explicit regularization, gradient descent enforces proper regularization implicitly under various statistical models. In fact, gradient descent follows a trajectory staying within a basin that enjoys nice geometry, consisting of points incoherent with the sampling mechanism. This implicit regularization feature allows gradient descent to proceed in a far more aggressive fashion without overshooting, which in turn results in substantial computational savings. Focusing on three fundamental statistical estimation problems, i.e., phase retrieval, low-rank matrix completion, and blind deconvolution, we establish that gradient descent achieves near-optimal statistical and computational guarantees without explicit regularization. In particular, by marrying statistical modeling with generic optimization theory, we develop a general recipe for analyzing the trajectories of iterative algorithms via a leave-one-out perturbation argument. As a byproduct, for noisy matrix completion, we demonstrate that gradient descent achieves near-optimal error control, measured entrywise and by the spectral norm, which might be of independent interest.
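A minimal sketch of the phenomenon studied here, assuming real Gaussian measurements for simplicity: vanilla gradient descent from a spectral initialization, with a constant, fairly aggressive stepsize and no trimming, truncation, or other explicit regularization, applied to phase retrieval.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 50, 600
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2                      # intensity-only measurements

# Spectral initialization: leading eigenvector of (1/m) sum_i y_i a_i a_i^T,
# scaled by the estimated signal norm sqrt(mean(y)).
Y = (A * y[:, None]).T @ A / m
_, V = np.linalg.eigh(Y)
x = np.sqrt(y.mean()) * V[:, -1]

# Vanilla gradient descent on the quartic loss; no explicit regularization.
step = 0.1 / y.mean()
for _ in range(500):
    x -= step * A.T @ (((A @ x) ** 2 - y) * (A @ x)) / m

# Report error up to the unavoidable global sign.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print(f"relative error (up to sign): {err / np.linalg.norm(x_true):.2e}")
```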
