
A general framework for denoising phaseless diffraction measurements

Posted by: Stefano Marchesini
Publication date: 2016
Language: English





We propose a general framework for recovering underlying images from noisy phaseless diffraction measurements, based on the alternating direction method of multipliers (ADMM) and the plug-and-play technique. The algorithm consists of a three-step iteration: (i) solving a generalized least-squares problem with the maximum a posteriori (MAP) estimate of the noise, (ii) Gaussian denoising, and (iii) updating the multipliers. The denoising step uses higher-order filters such as total generalized variation, as well as nonlocal-sparsity-based filters, including the nonlocal means (NLM) and block-matching and 3D filtering (BM3D) filters. The multipliers are updated by a symmetric technique to increase the convergence speed. The proposed method has low computational complexity and comes with a theoretical convergence guarantee, and it recovers images with sharp edges, clean backgrounds and repetitive features from noisy phaseless measurements. Extensive numerical experiments on Fourier phase retrieval (PR) with coded diffraction and ptychographic patterns verify the convergence and efficiency, showing that the proposed method outperforms state-of-the-art PR algorithms both without regularization and with total variation regularization.
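To make the three-step iteration concrete, below is a minimal Python sketch of a plug-and-play ADMM loop of this kind. It is illustrative only: the dense matrix A, the magnitude-projection surrogate for the MAP data-fit step, and the generic denoise callback (standing in for TGV, NLM or BM3D) are simplifying assumptions, not the paper's exact formulation.

    import numpy as np

    def pnp_admm_pr(A, b, denoise, rho=1.0, n_iters=100, seed=0):
        # Plug-and-play ADMM sketch for recovering u from noisy magnitudes b ~ |A u|.
        # A: complex measurement matrix (assumed dense here for simplicity)
        # denoise: any Gaussian denoiser acting on a complex vector (hypothetical callback)
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        v = u.copy()                          # splitting variable fed to the denoiser
        lam = np.zeros(n, dtype=complex)      # scaled multiplier
        G = A.conj().T @ A + rho * np.eye(n)  # system matrix of the least-squares step
        for _ in range(n_iters):
            # (i) data-fit step: a simple magnitude projection stands in for the
            #     MAP estimate, followed by a regularized least-squares solve
            z = A @ u
            z = b * np.exp(1j * np.angle(z))
            u = np.linalg.solve(G, A.conj().T @ z + rho * (v - lam))
            # (iii) symmetric multiplier update: half step on each side of the denoiser
            lam = lam + 0.5 * (u - v)
            # (ii) plug-and-play Gaussian denoising step
            v = denoise(u + lam)
            lam = lam + 0.5 * (u - v)
        return u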




Read also

Phaseless diffraction measurements recorded by a CCD detector are often affected by Poisson noise. In this paper, we propose a dictionary learning model employing patch-based sparsity to denoise Poisson phaseless measurements. The model consists of three terms: (i) a representation term with an orthogonal dictionary, (ii) an $L^0$ pseudo-norm of the coefficient matrix, and (iii) a Kullback-Leibler divergence fitting the phaseless Poisson data. A fast Alternating Minimization Method (AMM) and Proximal Alternating Linearized Minimization (PALM) are adopted to solve the established model with convergence guarantees; in particular, global convergence is derived for PALM. The subproblems of both algorithms admit fast solvers; indeed, the sparse coding and dictionary-update steps both have closed-form solutions thanks to the orthogonality of the learned dictionaries. Numerical experiments on phase retrieval with coded diffraction and ptychographic patterns demonstrate the efficiency and robustness of the proposed methods, which, by preserving texture features, produce visually and quantitatively better denoised images than phase retrieval algorithms without regularization and algorithms promoting only local sparsity.
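Both closed forms are short enough to state in code. The sketch below assumes a square orthogonal dictionary D and a plain Frobenius-norm data fit, so it only illustrates why orthogonality yields closed-form subproblems; the paper's actual model fits Poisson data through the Kullback-Leibler term.

    import numpy as np

    def sparse_code(Y, D, lam):
        # L0 sparse coding with an orthogonal dictionary reduces to
        # hard thresholding of the analysis coefficients D^T Y.
        C = D.T @ Y
        C[np.abs(C) < np.sqrt(lam)] = 0.0
        return C

    def update_dictionary(Y, C):
        # The orthogonal dictionary update is a Procrustes problem:
        # minimize ||Y - D C||_F subject to D^T D = I.
        U, _, Vt = np.linalg.svd(Y @ C.T, full_matrices=False)
        return U @ Vt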
For years, there has been interest in approximation methods for solving dynamic programming problems, because of the inherent complexity of computing optimal solutions characterized by Bellman's principle of optimality. A wide range of approximate dynamic programming (ADP) methods now exists. It is of great interest to guarantee that the performance of an ADP scheme is at least some known fraction, say $\beta$, of optimal. This paper introduces a general approach to bounding the performance of ADP methods, in this sense, in the stochastic setting. The approach is based on new results for bounding greedy solutions in string optimization problems, where one has to choose a string (ordered set) of actions to maximize an objective function. The bounding technique is inspired by submodularity theory, but submodularity is not required for establishing bounds. Instead, the bounding is based on quantifying certain notions of curvature of string functions; the smaller the curvatures, the better the bound. The key insight is that any ADP scheme is a greedy scheme for some surrogate string objective function whose optimal solution and value coincide with those of the original optimal control problem. The ADP scheme is then amenable to the bounding technique above, and the curvatures of the surrogate objective determine the value $\beta$ of the bound. The surrogate objective and its curvatures depend on the specific ADP scheme.
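The greedy schemes at the heart of this analysis are simple to write down. A minimal sketch, with illustrative names: f maps a string (ordered tuple) of actions to its objective value, and the greedy scheme appends whichever action gives the largest marginal gain.

    def greedy_string(actions, f, horizon):
        # Greedy scheme for string optimization: grow the string one action
        # at a time, always appending the action with the largest marginal gain.
        s = ()
        for _ in range(horizon):
            best = max(actions, key=lambda a: f(s + (a,)) - f(s))
            s = s + (best,)
        return s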
One of the challenges in analyzing a learning algorithm is the circular entanglement between the objective value and the stochastic noise, also known as the chicken-and-egg phenomenon. Traditionally, this issue is tackled using the special structure of the problem at hand, so the analysis is difficult to generalize. In this paper, we present a general framework for deriving high-probability bounds for stochastic dynamics in learning algorithms. Our framework composes standard techniques from probability theory into a streamlined three-step recipe with a general and flexible principle for tackling the chicken-and-egg problem. We demonstrate the power and flexibility of our framework by giving a unifying analysis of three very different learning problems, with both last-iterate and strong uniform high-probability convergence guarantees: stochastic gradient descent for strongly convex functions, streaming principal component analysis, and linear bandits with stochastic gradient descent updates. In all three cases we either improve or match the state-of-the-art bounds.
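As one concrete instance of the dynamics covered, here is a sketch of SGD on a strongly convex objective with the classical 1/(mu t) step size; stochastic_grad is a hypothetical oracle, and the framework's contribution is the high-probability analysis of iterates like these, not the loop itself.

    def sgd_strongly_convex(stochastic_grad, x0, mu, n_steps):
        # SGD on a mu-strongly convex objective with step size 1/(mu * t),
        # one of the stochastic dynamics whose last iterate is analyzed.
        x = x0
        for t in range(1, n_steps + 1):
            x = x - stochastic_grad(x) / (mu * t)
        return x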
This paper studies a strategy for data-driven algorithm design for large-scale combinatorial optimization problems that can leverage existing state-of-the-art solvers in general-purpose ways. The goal is to arrive at new approaches that reliably outperform existing solvers in wall-clock time. We focus on solving integer programs and ground our approach in the large neighborhood search (LNS) paradigm, which iteratively chooses a subset of variables to optimize while leaving the remainder fixed. The appeal of LNS is that it can easily use any existing solver as a subroutine, and thus inherits the benefits of carefully engineered heuristic or complete approaches and their software implementations. We show that a good neighborhood selector can be learned using imitation and reinforcement learning techniques. Through an extensive empirical validation in bounded-time optimization, we demonstrate that our LNS framework can significantly outperform state-of-the-art commercial solvers such as Gurobi.
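The LNS loop itself is simple; the learning enters through the neighborhood selector. A generic sketch, with every name hypothetical: solve wraps any existing IP solver called on the problem with some variables fixed, and select_subset is the (possibly learned) policy choosing which variables to free.

    def large_neighborhood_search(objective, solve, select_subset, n_rounds=20):
        # Generic LNS sketch for a minimization integer program.
        incumbent = solve(fixed={})                  # initial incumbent from the solver
        for _ in range(n_rounds):
            free_vars = select_subset(incumbent)     # (learned) neighborhood selection
            fixed = {v: val for v, val in incumbent.items() if v not in free_vars}
            candidate = solve(fixed=fixed)           # re-optimize only the freed variables
            if objective(candidate) < objective(incumbent):
                incumbent = candidate
        return incumbent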
Benchmarks in the utility function have various interpretations, including performance guarantees and risk constraints in fund contracts and reference levels in cumulative prospect theory. In most of the literature, benchmarks are a deterministic constant or a fraction of the underlying wealth; as such, the utility is still a univariate function of the wealth. In this paper, we propose a framework for multivariate utility optimization with general benchmark variables, which include stochastic reference levels as typical examples. The utility is state-dependent and the objective is no longer distribution-invariant. We provide the optimal solution(s) and fully investigate the issues of well-posedness, feasibility, finiteness and attainability. The discussion does not require many classic conditions and assumptions, e.g., the assumption that the Lagrange multiplier always exists. Moreover, several surprising phenomena and technical difficulties may appear: (i) non-uniqueness of the optimal solutions, (ii) various reasons for non-existence of the Lagrange multiplier and corresponding results on the optimal solution, (iii) measurability issues in the concavification of a multivariate utility and the selection of the optimal solutions, and (iv) existence of an optimal solution that is not decreasing with respect to the pricing kernel. These issues are thoroughly addressed, rigorously proved, completely summarized and insightfully visualized. As an application, the framework is adopted to model and solve a constrained utility optimization problem with state-dependent performance and risk benchmarks.
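One plausible reading of the basic problem, in notation that is ours rather than the paper's: with terminal wealth $X_T$, benchmark variable $B$, pricing kernel $\xi$ and initial wealth $x_0$, the investor solves

    $\max_{X_T} \ \mathbb{E}\big[U(X_T, B)\big] \quad \text{subject to} \quad \mathbb{E}\big[\xi X_T\big] \le x_0,$

and the Lagrange multiplier attached to the budget constraint is the object whose possible non-existence the paper analyzes.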