
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints

Published by: Ignace Loris
Publication date: 2008
Language: English





Regularization of ill-posed linear inverse problems via $\ell_1$ penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an $\ell_1$-penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation using $\ell_1$ constraints: a gradient method with projection onto $\ell_1$-balls. The corresponding algorithm again uses iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method.
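The projection onto an $\ell_1$-ball can be computed exactly by a sorting procedure, which makes each projected-gradient step a soft-thresholding with a data-dependent threshold, exactly as the abstract describes. Below is a minimal NumPy sketch of this idea; the function names (`project_l1_ball`, `projected_gradient`), the fixed step size, and the iteration count are illustrative assumptions, not the paper's exact (accelerated) scheme.

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the l1-ball of the given radius
    (sort-based method): a soft-thresholding with threshold tau."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]              # magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    tau = (css[k] - radius) / (k + 1.0)       # variable thresholding parameter
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def projected_gradient(A, y, radius, n_iter=200):
    """Gradient method for min 0.5*||Ax - y||^2 s.t. ||x||_1 <= radius."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = project_l1_ball(x - step * grad, radius)
    return x
```

The threshold `tau` changes from iterate to iterate, which is the "variable thresholding parameter" the abstract refers to.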




Read also

In this paper, we consider Nesterov's Accelerated Gradient method for solving nonlinear inverse and ill-posed problems. Known to be a fast gradient-based iterative method for solving well-posed convex optimization problems, this method also leads to promising results for ill-posed problems. Here, we provide a convergence analysis of this method for ill-posed problems, based on the assumption of a locally convex residual functional. Furthermore, we demonstrate the usefulness of the method on a number of numerical examples based on a nonlinear diagonal operator and on an inverse problem in auto-convolution.
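For a smooth residual functional such as $\tfrac12\|F(x)-y\|^2$, Nesterov's method only changes the point at which the gradient is evaluated. A minimal sketch, assuming a user-supplied gradient oracle and a known Lipschitz constant (both assumptions; the paper's analysis additionally relies on local convexity of the residual):

```python
import numpy as np

def nesterov_agd(grad, x0, lipschitz, n_iter=500):
    """Nesterov's accelerated gradient method with the standard
    momentum sequence t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2."""
    x = x0.copy()
    z = x0.copy()                 # extrapolated point
    t = 1.0
    step = 1.0 / lipschitz
    for _ in range(n_iter):
        x_new = z - step * grad(z)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Example for a linear model F(x) = Ax (hypothetical data A, y):
# x_hat = nesterov_agd(lambda x: A.T @ (A @ x - y), np.zeros(A.shape[1]),
#                      lipschitz=np.linalg.norm(A, 2) ** 2)
```

For ill-posed problems, the iteration count itself acts as the regularization parameter, so early stopping replaces running to convergence.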
We study Bayesian inference methods for solving linear inverse problems, focusing on hierarchical formulations where the prior or the likelihood function depends on unspecified hyperparameters. In practice, these hyperparameters are often determined via an empirical Bayesian method that maximizes the marginal likelihood function, i.e., the probability density of the data conditional on the hyperparameters. Evaluating the marginal likelihood, however, is computationally challenging for large-scale problems. In this work, we present a method to approximately evaluate marginal likelihood functions, based on a low-rank approximation of the update from the prior covariance to the posterior covariance. We show that this approximation is optimal in a minimax sense. Moreover, we provide an efficient algorithm to implement the proposed method, based on a combination of the randomized SVD and a spectral approximation method to compute square roots of the prior covariance matrix. Several numerical examples demonstrate good performance of the proposed method.
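A common way to realize such low-rank updates is a randomized eigendecomposition of the prior-preconditioned data-misfit Hessian; its leading eigenpairs yield both the covariance update and a log-determinant term in the marginal likelihood. The sketch below is a generic randomized range-finder plus Rayleigh-Ritz routine, not the paper's minimax-optimal construction; the `matvec` interface, oversampling count, and rank are assumptions.

```python
import numpy as np

def randomized_eigs(matvec, n, rank, oversample=10, seed=0):
    """Leading eigenpairs of a symmetric PSD operator, given only
    matrix-vector products (randomized range finder + Rayleigh-Ritz)."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((n, rank + oversample))
    Y = np.column_stack([matvec(omega[:, j]) for j in range(omega.shape[1])])
    Q, _ = np.linalg.qr(Y)        # orthonormal basis for the sampled range
    T = Q.T @ np.column_stack([matvec(Q[:, j]) for j in range(Q.shape[1])])
    lam, V = np.linalg.eigh(T)
    order = np.argsort(lam)[::-1][:rank]
    return lam[order], Q @ V[:, order]

# With H = C^{1/2} A^T A C^{1/2} / sigma^2 (prior-preconditioned Hessian) and
# eigenpairs (lam_i, v_i), one standard low-rank surrogate for the
# log-determinant term in the marginal likelihood is sum_i log(1 + lam_i).
```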
The linear equations that arise in interior methods for constrained optimization are sparse, symmetric, and indefinite, and they become extremely ill-conditioned as the interior method converges. These linear systems present a challenge for existing solver frameworks based on sparse LU or LDL^T decompositions. We benchmark five well-known direct linear solver packages using matrices extracted from power grid optimization problems. The achieved solution accuracy varies greatly among the packages. None of the tested packages delivers significant GPU acceleration for our test cases.
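As a concrete illustration of the kind of system being benchmarked, the sketch below assembles a toy KKT-style saddle-point matrix with a nearly singular (1,1) block, solves it with SciPy's SuperLU wrapper, and checks the relative residual. The dimensions, density, and conditioning are arbitrary assumptions, and SuperLU is only a stand-in for the packages tested in the paper.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

# Toy KKT-style sparse symmetric indefinite system [[H, J^T], [J, 0]].
n, m = 50, 10
rng = np.random.default_rng(0)
H = sparse.diags(rng.uniform(1e-8, 1.0, n))   # nearly singular block, as near convergence
J = sparse.random(m, n, density=0.2, random_state=0)
K = sparse.bmat([[H, J.T], [J, None]], format="csc")
rhs = rng.standard_normal(n + m)

lu = splu(K)                                  # sparse LU factorization
x = lu.solve(rhs)
print("relative residual:", np.linalg.norm(K @ x - rhs) / np.linalg.norm(rhs))
```

Comparing this relative residual across factorization packages is essentially the accuracy benchmark the abstract describes.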
Xiaodong Liu, Shixu Meng (2021)
We consider inverse source problems with multi-frequency sparse near field measurements. In contrast to the existing near field operator based on the integral over the space variable, a multi-frequency near field operator is introduced based on the integral over the frequency variable. A factorization of this multi-frequency near field operator is further given and analysed. Motivated by such a factorization, we introduce a multi-frequency sampling method to reconstruct the source support. Its theoretical foundation is then derived from the properties of the factorized operators and a properly chosen point spread function. Numerical examples are provided to illustrate the multi-frequency sampling method with sparse near field measurements. Finally, we briefly discuss how to extend the near field case to the far field case.
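The flavor of such indicators can be shown with a toy backpropagation-style example: correlate sparse multi-frequency data against the Helmholtz point spread at each sampling point and look for peaks. This is only in the spirit of the sampling method; the geometry, frequencies, source locations, and the simple correlation indicator below are all assumptions, not the paper's factorization-based construction.

```python
import numpy as np
from scipy.special import hankel1

def phi(x, z, k):
    """Fundamental solution of the 2D Helmholtz equation at wavenumber k."""
    return 0.25j * hankel1(0, k * np.linalg.norm(x - z, axis=-1))

# Sparse receivers on a circle, a band of frequencies, two true point sources.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
receivers = 2.0 * np.column_stack([np.cos(angles), np.sin(angles)])
freqs = np.linspace(1.0, 10.0, 20)
sources = np.array([[0.3, 0.2], [-0.4, -0.1]])

# Synthetic near field data u(x_j, k) = sum_s phi(x_j, z_s, k).
data = np.array([[phi(rx, sources, k).sum() for rx in receivers]
                 for k in freqs])             # shape (n_freqs, n_receivers)

# Indicator: coherent correlation with the point spread over all frequencies.
grid = np.linspace(-1.0, 1.0, 41)
indicator = np.zeros((grid.size, grid.size))
for i, zx in enumerate(grid):
    for j, zy in enumerate(grid):
        z = np.array([zx, zy])
        psf = np.array([[phi(rx, z, k) for rx in receivers] for k in freqs])
        indicator[j, i] = abs(np.vdot(psf, data))   # vdot conjugates psf
# Peaks of `indicator` concentrate near the true source locations.
```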
We introduce a framework, which we denote as the augmented estimate sequence, for deriving fast algorithms with provable convergence guarantees. We use this framework to construct a new first-order scheme, the Accelerated Composite Gradient Method (ACGM), for large-scale problems with composite objective structure. ACGM surpasses the state-of-the-art methods for this problem class in terms of provable convergence rate, both in the strongly and non-strongly convex cases, and is endowed with an efficient step size search procedure. We support the effectiveness of our new method with simulation results.
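ACGM's estimate-sequence machinery and step-size search are beyond a short sketch, but the composite problem class it targets is easy to exhibit. Below is the standard FISTA baseline for $\min_x \tfrac12\|Ax-y\|^2 + \lambda\|x\|_1$ with a fixed step $1/L$; the step-size search and the strongly convex handling are what ACGM adds on top of this kind of scheme.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accel_composite_gradient(A, y, lam, n_iter=300):
    """FISTA-style accelerated proximal gradient for the composite
    objective 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - (A.T @ (A @ z - y)) / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```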