In this paper, we consider Nesterov's accelerated gradient method for solving nonlinear inverse and ill-posed problems. Known to be a fast gradient-based iterative method for well-posed convex optimization problems, this method also leads to promising results for ill-posed problems. Here, we provide a convergence analysis of this method for ill-posed problems, based on the assumption of a locally convex residual functional. Furthermore, we demonstrate the usefulness of the method on a number of numerical examples based on a nonlinear diagonal operator and on an inverse problem in auto-convolution.
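The core iteration is compact; the following is a minimal sketch applied to the residual functional 0.5 * ||F(x) - y||^2, assuming user-supplied callables `F` (forward operator) and `F_grad` (its Jacobian), the classical (k - 1)/(k + 2) momentum weight, and a fixed step size. All names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def nesterov_accelerated_gradient(F, F_grad, y, x0, step=1e-2, n_iter=200, tol=1e-6):
    """Sketch of Nesterov's accelerated gradient for 0.5 * ||F(x) - y||^2.
    F and F_grad are hypothetical user-supplied callables (forward operator
    and its Jacobian)."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, n_iter + 1):
        # Extrapolation step with the classical (k - 1)/(k + 2) momentum weight.
        z = x + (k - 1) / (k + 2) * (x - x_prev)
        # Gradient of the residual functional at the extrapolated point:
        # F'(z)^T (F(z) - y).
        grad = F_grad(z).T @ (F(z) - y)
        x_prev, x = x, z - step * grad
        # Early stopping on the residual norm (a simple regularizing stopping rule).
        if np.linalg.norm(F(x) - y) < tol:
            break
    return x
```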
Block coordinate descent (BCD) methods approach optimization problems by performing gradient steps along alternating subgroups of coordinates. This is in contrast to full gradient descent, where a gradient step updates all coordinates simultaneously.
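As a rough illustration of that difference, the sketch below updates one coordinate block per inner step rather than the full vector. The objective, gradient, block partition, step size, and iteration count are all illustrative assumptions, not the specific scheme analyzed in the paper.

```python
import numpy as np

def block_coordinate_descent(grad, x0, blocks, step=1e-2, n_iter=100):
    """Sketch of cyclic block coordinate descent for a smooth objective.
    `grad` returns the full gradient; each inner step updates only the
    coordinates of the active block."""
    x = x0.copy()
    for _ in range(n_iter):
        for block in blocks:
            g = grad(x)
            # In contrast to a full gradient step, only this block is updated.
            x[block] -= step * g[block]
    return x

# Example: least-squares objective 0.5 * ||A x - b||^2 with two coordinate blocks.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 6)), rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)
x = block_coordinate_descent(grad, np.zeros(6), blocks=[slice(0, 3), slice(3, 6)])
```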
The analysis of linear ill-posed problems is often carried out in function spaces using tools from functional analysis. However, the numerical solution of these problems is typically computed by first discretizing the problem and then applying tools
In this paper, we propose and analyze a fast two-point gradient algorithm for solving nonlinear ill-posed problems, which is based on the sequential subspace optimization method. A complete convergence analysis is provided under the classical assumptions.
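A bare-bones version of a two-point gradient iteration with discrepancy-principle stopping might look as follows. This is only a sketch under stated assumptions: the combination parameter is held fixed rather than chosen by the paper's subspace-optimization strategy, and `F` and `F_adjoint_grad` are hypothetical user-supplied callables for the forward operator and the adjoint-times-residual map F'(z)^* r.

```python
import numpy as np

def two_point_gradient(F, F_adjoint_grad, y_delta, x0, delta, tau=2.0,
                       step=1e-2, lam=0.5, max_iter=500):
    """Sketch of a two-point gradient iteration for F(x) = y with noisy data
    y_delta, stopped by the discrepancy principle ||F(x) - y_delta|| <= tau * delta.
    The parameter choices here are illustrative placeholders."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(max_iter):
        residual = F(x) - y_delta
        # Discrepancy principle: stop once the residual drops below tau * delta.
        if np.linalg.norm(residual) <= tau * delta:
            break
        # Two-point extrapolation using the current and previous iterates.
        z = x + lam * (x - x_prev)
        # Gradient step with the adjoint of the linearized operator: F'(z)^* (F(z) - y_delta).
        x_prev, x = x, z - step * F_adjoint_grad(z, F(z) - y_delta)
    return x
```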
Regularization of ill-posed linear inverse problems via $\ell_1$ penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an $\ell_1$ penalized functional is via an iterative soft-thresholding algorithm.
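A sketch of that iterative scheme is given below, assuming the standard soft-thresholding update for the functional 0.5 * ||A x - y||^2 + lam * ||x||_1; the step size rule and iteration count are illustrative choices, not those of the paper.

```python
import numpy as np

def ista(A, y, lam, step=None, n_iter=500):
    """Sketch of iterative soft-thresholding for
    0.5 * ||A x - y||^2 + lam * ||x||_1."""
    if step is None:
        # A safe step size is 1 / ||A||^2 (reciprocal of the squared spectral norm).
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the quadratic data-fit term ...
        z = x - step * A.T @ (A @ x - y)
        # ... followed by componentwise soft-thresholding, the proximal map
        # of the l1 penalty.
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```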
In many scientific and technological applications we face the problem of having low-dimensional data that must be explained by a linear model defined in a high-dimensional parameter space. The difference in dimensionality makes the problem ill-defined: