In this paper, we consider Nesterov's accelerated gradient method for solving nonlinear inverse and ill-posed problems. Known to be a fast gradient-based iterative method for solving well-posed convex optimization problems, this method also leads to promising results for ill-posed problems. Here, we provide a convergence analysis of this method for ill-posed problems, based on the assumption of a locally convex residual functional. Furthermore, we demonstrate the usefulness of the method on a number of numerical examples based on a nonlinear diagonal operator and on an inverse problem in auto-convolution.
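To make the iteration concrete, the following minimal Python sketch applies Nesterov-type extrapolation to a Landweber-style gradient step for a toy nonlinear diagonal operator, stopped by the discrepancy principle. The operator F, the step size, and all parameter values are illustrative assumptions, not taken from the paper.

    import numpy as np

    def F(x):                          # illustrative nonlinear diagonal operator
        return x**3

    def F_deriv_adjoint(x, r):         # adjoint of the Frechet derivative F'(x)* applied to r
        return 3.0 * x**2 * r

    def nesterov_landweber(y_delta, x0, step=0.05, alpha=3.0, tau=1.5,
                           delta=1e-3, max_iter=5000):
        x_prev, x = x0.copy(), x0.copy()
        for k in range(1, max_iter + 1):
            # Nesterov-type extrapolation of the two latest iterates
            z = x + (k - 1.0) / (k + alpha - 1.0) * (x - x_prev)
            residual = F(z) - y_delta
            if np.linalg.norm(residual) <= tau * delta:    # discrepancy principle
                return z, k
            # gradient step on 1/2 ||F(.) - y_delta||^2 at the extrapolated point
            x_prev, x = x, z - step * F_deriv_adjoint(z, residual)
        return x, max_iter

    # toy usage: smooth exact solution, noisy data, reconstruction
    x_true = np.linspace(0.2, 1.0, 50)
    rng = np.random.default_rng(0)
    delta = 1e-3
    y_delta = F(x_true) + delta * rng.standard_normal(50) / np.sqrt(50)
    x_rec, n_iter = nesterov_landweber(y_delta, x0=np.full(50, 0.5), delta=delta)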
Block coordinate descent (BCD) methods approach optimization problems by performing gradient steps along alternating subgroups of coordinates. This is in contrast to full gradient descent, where a gradient step updates all coordinates simultaneously. BCD has been demonstrated to accelerate the gradient method in many practical large-scale applications. Despite its success, however, no convergence analysis for inverse problems has been available so far. In this paper, we investigate the BCD method for solving linear inverse problems. As our main theoretical result, we show that for operators having a particular tensor product form, the BCD method combined with an appropriate stopping criterion yields a convergent regularization method. To illustrate the theory, we perform numerical experiments comparing the BCD and the full gradient descent method for a system of integral equations. We also present numerical tests for a nonlinear inverse problem not covered by our theory, namely one-step inversion in multi-spectral X-ray tomography.
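As a rough illustration of the contrast with full gradient descent, the Python sketch below performs cyclic block coordinate descent for a generic linear system A x = y with two coordinate blocks and a discrepancy-principle stopping rule. The operator, the blocks, and all parameters are assumptions for demonstration only and do not reproduce the tensor product setting of the paper.

    import numpy as np

    def bcd(A, y_delta, blocks, tau=1.5, delta=1e-3, max_sweeps=2000):
        x = np.zeros(A.shape[1])
        for sweep in range(max_sweeps):
            if np.linalg.norm(A @ x - y_delta) <= tau * delta:  # discrepancy principle
                return x, sweep
            for idx in blocks:                           # gradient step along one block only
                A_b = A[:, idx]
                step = 1.0 / np.linalg.norm(A_b, 2) ** 2  # blockwise safe step size
                x[idx] -= step * (A_b.T @ (A @ x - y_delta))
        return x, max_sweeps

    # toy usage with two coordinate blocks
    rng = np.random.default_rng(1)
    A = rng.standard_normal((80, 40)) / 80.0
    x_true = rng.standard_normal(40)
    delta = 1e-3
    y_delta = A @ x_true + delta * rng.standard_normal(80) / np.sqrt(80)
    blocks = [np.arange(0, 20), np.arange(20, 40)]
    x_rec, sweeps = bcd(A, y_delta, blocks, delta=delta)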
The analysis of linear ill-posed problems is often carried out in function spaces using tools from functional analysis. However, the numerical solution of these problems is typically computed by first discretizing the problem and then applying tools from (finite-dimensional) linear algebra. The present paper explores the feasibility of applying the Chebfun package to solve ill-posed problems. This approach allows a user to work with functions instead of matrices. The solution process is therefore much closer to the analysis of ill-posed problems than standard linear algebra-based solution methods.
In this paper, we propose and analyze a fast two-point gradient algorithm for solving nonlinear ill-posed problems, which is based on the sequential subspace optimization method. A complete convergence analysis is provided under the classical assumptions for iterative regularization methods. The design of the two-point gradient method involves the choice of the combination parameters, which is discussed systematically. Furthermore, detailed numerical simulations are presented for an inverse potential problem; they show that the proposed method leads to a strong decrease in the number of iterations and that the overall computational time can be significantly reduced.
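The following Python sketch indicates one possible way, assumed here and not taken from the paper, to realize a single two-point gradient step: an extrapolation combining the two latest iterates, with the combination parameter chosen by a simple discrete backtracking over a few candidates, followed by a gradient step. The operator, step size, and candidate set are placeholders.

    import numpy as np

    def tpg_step(F, dF_adj, x, x_prev, y_delta, step, candidates=(1.0, 0.5, 0.25)):
        res_x = np.linalg.norm(F(x) - y_delta)
        for lam in candidates:                       # discrete backtracking over the combination parameter
            z = x + lam * (x - x_prev)
            residual = F(z) - y_delta
            if np.linalg.norm(residual) <= res_x:    # accept if extrapolation does not increase the residual
                return z - step * dF_adj(z, residual)
        return x - step * dF_adj(x, F(x) - y_delta)  # fall back to a plain gradient step

    # toy usage with an illustrative nonlinear diagonal operator
    F = lambda x: x**3
    dF_adj = lambda x, r: 3.0 * x**2 * r
    x_new = tpg_step(F, dF_adj,
                     x=np.array([0.4, 0.6]), x_prev=np.array([0.5, 0.5]),
                     y_delta=np.array([0.2, 0.3]), step=0.1)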
Regularization of ill-posed linear inverse problems via $\ell_1$ penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an $\ell_1$ penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation to $\ell_1$-constraints, using a gradient method, with projection on $\ell_1$-balls. The corresponding algorithm again uses iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method.
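The Python sketch below illustrates the generic projected gradient iteration for $\|Kx - y\|^2$ subject to $\|x\|_1 \le R$, where the projection onto the $\ell_1$-ball itself amounts to soft-thresholding with a data-dependent threshold. It uses the standard sort-based projection rule and illustrative parameters; it is not the authors' implementation, nor the accelerated variant.

    import numpy as np

    def project_l1_ball(x, radius):
        if np.abs(x).sum() <= radius:
            return x.copy()
        u = np.sort(np.abs(x))[::-1]                         # magnitudes, sorted in descending order
        css = np.cumsum(u)
        k = np.nonzero(u * np.arange(1, x.size + 1) > css - radius)[0][-1]
        mu = (css[k] - radius) / (k + 1.0)                   # variable threshold
        return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)  # soft-thresholding

    def projected_gradient(K, y, radius, n_iter=500):
        step = 1.0 / np.linalg.norm(K, 2) ** 2               # safe step for the gradient of 1/2 ||Kx - y||^2
        x = np.zeros(K.shape[1])
        for _ in range(n_iter):
            x = project_l1_ball(x - step * (K.T @ (K @ x - y)), radius)
        return x

    # toy usage: recover a sparse vector from an underdetermined system
    rng = np.random.default_rng(2)
    K = rng.standard_normal((30, 100))
    x_true = np.zeros(100)
    x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
    x_rec = projected_gradient(K, K @ x_true, radius=np.abs(x_true).sum())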
In multiple scientific and technological applications we face the problem of having low-dimensional data to be justified by a linear model defined in a high-dimensional parameter space. The difference in dimensionality makes the problem ill-defined: the model is consistent with the data for many values of its parameters. The objective is to find the probability distribution of parameter values consistent with the data, a problem that can be cast as the exploration of a high-dimensional convex polytope. In this work we introduce a novel algorithm to solve this problem efficiently. It provides results that are statistically indistinguishable from currently used numerical techniques while its running time scales linearly with the system size. We show that the algorithm performs robustly in many abstract and practical applications. As working examples, we simulate the effects of restricting reaction fluxes on the space of feasible phenotypes of a {\em genome}-scale E. coli metabolic network and infer the traffic flow between origin and destination nodes in a real communication network.
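For orientation only, the Python sketch below shows a textbook hit-and-run sampler on a polytope given by inequality constraints; it represents the kind of currently used baseline technique the abstract refers to, not the algorithm introduced in the paper, and the polytope in the usage example is a toy assumption.

    import numpy as np

    def hit_and_run(G, h, x0, n_samples=1000, seed=None):
        # assumes the polytope {x : G x <= h} is bounded and x0 is strictly interior
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        samples = []
        for _ in range(n_samples):
            d = rng.standard_normal(x.size)
            d /= np.linalg.norm(d)
            # the line x + t*d stays inside the polytope for t in [t_lo, t_hi]
            slack, slope = h - G @ x, G @ d
            t_hi = np.min(slack[slope > 0] / slope[slope > 0])
            t_lo = np.max(slack[slope < 0] / slope[slope < 0])
            x = x + rng.uniform(t_lo, t_hi) * d
            samples.append(x.copy())
        return np.array(samples)

    # toy usage: uniform samples from the unit square {x : 0 <= x_i <= 1}
    G = np.vstack([np.eye(2), -np.eye(2)])
    h = np.array([1.0, 1.0, 0.0, 0.0])
    pts = hit_and_run(G, h, x0=[0.5, 0.5], n_samples=500, seed=3)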