The aim of this paper is to investigate the use of an entropic projection method for the iterative regularization of linear ill-posed problems. We derive a closed-form solution for the iterates and analyze their convergence behaviour both in the case of reconstructing general nonnegative unknowns and in the case of recovering probability distributions. Moreover, we discuss several variants of the algorithm and its relations to other methods in the literature. The effectiveness of the approach is studied numerically in several examples.
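The abstract does not spell out the iteration, but the standard entropic construction in this setting is a multiplicative (mirror-descent) update in the Kullback-Leibler geometry, which preserves positivity by design. The sketch below shows that generic construction, not the paper's closed-form iterates; the function name `entropic_mirror_descent`, the step size, and the iteration count are illustrative assumptions.

```python
import numpy as np

def entropic_mirror_descent(A, b, x0, step=1e-2, n_iter=200, simplex=False):
    """Multiplicative (entropic mirror-descent) updates for A x = b, x >= 0.

    Generic sketch, not the paper's closed-form iterates: gradient steps on
    0.5 * ||A x - b||^2 are taken in the Kullback-Leibler geometry, which
    keeps iterates positive automatically.  With simplex=True each iterate
    is renormalized to sum to one, i.e. a probability distribution.
    """
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)         # gradient of the LS data fit
        x = x * np.exp(-step * grad)     # multiplicative (entropic) update
        if simplex:
            x = x / x.sum()              # stay on the probability simplex
    return x

# toy usage: random system with a nonnegative ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.abs(rng.standard_normal(50))
b = A @ x_true
x_rec = entropic_mirror_descent(A, b, x0=np.ones(50), n_iter=500)
```

In the iterative-regularization reading of the abstract, the stopping index `n_iter`, rather than a penalty weight, plays the role of the regularization parameter.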
Ill-posed linear inverse problems appear in many image processing applications, such as deblurring, super-resolution, and compressed sensing. Many restoration strategies involve minimizing a cost function composed of fidelity and prior terms, balanced by a regularization parameter. While a vast amount of research has focused on different prior models, the fidelity term is almost always chosen to be the least squares (LS) objective, which encourages fitting the linearly transformed optimization variable to the observations. In this paper, we examine a different fidelity term, which has been used implicitly by the recently proposed iterative denoising and backward projections (IDBP) framework. This term encourages agreement between the projection of the optimization variable onto the row space of the linear operator and the pseudo-inverse of the linear operator (back-projection) applied to the observations. We analytically examine the difference between the two fidelity terms for Tikhonov regularization and identify cases (such as an ill-conditioned linear operator) where the new term has an advantage over the standard LS one. Moreover, we demonstrate empirically that the behavior of the two induced cost functions for sophisticated convex and non-convex priors, such as total variation, BM3D, and deep generative models, is consistent with the theoretical analysis.
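In our reading of the abstract, the two terms being compared are the LS fidelity ||Ax - y||^2 and the back-projection fidelity ||A^+(Ax - y)||^2. The sketch below, with illustrative names `ls_fidelity` and `bp_fidelity`, contrasts them numerically on an ill-conditioned operator; any scaling constants used in the paper are omitted.

```python
import numpy as np

def ls_fidelity(A, x, y):
    """Standard least-squares fidelity: ||A x - y||^2."""
    r = A @ x - y
    return r @ r

def bp_fidelity(A_pinv, A, x, y):
    """Back-projection fidelity: ||A^+ (A x - y)||^2.  Since A^+ A is the
    orthogonal projector onto the row space of A, this equals
    ||A^+ A x - A^+ y||^2, i.e. it compares the row-space projection of x
    with the back-projected observations."""
    r = A_pinv @ (A @ x - y)
    return r @ r

# An ill-conditioned operator: the BP term reweights each residual
# component by 1/sigma_i^2, so small singular directions dominate it,
# whereas the LS term weights all residual directions equally.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((40, 40)))
V, _ = np.linalg.qr(rng.standard_normal((60, 60)))
s = np.logspace(0, -4, 40)                     # rapidly decaying spectrum
A = U @ np.diag(s) @ V[:, :40].T
A_pinv = np.linalg.pinv(A)

x_cand = rng.standard_normal(60)               # some candidate solution
y = A @ rng.standard_normal(60)                # noiseless observations
print(ls_fidelity(A, x_cand, y), bp_fidelity(A_pinv, A, x_cand, y))
```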
Block coordinate descent (BCD) methods approach optimization problems by performing gradient steps along alternating subgroups of coordinates. This is in contrast to full gradient descent, where a gradient step updates all coordinates simultaneously. BCD has been demonstrated to accelerate the gradient method in many practical large-scale applications. Despite this success, no convergence analysis for inverse problems has been available so far. In this paper, we investigate the BCD method for solving linear inverse problems. As our main theoretical result, we show that for operators having a particular tensor product form, the BCD method combined with an appropriate stopping criterion yields a convergent regularization method. To illustrate the theory, we perform numerical experiments comparing the BCD method and full gradient descent for a system of integral equations. We also present numerical tests for a non-linear inverse problem not covered by our theory, namely one-step inversion in multi-spectral X-ray tomography.
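To make the contrast with full gradient descent concrete, here is a minimal sketch of a cyclic block gradient (block Landweber) iteration for a linear inverse problem, stopped by the standard discrepancy principle. The function name `bcd_landweber`, the block partition, and the stopping rule are illustrative assumptions; the paper's tensor-product operator structure and its precise stopping criterion are not reproduced here.

```python
import numpy as np

def bcd_landweber(A, b_delta, blocks, step, delta, tau=1.5, max_iter=10_000):
    """Cyclic block gradient (block Landweber) iteration for A x = b_delta.

    `blocks` is a list of index arrays partitioning the coordinates, and
    `step` must be small enough for each block (step <= 1/||A_block||^2
    suffices).  The loop stops via the discrepancy principle
    ||A x - b_delta|| <= tau * delta, where delta bounds the noise level.
    """
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        for idx in blocks:                        # sweep the blocks cyclically
            r = A @ x - b_delta
            x[idx] -= step * (A[:, idx].T @ r)    # gradient step on one block
        if np.linalg.norm(A @ x - b_delta) <= tau * delta:
            break                                 # early stopping = regularization
    return x

# usage sketch: four coordinate blocks of a 100-dimensional unknown
# blocks = np.array_split(np.arange(100), 4)
```

Full gradient descent is recovered by passing a single block containing all coordinates, which is what the numerical comparison in the paper is measured against.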
The analysis of linear ill-posed problems is often carried out in function spaces using tools from functional analysis. However, the numerical solution of these problems is typically computed by first discretizing the problem and then applying tools from (finite-dimensional) linear algebra. The present paper explores the feasibility of applying the Chebfun package to solve ill-posed problems. This approach allows a user to work with functions instead of matrices. The solution process is therefore much closer to the analysis of ill-posed problems than standard linear algebra-based solution methods.
GMRES is one of the most popular iterative methods for the solution of large linear systems of equations that arise from the discretization of linear well-posed problems, such as Dirichlet boundary value problems for elliptic partial differential equations. The method is also applied to iteratively solve linear systems of equations obtained by discretizing linear ill-posed problems, such as many inverse problems. However, GMRES does not always perform well when applied to problems of the latter kind. This paper seeks to shed some light on the reasons for the poor performance of GMRES in certain situations, and discusses some remedies based on specific kinds of preconditioning. The standard implementation of GMRES is based on the Arnoldi process, which can also be used to define a solution subspace for Tikhonov or TSVD regularization, giving rise to the Arnoldi-Tikhonov and Arnoldi-TSVD methods, respectively. The performance of the GMRES, Arnoldi-Tikhonov, and Arnoldi-TSVD methods is discussed. Numerical examples illustrate properties of these methods.
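The Arnoldi-Tikhonov idea named in the abstract is standard enough to sketch: run the Arnoldi process on A with initial vector b, then solve the Tikhonov problem restricted to the resulting Krylov subspace. The Python sketch below assumes a square A and user-chosen subspace dimension `k` and regularization parameter `lam`; production codes handle Arnoldi breakdown and select k and lam adaptively (e.g. by the discrepancy principle).

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi process: returns V (n x (k+1)) with orthonormal columns and
    the (k+1) x k Hessenberg matrix H satisfying A V[:, :k] = V H."""
    n = b.size
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)  # breakdown (== 0) not handled here
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov(A, b, k, lam):
    """Tikhonov regularization restricted to the Arnoldi (Krylov) subspace:
    min_y ||H y - beta*e1||^2 + lam*||y||^2, then x = V[:, :k] y."""
    V, H = arnoldi(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = np.linalg.norm(b)
    # stacked least-squares form of the projected Tikhonov problem
    M = np.vstack([H, np.sqrt(lam) * np.eye(k)])
    z = np.concatenate([rhs, np.zeros(k)])
    y, *_ = np.linalg.lstsq(M, z, rcond=None)
    return V[:, :k] @ y
```

The Arnoldi-TSVD variant replaces the stacked least-squares solve with a truncated SVD of the small matrix H, and plain GMRES corresponds to lam = 0.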
Nesterov's well-known scheme for accelerating gradient descent in convex optimization problems is adapted to accelerating stationary iterative solvers for linear systems. Compared with classical Krylov subspace acceleration methods, the proposed scheme requires more iterations, but it is trivial to implement and retains essentially the same computational cost as the unaccelerated method. An explicit formula for a fixed optimal parameter is derived for the case where the stationary iteration matrix has only real eigenvalues, based only on the smallest and largest of those eigenvalues. The fixed parameter, and the corresponding convergence factor, are shown to maintain their optimality when the iteration matrix also has complex eigenvalues contained within an explicitly defined disk in the complex plane. A comparison with Chebyshev acceleration based on the same information about the smallest and largest real eigenvalues (dubbed Restricted Information Chebyshev acceleration) demonstrates that Nesterov's scheme is more robust, in the sense that it remains optimal over a larger domain when the iteration matrix does have some complex eigenvalues. Numerical tests validate the efficiency of the proposed scheme. This work generalizes and extends the results of [1, Lemmas 3.1 and 3.2 and Theorem 3.3].
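The adaptation is simple enough to sketch: replace the gradient step in Nesterov's scheme with one application of the stationary iteration x <- M x + c, and extrapolate with a fixed momentum parameter. In the sketch below, the function name `accelerated_stationary` is illustrative, and the momentum value is a placeholder: the paper's explicit formula for the optimal fixed parameter in terms of the extreme eigenvalues of M is not reproduced here.

```python
import numpy as np

def accelerated_stationary(M, c, beta, x0, n_iter=100):
    """Nesterov-type acceleration of the stationary iteration x <- M x + c.

    `beta` is the fixed momentum parameter; the paper derives an optimal
    value from the smallest and largest (real) eigenvalues of M, so here
    it is simply supplied by the caller.
    """
    x_prev = x0.copy()
    y = x0.copy()
    for _ in range(n_iter):
        x = M @ y + c                   # one step of the base iteration
        y = x + beta * (x - x_prev)     # Nesterov momentum extrapolation
        x_prev = x
    return x

# example: Jacobi as the base iteration for a diagonally dominant A,
# i.e. M = I - D^{-1} A and c = D^{-1} b
rng = np.random.default_rng(2)
A = np.diag(np.full(50, 4.0)) + 0.05 * rng.uniform(-1, 1, (50, 50))
b = rng.standard_normal(50)
D_inv = np.diag(1.0 / np.diag(A))
M = np.eye(50) - D_inv @ A
c = D_inv @ b
x = accelerated_stationary(M, c, beta=0.3, x0=np.zeros(50), n_iter=300)
```

Note the claimed practical appeal: each accelerated step costs one application of M plus a vector update, so the per-iteration cost is essentially that of the unaccelerated solver.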