Nesterov's well-known scheme for accelerating gradient descent in convex optimization problems is adapted to accelerating stationary iterative solvers for linear systems. Compared with classical Krylov subspace acceleration methods, the proposed scheme requires more iterations, but it is trivial to implement and retains essentially the same computational cost as the unaccelerated method. An explicit formula for a fixed optimal parameter is derived for the case where the stationary iteration matrix has only real eigenvalues, based only on the smallest and largest of those eigenvalues. The fixed parameter, and the corresponding convergence factor, are shown to retain their optimality when the iteration matrix also has complex eigenvalues contained within an explicitly defined disk in the complex plane. A comparison with Chebyshev acceleration based on the same information, the smallest and largest real eigenvalues (dubbed Restricted Information Chebyshev acceleration), demonstrates that Nesterov's scheme is more robust, in the sense that it remains optimal over a larger domain when the iteration matrix does have some complex eigenvalues. Numerical tests validate the efficiency of the proposed scheme. This work generalizes and extends the results of [1, Lemmas 3.1 and 3.2 and Theorem 3.3].
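As a concrete illustration of the construction (a minimal sketch, not code from the paper), the snippet below wraps Nesterov-style momentum around a generic stationary iteration x_{k+1} = G x_k + f; the fixed momentum parameter beta stands in for the optimal value the paper derives from the extreme eigenvalues of G, and all names are illustrative.

```python
import numpy as np

def accelerated_stationary_solve(G, f, beta, x0, num_iters=200):
    """Nesterov-style acceleration of the stationary iteration x <- G x + f.

    beta is a fixed momentum parameter; the paper derives an optimal choice
    from the smallest and largest eigenvalues of G.  Names are illustrative.
    """
    x_prev = x0.copy()
    y = x0.copy()
    for _ in range(num_iters):
        x = G @ y + f                  # one step of the base iteration
        y = x + beta * (x - x_prev)    # momentum extrapolation
        x_prev = x
    return x_prev

# Example: Jacobi iteration for A x = b, where G = I - D^{-1} A and
# f = D^{-1} b with D the diagonal of A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
D_inv = np.diag(1.0 / np.diag(A))
G, f = np.eye(2) - D_inv @ A, D_inv @ b
x = accelerated_stationary_solve(G, f, beta=0.3, x0=np.zeros(2))
```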
The aim of this paper is to investigate the use of an entropic projection method for the iterative regularization of linear ill-posed problems. We derive a closed form solution for the iterates and analyze their convergence behaviour both in a case o
First-order methods (FOMs) have recently been applied and analyzed for solving problems with complicated functional constraints. Existing works show that FOMs for functional constrained problems have lower-order convergence rates than those for uncon
Discrete variational methods have shown excellent performance in numerical simulations of different mechanical systems. In this paper, we introduce an iterative method for discrete variational methods appropriate for boundary value problems. More
In PDE-constrained optimization, proper orthogonal decomposition (POD) provides a surrogate model of a (potentially expensive) PDE discretization, on which optimization iterations are executed. Because POD models usually provide good approximation qu
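As background on how a POD surrogate is typically assembled (a textbook sketch under standard assumptions, not this paper's specific construction): collect full-order solution snapshots, take the dominant left singular vectors of the snapshot matrix as the reduced basis, and Galerkin-project the operator onto that basis. All names below are illustrative.

```python
import numpy as np

def pod_basis(snapshots, rank):
    """Dominant left singular vectors of the snapshot matrix as POD modes.

    snapshots: array with one full-order solution per column.
    """
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank]

def galerkin_reduce(A, b, V):
    """Project the full-order linear system A x = b onto the POD basis V."""
    return V.T @ A @ V, V.T @ b

# Optimization iterations are then run on the small reduced system
# A_r z = b_r, and the approximate full-order state is recovered as V @ z.
```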
Quasi-Newton techniques approximate the Newton step by estimating the Hessian using the so-called secant equations. Some of these methods compute the Hessian using several secant equations but produce non-symmetric updates. Other quasi-Newton schemes
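For context on the secant equations mentioned above (a textbook single-secant example, not the update proposed in the paper): a quasi-Newton method chooses the next Hessian approximation B_{k+1} so that B_{k+1} s_k = y_k, where s_k is the step taken and y_k the change in gradient. The classical BFGS update below enforces this single secant equation while keeping the approximation symmetric.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical BFGS update of a symmetric Hessian approximation B.

    Enforces the secant equation B_new @ s == y (shown as textbook
    context; the paper concerns updates built from several secant
    equations).
    """
    Bs = B @ s
    return (B
            - np.outer(Bs, Bs) / (s @ Bs)  # remove old curvature along s
            + np.outer(y, y) / (y @ s))    # insert observed curvature

# Direct check of the secant equation: np.allclose(bfgs_update(B, s, y) @ s, y)
# holds whenever s @ (B @ s) != 0 and y @ s != 0.
```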