First-order methods (FOMs) have recently been applied and analyzed for solving problems with complicated functional constraints. Existing works show that FOMs for functionally constrained problems have lower-order convergence rates than those for unconstrained problems. In particular, an FOM for a smooth strongly-convex problem can have linear convergence, while it can only converge sublinearly for a constrained problem if the projection onto the constraint set is prohibited. In this paper, we point out that the slower convergence is caused by the large number of functional constraints, not by the constraints themselves. When there are only $m=O(1)$ functional constraints, we show that an FOM can have almost the same convergence rate as that for solving an unconstrained problem, even without projection onto the feasible set. In addition, given an $\varepsilon>0$, we show that a complexity result better than a lower bound can be obtained if there are only $m=o(\varepsilon^{-\frac{1}{2}})$ functional constraints. Our result is surprising but does not contradict the existing lower complexity bound, because we focus on a specific subclass of problems. Experimental results on quadratically-constrained quadratic programs support our theory.
Nesterov's well-known scheme for accelerating gradient descent in convex optimization problems is adapted to accelerating stationary iterative solvers for linear systems. Compared with classical Krylov subspace acceleration methods, the proposed schem
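As a minimal illustrative sketch of the idea in the abstract above (not the paper's actual scheme), one can attach Nesterov's constant-momentum extrapolation to the Richardson iteration $x_{k+1} = x_k + \omega(b - Ax_k)$ for a symmetric positive definite system $Ax=b$. The step size `omega` and momentum `beta` below are the standard strongly-convex Nesterov choices and are assumptions of this sketch, not parameters from the paper:

```python
import numpy as np

def accelerated_richardson(A, b, omega, beta, iters):
    """Nesterov-style accelerated Richardson iteration for A x = b (A SPD)."""
    x = np.zeros_like(b)
    y = x.copy()  # extrapolated point
    for _ in range(iters):
        x_next = y + omega * (b - A @ y)   # one Richardson step from the extrapolated point
        y = x_next + beta * (x_next - x)   # Nesterov momentum/extrapolation
        x = x_next
    return x

# Example usage with the textbook parameter choices for an SPD matrix:
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, 1.0])
eigs = np.linalg.eigvalsh(A)
mu, L = eigs[0], eigs[-1]
omega = 1.0 / L                                  # step size 1/lambda_max
kappa = L / mu
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # strongly-convex momentum
x = accelerated_richardson(A, b, omega, beta, 300)
```

With these parameters the iteration is exactly Nesterov's accelerated gradient method applied to the quadratic $\tfrac12 x^\top A x - b^\top x$, so the error contracts at the accelerated linear rate roughly $(1 - 1/\sqrt{\kappa})$ per step instead of the plain Richardson rate $(1 - 1/\kappa)$.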
We provide a condition-based analysis of two interior-point methods for unconstrained geometric programs, a class of convex programs that arise naturally in applications including matrix scaling, matrix balancing, and entropy maximization. Our condit
First-order methods (FOMs) have been widely used for solving large-scale problems. A majority of existing works focus on problems without constraint or with simple constraints. Several recent works have studied FOMs for problems with complicated func
The aim of this paper is to investigate the use of an entropic projection method for the iterative regularization of linear ill-posed problems. We derive a closed form solution for the iterates and analyze their convergence behaviour both in a case o
We study randomized variants of two classical algorithms: coordinate descent for systems of linear equations and iterated projections for systems of linear inequalities. Expanding on a recent randomized iterated projection algorithm of Strohmer and V
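The randomized iterated-projection idea summarized above can be illustrated, for a consistent system of linear equations, by the classical randomized Kaczmarz method: at each step the iterate is orthogonally projected onto the hyperplane of a single equation $a_i^\top x = b_i$, with row $i$ sampled with probability proportional to $\|a_i\|^2$. This is a generic sketch under those assumptions, not the specific algorithm analyzed in the abstract:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters, seed=None):
    """Randomized Kaczmarz for a consistent system A x = b.

    Each step projects the iterate onto the solution hyperplane of one
    row, sampled with probability proportional to its squared norm.
    """
    rng = np.random.default_rng(seed)
    row_norms_sq = np.einsum('ij,ij->i', A, A)   # ||a_i||^2 for each row
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(A.shape[0], p=probs)
        # Orthogonal projection onto {x : a_i . x = b_i}
        x = x + (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Example: an overdetermined but consistent 3x2 system.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
x = randomized_kaczmarz(A, b, iters=500, seed=0)
```

For consistent systems this converges linearly in expectation at a rate governed by $1 - \sigma_{\min}^2(A)/\|A\|_F^2$, which is the hallmark of norm-proportional row sampling.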