Randomized Methods for Linear Constraints: Convergence Rates and Conditioning

Added by Dennis Leventhal
Publication date: 2008
Language: English





We study randomized variants of two classical algorithms: coordinate descent for systems of linear equations and iterated projections for systems of linear inequalities. Expanding on a recent randomized iterated projection algorithm of Strohmer and Vershynin for systems of linear equations, we show that, under appropriate probability distributions, the linear rates of convergence (in expectation) can be bounded in terms of natural linear-algebraic condition numbers for the problems. We relate these condition measures to distances to ill-posedness, and discuss generalizations to convex systems under metric regularity assumptions.
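For concreteness, the randomized iterated projection (Kaczmarz) update of Strohmer and Vershynin that the abstract builds on admits a very short implementation: each step projects the current iterate onto the hyperplane of a single equation, sampled with probability proportional to its squared row norm. The sketch below is illustrative only; the function name, parameters, and test problem are not from the paper.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=10_000, seed=0):
    # Project onto one equation's hyperplane per step; row i is
    # sampled with probability ||a_i||^2 / ||A||_F^2.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Consistent overdetermined system: the residual shrinks linearly
# in expectation, at a rate tied to the conditioning of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
b = A @ rng.standard_normal(50)
print(np.linalg.norm(A @ randomized_kaczmarz(A, b) - b))
```

Sampling rows in proportion to their squared norms is what ties the expected linear rate to a natural linear-algebraic condition number of A, which is the kind of bound the paper establishes.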

Related research

The spectral bundle method proposed by Helmberg and Rendl is well established for solving large-scale semidefinite programs (SDPs), thanks to its low per-iteration computational complexity and strong practical performance. In this paper, we revisit this classic method, showing that it achieves sublinear convergence rates for both the primal and dual SDPs under strong duality alone. Prior to this work, only limited dual guarantees were known. Moreover, we develop a novel variant, the block spectral bundle method (Block-Spec), which not only enjoys the same convergence rate and low per-iteration complexity, but also accelerates to linear convergence when the SDP admits strict complementarity. Numerically, we demonstrate the effectiveness of both methods, confirming our theoretical finding that the block spectral bundle method can substantially speed up convergence.
In this paper, we study randomized optimal stopping problems and consider corresponding forward and backward Monte Carlo-based optimization algorithms. In particular, we prove the convergence of the proposed algorithms and derive the corresponding convergence rates.
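The paper's randomized algorithms are not spelled out in this abstract; as background for the backward approach, here is a minimal classical regression-based (Longstaff-Schwartz style) backward Monte Carlo sketch for a Bermudan put. All parameter values, names, and the quadratic regression basis are illustrative assumptions, not the paper's method.

```python
import numpy as np

def bermudan_put_lsm(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     steps=50, paths=20_000, seed=0):
    # Forward: simulate geometric Brownian motion paths.
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)  # value if held to maturity
    # Backward: regress continuation values to decide stop vs. continue.
    for t in range(steps - 2, -1, -1):
        itm = (K - S[:, t]) > 0              # regress in-the-money paths only
        if not itm.any():
            payoff *= np.exp(-r * dt)
            continue
        x = S[itm, t]
        y = payoff[itm] * np.exp(-r * dt)    # discounted future cashflow
        cont = np.polyval(np.polyfit(x, y, 2), x)  # quadratic basis fit
        exercise = np.maximum(K - x, 0.0)
        payoff[itm] = np.where(exercise > cont, exercise, y)
        payoff[~itm] *= np.exp(-r * dt)
    return np.exp(-r * dt) * payoff.mean()

print(bermudan_put_lsm())
```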
In this work, we analyze the global convergence of coordinate gradient descent with random choices of coordinates and stepsizes for non-convex optimization problems. Under generic assumptions, we prove that the iterates almost surely escape strict saddle points of the objective function. As a result, the algorithm is guaranteed to converge to local minima if all saddle points are strict. Our proof is based on viewing the coordinate descent algorithm as a nonlinear random dynamical system and on a quantitative finite-block analysis of its linearization around saddle points.
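A minimal sketch of the scheme under analysis, assuming a uniform distribution over coordinates and a fixed stepsize (the paper also allows random stepsizes); the toy objective below has a strict saddle at the origin and strict minima at (±1, 0), so iterates started slightly off the saddle should escape and settle at a minimum.

```python
import numpy as np

def random_coordinate_gd(grad, x0, step=0.1, iters=2_000, seed=0):
    # Update one uniformly random coordinate per iteration.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        i = rng.integers(x.size)
        x[i] -= step * grad(x)[i]
    return x

# f(x, y) = x^4/4 - x^2/2 + y^2/2: strict saddle at the origin,
# minima at (+1, 0) and (-1, 0).
grad = lambda v: np.array([v[0]**3 - v[0], v[1]])
print(random_coordinate_gd(grad, [1e-3, 0.5]))  # converges near (1, 0)
```

Note that an iterate started exactly on the saddle's stable manifold (here, x = 0) would never leave it, which is why the escape guarantee holds almost surely rather than deterministically.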
Adrian Sandu, 2020
This paper studies fixed-step convergence of implicit-explicit (IMEX) general linear methods. We focus on a subclass of schemes that is internally consistent, has high stage order, and has favorable stability properties. Convergence analyses are given for classical problems, index-1 differential algebraic equations, and singular perturbation problems. For all these problems, IMEX GLMs from the class of interest converge with the full theoretical orders under general assumptions. The convergence results require the time steps to be sufficiently small, with upper bounds that are independent of the stiffness of the problem.
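IMEX general linear methods are too involved to reproduce in a few lines, but the implicit-explicit splitting they generalize is easy to show with the simplest such pairing, IMEX Euler, on a scalar stiff test problem; the equation and step size below are invented for illustration.

```python
import numpy as np

def imex_euler(f_explicit, lam, y0, h, steps):
    # Treat the nonstiff part explicitly and the stiff linear term
    # -lam*y implicitly:
    #   y_{n+1} = y_n + h*f_explicit(y_n) - h*lam*y_{n+1},
    # which solves in closed form for a scalar linear stiff term.
    y = y0
    for _ in range(steps):
        y = (y + h * f_explicit(y)) / (1.0 + h * lam)
    return y

# Stiff test problem y' = cos(y) - 1000*y: with h = 0.01, fully
# explicit Euler would be unstable (h*lam = 10), while the IMEX
# step remains stable and settles near the steady state.
print(imex_euler(np.cos, 1000.0, 1.0, h=0.01, steps=500))
```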
Yangyang Xu, 2020
First-order methods (FOMs) have recently been applied and analyzed for solving problems with complicated functional constraints. Existing works show that FOMs for functionally constrained problems have lower-order convergence rates than those for unconstrained problems. In particular, an FOM for a smooth strongly convex problem can have linear convergence, while it can only converge sublinearly for a constrained problem if projection onto the constraint set is prohibited. In this paper, we point out that the slower convergence is caused by the large number of functional constraints, not by the constraints themselves. When there are only $m=O(1)$ functional constraints, we show that an FOM can have almost the same convergence rate as that for solving an unconstrained problem, even without projection onto the feasible set. In addition, given an $\varepsilon>0$, we show that a complexity result better than the lower bound can be obtained if there are only $m=o(\varepsilon^{-\frac{1}{2}})$ functional constraints. Our result is surprising but does not contradict the existing lower complexity bound, because we focus on a specific subclass of problems. Experimental results on quadratically constrained quadratic programs demonstrate our theory.
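The paper's particular FOM is not reproduced here; as an illustration of the regime with very few functional constraints, the sketch below runs a generic first-order primal-dual iteration (gradient descent in the primal variable, projected gradient ascent in the multiplier) on a toy problem with a single quadratic constraint. All names, step sizes, and the test problem are assumptions.

```python
import numpy as np

def primal_dual_fom(grad_f, g, grad_g, x0, eta=0.01, rho=0.01, iters=20_000):
    # min f(x) s.t. g(x) <= 0 with one functional constraint (m = 1):
    # descend the Lagrangian in x, ascend (and clip at 0) in lambda.
    x = np.array(x0, dtype=float)
    lam = 0.0
    for _ in range(iters):
        x -= eta * (grad_f(x) + lam * grad_g(x))
        lam = max(0.0, lam + rho * g(x))
    return x, lam

# Toy QCQP: min ||x - c||^2  s.t.  ||x||^2 <= 1, with c outside the ball.
c = np.array([2.0, 0.0])
x, lam = primal_dual_fom(
    grad_f=lambda x: 2 * (x - c),
    g=lambda x: x @ x - 1.0,
    grad_g=lambda x: 2 * x,
    x0=np.zeros(2),
)
print(x, lam)  # x approaches [1, 0] on the boundary, lam approaches 1
```

No projection onto the feasible set is ever computed; only evaluations of g and its gradient are needed, which is the setting the abstract contrasts with projection-based methods.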
