Often in the analysis of first-order methods for both smooth and nonsmooth optimization, assuming the existence of a growth/error bound or a Kurdyka–Łojasiewicz (KL) condition facilitates a much stronger convergence analysis. Hence the analysis is done twice: once for the general case and once for the growth-bounded case. We give meta-theorems for deriving general convergence rates from those that assume a growth lower bound. Applying this simple but conceptually powerful tool to the proximal point method, the subgradient method, the bundle method, gradient descent, and the universal accelerated method immediately recovers their known convergence rates for general convex optimization problems from their specialized rates. Our results lift any rate based on Hölder continuity of the objective's gradient and Hölder growth bounds so that it applies to any problem with a weaker growth bound, or to problems where no growth bound is assumed.
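To make the growth condition referred to above concrete, here is one common formulation; the notation ($f^{\star}$, $X^{\star}$, $\mu$, $p$, $R$) is our illustrative choice rather than a quotation from the paper. A convex objective $f$ with optimal value $f^{\star}$ and solution set $X^{\star}$ satisfies a Hölder growth bound with exponent $p \ge 1$ and modulus $\mu > 0$ if
\[
f(x) - f^{\star} \;\ge\; \frac{\mu}{p}\,\operatorname{dist}(x, X^{\star})^{p}
\qquad \text{for all } x \text{ with } \operatorname{dist}(x, X^{\star}) \le R .
\]
Quadratic growth is the case $p = 2$ (and is implied by strong convexity), sharp growth is $p = 1$, and larger $p$ gives a weaker condition. In this language, the meta-theorems take a convergence rate proved under such a bound and translate it into a rate valid under a weaker exponent, or when no growth bound holds at all.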