
General Convergence Rates Follow From Specialized Rates Assuming Growth Bounds

Posted by: Benjamin Grimmer
Publication date: 2019
Paper language: English
Author: Benjamin Grimmer

Often in the analysis of first-order methods, assuming the existence of a quadratic growth bound (a generalization of strong convexity) facilitates much stronger convergence analysis. Hence the analysis is done twice, once for the general case and once for the growth-bounded case. We give a meta-theorem for deriving general convergence rates from those assuming a growth lower bound. Applying this simple but conceptually powerful tool to the proximal point method, the subgradient method, and the bundle method immediately recovers their known convergence rates for general convex optimization problems from their specialized rates. Future work studying first-order methods can assume growth bounds for the sake of analysis without hampering the generality of the results. Our results can be applied to lift any rate based on a Hölder growth bound. As a consequence, guarantees for minimizing sharp functions imply guarantees for both general functions and those satisfying quadratic growth.
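For reference, the growth conditions in question are standard lower bounds on how far the objective rises away from its minimizers. In common notation, assumed here for illustration (with $f^*$ the minimum value, $X^*$ the solution set, and constants $\mu > 0$, $p \ge 1$; exact constant conventions vary across papers):

$$ f(x) - f^* \;\ge\; \mu \, \mathrm{dist}(x, X^*)^p \qquad \text{(Hölder growth with exponent } p\text{)} $$

Taking $p = 2$ gives quadratic growth, which strong convexity implies, and $p = 1$ gives sharpness; the meta-theorem transfers a rate proved under such a bound to the general convex case.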


Read also

Benjamin Grimmer (2021)
Often in the analysis of first-order methods for both smooth and nonsmooth optimization, assuming the existence of a growth/error bound or a KL condition facilitates much stronger convergence analysis. Hence the analysis is done twice, once for the general case and once for the growth-bounded case. We give meta-theorems for deriving general convergence rates from those assuming a growth lower bound. Applying this simple but conceptually powerful tool to the proximal point method, subgradient method, bundle method, gradient descent, and universal accelerated method immediately recovers their known convergence rates for general convex optimization problems from their specialized rates. Our results lift any rate based on Hölder continuity of the objective's gradient and Hölder growth bounds to apply to any problem with a weaker growth bound, or where no growth bound is assumed at all.
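For concreteness, Hölder continuity of the gradient, in the common notation assumed here, reads

$$ \|\nabla f(x) - \nabla f(y)\| \;\le\; L \, \|x - y\|^{\nu} \qquad \text{for some } L > 0 \text{ and } \nu \in (0, 1], $$

so $\nu = 1$ recovers the familiar Lipschitz-smooth setting while small $\nu$ approaches the nonsmooth regime; this scale of smoothness assumptions pairs with the Hölder growth bounds defined earlier.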
This work studies a class of non-smooth decentralized multi-agent optimization problems where the agents aim at minimizing a sum of local strongly-convex smooth components plus a common non-smooth term. We propose a general primal-dual algorithmic framework that unifies many existing state-of-the-art algorithms. We establish linear convergence of the proposed method to the exact solution in the presence of the non-smooth term. Moreover, for the more general class of problems with agent-specific non-smooth terms, we show that linear convergence cannot be achieved (in the worst case) by the class of algorithms that use the gradients and the proximal mappings of the smooth and non-smooth parts, respectively. We further provide a numerical counterexample showing how some state-of-the-art algorithms fail to converge linearly for strongly-convex objectives and different local non-smooth terms.
We study convergence rates of the classic proximal bundle method for a variety of nonsmooth convex optimization problems. We show that, without any modification, this algorithm adapts to converge faster in the presence of smoothness or a Hölder growth condition. Our analysis reveals that with a constant stepsize, the bundle method is adaptive, yet it exhibits suboptimal convergence rates. We overcome this shortcoming by proposing nonconstant stepsize schemes with optimal rates. These schemes use function information such as growth constants, which might be prohibitive in practice. We complete the paper with a new parallelizable variant of the bundle method that attains near-optimal rates without prior knowledge of function parameters. These results improve on the limited existing convergence rates and provide a unified analysis approach across problem settings and algorithmic details. Numerical experiments support our findings and illustrate the effectiveness of the parallel bundle method.
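To make the algorithm concrete, here is a minimal single-machine sketch of a proximal bundle iteration; it is not the paper's parallel variant or its stepsize schemes, and the helper name bundle_step, the use of cvxpy, the stepsize t = 1.0, and the descent parameter 0.5 are all illustrative assumptions. The method keeps a cutting-plane model built from subgradients and repeatedly minimizes that model plus a proximal term, accepting a candidate only when it achieves sufficient real decrease:

import numpy as np
import cvxpy as cp

def bundle_step(cuts, center, t):
    # One proximal step on the cutting-plane model:
    # minimize  max_i [f_i + g_i . (x - x_i)]  +  ||x - center||^2 / (2 t)
    x = cp.Variable(center.size)
    model = cp.max(cp.hstack([fi + gi @ (x - xi) for fi, gi, xi in cuts]))
    cp.Problem(cp.Minimize(model + cp.sum_squares(x - center) / (2 * t))).solve()
    return x.value

# Toy run on the sharp function f(x) = ||x||_1, whose subgradient is sign(x).
f = lambda x: np.abs(x).sum()
center = np.array([3.0, -2.0])
cuts = [(f(center), np.sign(center), center)]  # each cut: (f(x_i), subgradient, x_i)
for _ in range(20):
    y = bundle_step(cuts, center, t=1.0)
    model_y = max(fi + gi @ (y - xi) for fi, gi, xi in cuts)  # model value at y
    cuts.append((f(y), np.sign(y), y))
    if f(y) <= f(center) - 0.5 * (f(center) - model_y):
        center = y  # serious step; otherwise a null step merely refines the model
print(center, f(center))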
Jan Rozendaal, Mark Veraar (2017)
We study growth rates for strongly continuous semigroups. We prove that a growth rate for the resolvent on imaginary lines implies a corresponding growth rate for the semigroup if either the underlying space is a Hilbert space, the semigroup is asymptotically analytic, or the semigroup is positive and the underlying space is an $L^{p}$-space or a space of continuous functions. We also prove variations of the main results on fractional domains; these are valid on more general Banach spaces. In the second part of the article we apply our main theorem to prove optimality in a classical example by Renardy of a perturbed wave equation which exhibits unusual spectral behavior.
D. Leventhal, A. S. Lewis (2008)
We study randomized variants of two classical algorithms: coordinate descent for systems of linear equations and iterated projections for systems of linear inequalities. Expanding on a recent randomized iterated projection algorithm of Strohmer and Vershynin for systems of linear equations, we show that, under appropriate probability distributions, the linear rates of convergence (in expectation) can be bounded in terms of natural linear-algebraic condition numbers for the problems. We relate these condition measures to distances to ill-posedness, and discuss generalizations to convex systems under metric regularity assumptions.
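For a picture of the randomized iterated-projection scheme being referenced, below is a minimal sketch of the Strohmer-Vershynin randomized Kaczmarz update for a consistent linear system Ax = b, with rows sampled in proportion to their squared norms (the sampling that yields the expected linear rate governed by a condition number of A); the function name and parameters are illustrative, not from the paper:

import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    # Each step projects the iterate onto one hyperplane {x : a_i . x = b_i},
    # choosing row i with probability proportional to ||a_i||^2.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    probs = (A**2).sum(axis=1) / (A**2).sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a  # orthogonal projection onto the i-th equation
    return x

# Consistent random system; the error should shrink geometrically in expectation.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true))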