
Lower Bounds for Finding Stationary Points I

Published by: Yair Carmon
Publication date: 2017
Language: English





We prove lower bounds on the complexity of finding $\epsilon$-stationary points (points $x$ such that $\|\nabla f(x)\| \le \epsilon$) of smooth, high-dimensional, and potentially non-convex functions $f$. We consider oracle-based complexity measures, where an algorithm is given access to the value and all derivatives of $f$ at a query point $x$. We show that for any (potentially randomized) algorithm $\mathsf{A}$, there exists a function $f$ with Lipschitz $p$th order derivatives such that $\mathsf{A}$ requires at least $\epsilon^{-(p+1)/p}$ queries to find an $\epsilon$-stationary point. Our lower bounds are sharp to within constants, and they show that gradient descent, cubic-regularized Newton's method, and generalized $p$th order regularization are worst-case optimal within their natural function classes.
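
To make the rate concrete, here is a minimal sketch (not from the paper) of the $p = 1$ case: gradient descent with step size $1/L$ on a function with $L$-Lipschitz gradient, run until it reaches an $\epsilon$-stationary point. Its worst-case query count scales as $\epsilon^{-2} = \epsilon^{-(p+1)/p}$, matching the lower bound above.

```python
# Minimal sketch, assuming the p = 1 setting (L-Lipschitz gradient).
import numpy as np

def gradient_descent(grad, x0, L, eps, max_iters=100_000):
    """Run x_{k+1} = x_k - (1/L) * grad(x_k) until ||grad(x_k)|| <= eps."""
    x = x0
    for k in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, k  # epsilon-stationary point after k gradient queries
        x = x - g / L
    return x, max_iters

# Toy check: f(x) = ||x||^2 / 2 has gradient x and L = 1.
x_out, queries = gradient_descent(lambda x: x, np.ones(10), L=1.0, eps=1e-3)
```
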




Read also

We establish lower bounds on the complexity of finding $\epsilon$-stationary points of smooth, non-convex high-dimensional functions using first-order methods. We prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in $\epsilon$ better than $\epsilon^{-8/5}$, which is within $\epsilon^{-1/15}\log\frac{1}{\epsilon}$ of the best known rate for such methods. Moreover, for functions with Lipschitz first and second derivatives, we prove that no deterministic first-order method can achieve convergence rates better than $\epsilon^{-12/7}$, while $\epsilon^{-2}$ is a lower bound for functions with only Lipschitz gradient. For convex functions with Lipschitz gradient, accelerated gradient descent achieves the rate $\epsilon^{-1}\log\frac{1}{\epsilon}$, showing that finding stationary points is easier given convexity.
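
The accelerated method behind the convex rate quoted at the end is Nesterov-style momentum; below is a minimal sketch (standard acceleration, not this paper's construction), with the stopping rule checking the gradient norm directly.

```python
# Minimal sketch, assuming a convex function with L-Lipschitz gradient.
import numpy as np

def accelerated_gd(grad, x0, L, eps, max_iters=100_000):
    x, y, t = x0, x0, 1.0
    for k in range(max_iters):
        g = grad(y)
        if np.linalg.norm(g) <= eps:
            return y, k  # epsilon-stationary point found
        x_next = y - g / L                           # gradient step
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2    # momentum schedule
        y = x_next + (t - 1) / t_next * (x_next - x) # extrapolation
        x, t = x_next, t_next
    return y, max_iters
```
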
The problem of finding near-stationary points in convex optimization has not been adequately studied yet, unlike other optimality measures such as minimizing function value. Even in the deterministic case, the optimal method (OGM-G, due to Kim and Fessler (2021)) was discovered only recently. In this work, we conduct a systematic study of the algorithmic techniques for finding near-stationary points of convex finite-sums. Our main contributions are several algorithmic discoveries: (1) we discover a memory-saving variant of OGM-G based on the performance estimation problem approach (Drori and Teboulle, 2014); (2) we design a new accelerated SVRG variant that can simultaneously achieve fast rates for both minimizing gradient norm and function value; (3) we propose an adaptively regularized accelerated SVRG variant, which does not require the knowledge of some unknown initial constants and achieves near-optimal complexities. We put an emphasis on the simplicity and practicality of the new schemes, which could facilitate future developments.
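
As a reference point for the SVRG variants discussed above, here is a minimal sketch of plain SVRG (Johnson and Zhang, 2013) for a finite sum; the accelerated and adaptively regularized variants build on this control-variate update. The names and loop structure here are illustrative, not this paper's.

```python
# Minimal sketch of plain SVRG for f(x) = (1/n) * sum_i f_i(x),
# given a list grads of per-component gradient functions.
import numpy as np

def svrg(grads, x0, step, n_epochs, m, rng=np.random.default_rng(0)):
    n = len(grads)
    x = x0
    for _ in range(n_epochs):
        x_ref = x.copy()
        full_grad = sum(g(x_ref) for g in grads) / n  # snapshot gradient
        for _ in range(m):
            i = rng.integers(n)
            # Control-variate estimate: unbiased, with shrinking variance.
            v = grads[i](x) - grads[i](x_ref) + full_grad
            x = x - step * v
    return x
```
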
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions. In particular, we study the class of Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions for which the chain rule of calculus holds. This class contains examples such as ReLU neural networks and others with non-differentiable activation functions. We first show that finding an $\epsilon$-stationary point with first-order methods is impossible in finite time. We then introduce the notion of $(\delta, \epsilon)$-stationarity, which allows for an $\epsilon$-approximate gradient to be the convex combination of generalized gradients evaluated at points within distance $\delta$ of the solution. We propose a series of randomized first-order methods and analyze their complexity of finding a $(\delta, \epsilon)$-stationary point. Furthermore, we provide a lower bound and show that our stochastic algorithm has min-max optimal dependence on $\delta$. Empirically, our methods perform well for training ReLU neural networks.
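
The sampling idea behind $(\delta, \epsilon)$-stationarity can be illustrated as follows (the function name and sampling scheme are assumptions, not this paper's exact method): average generalized gradients at random points within distance $\delta$; if the average has norm at most $\epsilon$, it witnesses approximate stationarity in the sense defined above.

```python
# Illustrative sketch: a Monte Carlo convex combination of generalized
# gradients at points within distance delta of x.
import numpy as np

def sampled_gradient(grad, x, delta, n_samples=100, rng=np.random.default_rng(0)):
    d = x.shape[0]
    total = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        # Rescale to a point drawn uniformly from the delta-ball.
        u *= delta * rng.random() ** (1 / d) / np.linalg.norm(u)
        total += grad(x + u)  # generalized gradient at a nearby point
    return total / n_samples  # convex combination of nearby gradients
```
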
We consider the problem of provably finding a stationary point of a smooth function to be minimized on the variety of bounded-rank matrices. While optimization on low-rank matrices has been extensively studied, existing algorithms do not provide such a basic guarantee. We trace this back to a geometric obstacle: On a nonsmooth set, there may be sequences of points along which standard measures of stationarity tend to zero, but whose limit point is not stationary. We name such events apocalypses, as they can cause optimization algorithms to converge to non-stationary points. We illustrate this on an existing algorithm using an explicit apocalypse on the bounded-rank matrix variety. To provably find stationary points, we modify a trust-region method on a standard smooth parameterization of the variety. The method relies on the known fact that second-order stationary points on the parameter space map to stationary points on the variety. Our geometric observations and proposed algorithm generalize beyond bounded-rank matrices. We give a geometric characterization of apocalypses on general constraint sets, which implies that Clarke-regular sets do not admit apocalypses. Such sets include smooth manifolds, manifolds with boundaries, and convex sets. Our trust-region method supports parameterization by any complete Riemannian manifold.
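
The smooth parameterization in question can be illustrated with the standard lift $\varphi(L, R) = LR^T$ of matrices of rank at most $r$ (one standard choice, assumed here rather than taken from the paper); a trust-region method on the parameter space works with the gradient of the lifted objective, obtained by the chain rule.

```python
# Minimal sketch: gradient of g(L, R) = f(L @ R.T) via the chain rule.
# Second-order stationary points of g map to stationary points of f
# on the bounded-rank variety.
import numpy as np

def lifted_grad(grad_f, L, R):
    """Return (dg/dL, dg/dR) for g(L, R) = f(L @ R.T)."""
    G = grad_f(L @ R.T)   # Euclidean gradient of f at X = L R^T
    return G @ R, G.T @ L
```
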
We lower bound the complexity of finding $\epsilon$-stationary points (with gradient norm at most $\epsilon$) using stochastic first-order methods. In a well-studied model where algorithms access smooth, potentially non-convex functions through queries to an unbiased stochastic gradient oracle with bounded variance, we prove that (in the worst case) any algorithm requires at least $\epsilon^{-4}$ queries to find an $\epsilon$-stationary point. The lower bound is tight, and establishes that stochastic gradient descent is minimax optimal in this model. In a more restrictive model where the noisy gradient estimates satisfy a mean-squared smoothness property, we prove a lower bound of $\epsilon^{-3}$ queries, establishing the optimality of recently proposed variance reduction techniques.
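
In the first model, the matching $\epsilon^{-4}$ upper bound is achieved by plain SGD; a minimal sketch follows (the oracle and constants are illustrative), under the unbiased, bounded-variance oracle assumption.

```python
# Minimal sketch: SGD with an unbiased, bounded-variance gradient oracle.
import numpy as np

def sgd(stoch_grad, x0, step, n_iters, rng=np.random.default_rng(0)):
    """stoch_grad(x, rng) returns an unbiased estimate of grad f(x)."""
    x = x0
    for _ in range(n_iters):
        x = x - step * stoch_grad(x, rng)
    return x

# Example oracle: true gradient of ||x||^2 / 2 plus bounded-variance noise.
oracle = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_out = sgd(oracle, np.ones(10), step=0.01, n_iters=10_000)
```
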