
Continuous-time Lower Bounds for Gradient-based Algorithms

Published by: Michael Muehlebach
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





This article derives lower bounds on the convergence rate of continuous-time gradient-based optimization algorithms. The algorithms are subjected to a time-normalization constraint that avoids a reparametrization of time in order to make the discussion of continuous-time convergence rates meaningful. We reduce the multi-dimensional problem to a single dimension, recover well-known lower bounds from the discrete-time setting, and provide insight into why these lower bounds occur. We present algorithms that achieve the proposed lower bounds, even when the function class under consideration includes certain nonconvex functions.
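
As a point of reference for the continuous-time rates discussed above, the sketch below simulates plain gradient flow dx/dt = -f'(x) on a one-dimensional quadratic, where the trajectory contracts like exp(-mu*t). The function, the curvature mu, and the Euler step size are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch (not the paper's construction): gradient flow
# dx/dt = -f'(x) on the 1-D quadratic f(x) = (mu/2) x^2, integrated with
# forward Euler. The trajectory contracts like exp(-mu * t), which is the
# kind of continuous-time convergence rate the abstract refers to.
import numpy as np

mu = 2.0       # curvature of the quadratic (assumed example value)
dt = 1e-3      # Euler step used only to simulate the ODE
T = 5.0        # time horizon

x = 1.0        # initial condition
for _ in np.arange(0.0, T, dt):
    x -= dt * mu * x            # forward Euler on dx/dt = -mu * x

print(f"x(T) = {x:.3e}, predicted exp(-mu*T) = {np.exp(-mu * T):.3e}")
```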




Read also

This paper considers the problem of designing accelerated gradient-based algorithms for optimization and saddle-point problems. The class of objective functions is defined by a generalized sector condition. This class contains strongly convex functions with Lipschitz gradients but also non-convex functions, which makes it possible to address not only optimization problems but also saddle-point problems. The proposed design procedure relies on a suitable class of Lyapunov functions and on convex semi-definite programming. The proposed synthesis allows the design of algorithms that reach the performance of state-of-the-art accelerated gradient methods and beyond.
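
For orientation, the following sketch implements Nesterov's accelerated gradient method on a smooth, strongly convex quadratic, i.e. the kind of state-of-the-art accelerated gradient method the synthesis above is benchmarked against; the Lyapunov/semi-definite-programming design procedure itself is not reproduced. The matrix A, the constants L and m, and the iteration count are illustrative assumptions.

```python
# Hedged sketch: Nesterov's accelerated gradient on an L-smooth,
# m-strongly-convex toy quadratic f(x) = 0.5 * x^T A x.
import numpy as np

L, m = 10.0, 1.0                                     # assumed smoothness / strong convexity
kappa = L / m
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # momentum coefficient

A = np.diag([m, L])                                  # eigenvalues span [m, L]
grad = lambda x: A @ x

x = y = np.array([1.0, 1.0])
for _ in range(100):
    x_next = y - (1.0 / L) * grad(y)   # gradient step from the extrapolated point
    y = x_next + beta * (x_next - x)   # momentum extrapolation
    x = x_next

print("distance to the minimizer after 100 steps:", np.linalg.norm(x))
```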
Xin Chen, Jorge I. Poveda, Na Li (2021)
In power distribution systems, the growing penetration of renewable energy resources brings new challenges to maintaining voltage safety, which is further complicated by the limited model information of distribution systems. To address these challenges, we develop a model-free optimal voltage control algorithm based on projected primal-dual gradient dynamics and a continuous-time zeroth-order method (extremum seeking control). The proposed algorithm i) operates purely on voltage measurements and does not require any other model information, ii) drives the voltage magnitudes back to the acceptable range, iii) satisfies the power capacity constraints at all times, iv) minimizes the total operating cost, and v) is implemented in a decentralized fashion, preserving the privacy of controllable devices and enabling plug-and-play operation. We prove that the proposed algorithm is semi-globally practically asymptotically stable and structurally robust to measurement noise. Finally, the performance of the proposed algorithm is demonstrated via numerical simulations.
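
The sketch below illustrates only the generic building block named above, projected primal-dual gradient dynamics, on an assumed scalar toy problem min (x-3)^2 subject to x <= 1; it is not the voltage controller and contains no zeroth-order (extremum seeking) component. The saddle point of the toy problem is x* = 1 with multiplier lambda* = 4.

```python
# Hedged sketch of projected primal-dual gradient dynamics (forward-Euler
# simulation) for min f(x) s.t. g(x) <= 0 on an assumed toy problem:
# f(x) = (x - 3)^2, g(x) = x - 1, whose saddle point is (x*, lambda*) = (1, 4).
f_grad = lambda x: 2.0 * (x - 3.0)
g = lambda x: x - 1.0

x, lam = 0.0, 0.0
dt = 1e-3
for _ in range(200_000):
    dx = -(f_grad(x) + lam)          # primal descent on the Lagrangian
    dlam = g(x)                      # dual ascent on the Lagrangian
    x += dt * dx
    lam = max(0.0, lam + dt * dlam)  # projection keeps the multiplier >= 0

print(f"x = {x:.3f}, lambda = {lam:.3f}")   # expect roughly (1, 4)
```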
Small-scale Mixed-Integer Quadratic Programming (MIQP) problems often arise in embedded control and estimation applications. Driven by the need for algorithmic simplicity to target computing platforms with limited memory and computing resources, this paper proposes a few approaches to solving MIQPs, either to optimality or suboptimally. We specialize an existing Accelerated Dual Gradient Projection (GPAD) algorithm to effectively solve the Quadratic Programming (QP) relaxations that arise during Branch and Bound (B&B), and propose a generic framework to warm-start the binary variables, which reduces the number of QP relaxations. Moreover, in order to find an integer-feasible combination of the binary variables upfront, two heuristic approaches are presented: (i) without using B&B, and (ii) using B&B with a significantly reduced number of QP relaxations. Both heuristics return an integer-feasible solution that may be suboptimal but requires much less computational effort. Such a feasible solution can either be implemented directly or used to set an initial upper bound on the optimal cost in B&B. Through different hybrid control and estimation examples involving binary decision variables, we show that the performance of the proposed methods, although very simple to code, is comparable to that of state-of-the-art MIQP solvers.
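
To make the branch-and-bound structure concrete, here is a minimal sketch in which the binaries are relaxed to [0, 1], the resulting box-constrained QP relaxation is solved, and the most fractional variable is branched on. The relaxation is solved with scipy's generic L-BFGS-B routine rather than the specialized GPAD solver, the problem data Q and c are assumed example values, and the warm-starting framework and the two heuristics are not reproduced.

```python
# Hedged B&B sketch for a tiny binary QP: min 0.5 x^T Q x + c^T x, x in {0,1}^2.
import numpy as np
from scipy.optimize import minimize

Q = np.array([[4.0, 1.0], [1.0, 2.0]])   # assumed example data
c = np.array([-3.0, -2.0])

def solve_relaxation(lb, ub):
    """Solve the box-constrained QP relaxation over [lb, ub]."""
    obj = lambda x: 0.5 * x @ Q @ x + c @ x
    jac = lambda x: Q @ x + c
    res = minimize(obj, 0.5 * (lb + ub), jac=jac,
                   bounds=list(zip(lb, ub)), method="L-BFGS-B")
    return res.x, res.fun

best_x, best_val = None, np.inf
stack = [(np.zeros(2), np.ones(2))]      # each node is a (lower, upper) bound pair
while stack:
    lb, ub = stack.pop()
    x, val = solve_relaxation(lb, ub)
    if val >= best_val:                  # prune: relaxation cannot beat the incumbent
        continue
    frac = np.abs(x - np.round(x))
    if frac.max() < 1e-6:                # integer-feasible leaf: update incumbent
        best_x, best_val = np.round(x), val
        continue
    i = int(frac.argmax())               # branch on the most fractional variable
    for v in (0.0, 1.0):
        nlb, nub = lb.copy(), ub.copy()
        nlb[i] = nub[i] = v
        stack.append((nlb, nub))

print("optimal binaries:", best_x, "value:", best_val)
```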
Conic optimization is the minimization of a differentiable convex objective function subject to conic constraints. We propose a novel primal-dual first-order method for conic optimization, named the proportional-integral projected gradient method (PIPG). PIPG ensures that both the primal-dual gap and the constraint violation converge to zero at a rate of O(1/k), where k is the number of iterations. If the objective function is strongly convex, PIPG improves the convergence rate of the primal-dual gap to O(1/k^2). Further, unlike existing first-order methods, PIPG also improves the convergence rate of the constraint violation, to O(1/k^3). We demonstrate the application of PIPG to constrained optimal control problems.
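
For context only, the sketch below runs plain projected gradient descent on a toy problem with a simple conic constraint (the nonnegative orthant). It illustrates the projection primitive but none of PIPG's primal-dual structure or its proportional-integral feedback on the constraint violation, and the problem data are assumed example values.

```python
# Hedged baseline, not PIPG: projected gradient descent on
# min 0.5 x^T Q x + c^T x subject to the cone constraint x >= 0.
import numpy as np

Q = np.array([[3.0, 0.5], [0.5, 1.0]])    # assumed toy objective
c = np.array([-1.0, 2.0])
L = np.linalg.eigvalsh(Q).max()           # Lipschitz constant of the gradient

x = np.zeros(2)
for _ in range(500):
    x = x - (1.0 / L) * (Q @ x + c)       # gradient step
    x = np.maximum(x, 0.0)                # projection onto the cone x >= 0

print("x =", x)                           # expect the second coordinate pinned at 0
```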
Zhiyan Ding, Yiding Chen, Qin Li (2019)
Stochastic Gradient Descent (SGD) plays a central role in modern machine learning. While there is extensive work on upper-bounding the error of SGD, much less is known about lower bounds on its error. In this paper, we study the convergence of constant step-size SGD. We provide an error lower bound for SGD on potentially non-convex objective functions with Lipschitz gradients. To our knowledge, this is the first lower-bound analysis of the SGD error that does not require a strong convexity assumption. We use experiments to illustrate our theoretical results.
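
The following sketch illustrates the phenomenon such a lower bound quantifies: constant step-size SGD on a one-dimensional quadratic with additive gradient noise stalls at a noise floor instead of converging to the minimizer. The curvature, step size, and noise level are assumed example values, and the quoted steady-state variance is that of this particular linear recursion, not a result from the paper.

```python
# Hedged illustration: constant step-size SGD on f(x) = 0.5 * mu * x^2 with
# additive Gaussian gradient noise plateaus at a noise floor.
import numpy as np

rng = np.random.default_rng(0)
mu, eta, sigma = 1.0, 0.1, 1.0      # curvature, step size, noise level (assumed)

x = 5.0
errs = []
for _ in range(20_000):
    g = mu * x + sigma * rng.standard_normal()   # noisy gradient
    x -= eta * g
    errs.append(x * x)

# Empirical steady-state E[x^2] vs. the steady-state variance of this
# linear recursion, eta * sigma^2 / (2 * mu - eta * mu^2).
print("empirical:", np.mean(errs[10_000:]),
      "predicted:", eta * sigma**2 / (2 * mu - eta * mu**2))
```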
