We develop a distributed algorithm for convex Empirical Risk Minimization, the problem of minimizing a large but finite sum of convex functions over networks. The proposed algorithm is derived by directly discretizing the second-order heavy-ball differential equation and attains an accelerated convergence rate, i.e., faster than distributed gradient-descent-based methods, for strongly convex objectives that may not be smooth. Notably, we achieve acceleration without resorting to the well-known Nesterov's momentum approach. We provide numerical experiments and contrast the proposed method with recently proposed optimal distributed optimization algorithms.
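As a point of reference, here is a minimal centralized sketch of the momentum iteration obtained by an explicit discretization of the heavy-ball ODE x'' + alpha*x' + grad f(x) = 0; the friction coefficient, step size, and quadratic test function are illustrative assumptions, and this sketch is not the paper's distributed method.

```python
import numpy as np

# Heavy-ball ODE: x'' + alpha * x' + grad_f(x) = 0.
# An explicit discretization with step h yields the familiar momentum
# iteration below. All constants and the test function are illustrative.

def grad_f(x):
    # Gradient of the strongly convex quadratic f(x) = x^T A x / 2.
    A = np.diag([1.0, 10.0])
    return A @ x

alpha, h = 3.0, 0.1                        # friction and step size (assumed)
x, v = np.array([5.0, 5.0]), np.zeros(2)

for _ in range(300):
    v = (1.0 - alpha * h) * v - h * grad_f(x)   # velocity update
    x = x + h * v                               # position update

print(x)  # approaches the minimizer at the origin
```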
We study gradient-based optimization methods obtained by direct Runge-Kutta discretization of the ordinary differential equation (ODE) describing the motion of a heavy ball under a constant friction coefficient. When the function is high-order smooth
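To make the construction concrete, a hedged sketch under assumed parameters: write the heavy-ball ODE as the first-order system x' = v, v' = -gamma*v - grad f(x) and integrate it with the classical fourth-order Runge-Kutta method; the friction gamma, step size, and test function below are assumptions for illustration.

```python
import numpy as np

# Heavy-ball ODE as a first-order system in z = (x, v):
#   x' = v,  v' = -gamma * v - grad_f(x),
# integrated with the classical fourth-order Runge-Kutta method.
# gamma, h, and the test function are illustrative assumptions.

def grad_f(x):
    return np.array([x[0], 25.0 * x[1]])  # quadratic test gradient

def field(z, gamma=3.0):
    x, v = z[:2], z[2:]
    return np.concatenate([v, -gamma * v - grad_f(x)])

def rk4_step(z, h):
    k1 = field(z)
    k2 = field(z + 0.5 * h * k1)
    k3 = field(z + 0.5 * h * k2)
    k4 = field(z + h * k3)
    return z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

z = np.array([4.0, -2.0, 0.0, 0.0])  # initial position, zero velocity
for _ in range(300):
    z = rk4_step(z, h=0.05)
print(z[:2])  # position tends to the minimizer at the origin
```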
A collection of optimization problems central to power system operation requires distributed solution architectures to avoid the need to aggregate all information at a central location. In this paper, we study distributed dual subgradient methods.
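As background for the algorithmic primitive, a minimal centralized sketch of a dual subgradient iteration on a toy problem; the problem data and step size are assumptions, and the paper's distributed, network-aware version is more involved.

```python
import numpy as np

# Dual subgradient method for: minimize f(x) = ||x||^2
# subject to g(x) = 1 - x[0] <= 0 (i.e. x[0] >= 1).
# Lagrangian L(x, lam) = ||x||^2 + lam * (1 - x[0]); for this toy
# problem the inner minimization has the closed form x = (lam / 2, 0).

lam, step = 0.0, 0.1  # multiplier and step size (illustrative)
for _ in range(500):
    x = np.array([lam / 2.0, 0.0])        # argmin_x L(x, lam)
    subgrad = 1.0 - x[0]                  # constraint value = dual subgradient
    lam = max(0.0, lam + step * subgrad)  # projected ascent on the dual

print(x, lam)  # x -> (1, 0), lam -> 2 (the optimal multiplier)
```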
There has been work on exploiting polynomial approximation to solve distributed nonconvex optimization problems involving univariate objectives. This idea facilitates arbitrarily precise global optimization without requiring local evaluations of gradients.
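A sketch of the underlying idea under illustrative assumptions: approximate a univariate objective by a Chebyshev polynomial from function values alone, then minimize the polynomial globally via the real critical points of its derivative; the objective and degree below are assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Approximate a nonconvex univariate objective on [-1, 1] by a Chebyshev
# polynomial using only function evaluations (no gradients), then
# minimize the polynomial globally.

f = lambda x: np.sin(5 * x) + 0.5 * x**2    # nonconvex test objective

# Fit a degree-20 polynomial at 41 Chebyshev points.
nodes = np.cos(np.pi * (np.arange(41) + 0.5) / 41)
coeffs = C.chebfit(nodes, f(nodes), 20)

# Global minimum over [-1, 1]: real roots of the derivative + endpoints.
crit = C.chebroots(C.chebder(coeffs))
crit = crit[np.isreal(crit)].real
cands = np.concatenate([crit[(crit >= -1) & (crit <= 1)], [-1.0, 1.0]])
xstar = cands[np.argmin(C.chebval(cands, coeffs))]
print(xstar, f(xstar))  # near the global minimizer of f on [-1, 1]
```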
We focus on the solutions of second-order stable linear difference equations and demonstrate that their behavior can be non-monotone and exhibit peak effects depending on initial conditions. The results are applied to the analysis of the accelerated unconstrained optimization method, the heavy ball method.
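The peak effect is easy to reproduce numerically; below is a sketch with assumed coefficients: a second-order recursion whose characteristic roots lie strictly inside the unit circle, yet whose iterates overshoot the initial amplitude severalfold for suitably misaligned initial conditions.

```python
import numpy as np

# Stable second-order difference equation x_{k+1} = a*x_k + b*x_{k-1}.
# Its characteristic equation z^2 - a*z - b = 0, here
# z^2 - 1.9*z + 0.905 = 0, has complex roots of modulus ~0.951, so the
# recursion is asymptotically stable; still, |x_k| can peak well above
# |x_0| before decaying. Coefficients are illustrative.

a, b = 1.9, -0.905
x_prev, x = 1.0, -1.0       # initial conditions chosen to excite the peak
peak = abs(x)
for _ in range(200):
    x_prev, x = x, a * x + b * x_prev
    peak = max(peak, abs(x))
print(peak)  # substantially larger than the initial amplitude
```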
We study novel robust zero-order algorithms with acceleration for the solution of real-time optimization problems. In particular, we propose a family of extremum seeking dynamics that can be universally modeled as singularly perturbed hybrid dynamical systems.
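For orientation, a minimal sketch of classical first-order extremum seeking (not the paper's accelerated hybrid dynamics): a sinusoidal dither probes the unknown map, and the demodulated output, integrated over time, drives the parameter toward the optimizer. The map, gains, and frequencies below are assumptions.

```python
import numpy as np

# Classical gradient-free extremum seeking: perturb the input with a
# dither a*sin(w*t), demodulate the measured output, and integrate the
# result as a gradient estimate. On average this performs gradient
# ascent on f with effective gain k*a/2.

f = lambda x: -(x - 2.0) ** 2          # unknown map, maximum at x = 2
a, w, k, dt = 0.2, 10.0, 1.0, 1e-3     # dither amplitude/frequency, gain, step

x, t = 0.0, 0.0
for _ in range(200_000):
    y = f(x + a * np.sin(w * t))        # measured output with dither
    x += dt * k * np.sin(w * t) * y     # demodulate and integrate
    t += dt

print(x)  # close to the maximizer x = 2 (up to a small ripple)
```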