
Achieving Acceleration in Distributed Optimization via Direct Discretization of the Heavy-Ball ODE

Added by Jingzhao Zhang
Publication date: 2018
Language: English





We develop a distributed algorithm for convex Empirical Risk Minimization, the problem of minimizing a large but finite sum of convex functions over a network. The proposed algorithm is derived by directly discretizing the second-order heavy-ball differential equation and attains an accelerated convergence rate, i.e., faster than distributed gradient-descent-based methods, for strongly convex objectives that may not be smooth. Notably, we achieve acceleration without resorting to the well-known Nesterov's momentum approach. We provide numerical experiments and contrast the proposed method with recently proposed optimal distributed optimization algorithms.
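
To make the idea concrete, here is a minimal sketch, not the paper's exact scheme: an explicit discretization of the heavy-ball ODE $\ddot{x} + a\dot{x} + \nabla f(x) = 0$, run over a network through a doubly stochastic mixing matrix. The quadratic local losses, step size h, friction a, and complete-graph matrix W are all illustrative assumptions.

```python
import numpy as np

def distributed_heavy_ball(grads, W, x0, a=1.0, h=0.1, iters=500):
    """grads[i](x) returns the gradient of node i's local objective."""
    n = len(grads)
    x = np.tile(x0, (n, 1)).astype(float)    # one iterate per node
    v = np.zeros_like(x)                     # one velocity per node
    for _ in range(iters):
        g = np.stack([grads[i](x[i]) for i in range(n)])
        v = W @ v + h * (-a * v - g)         # Euler step on velocity + mixing
        x = W @ x + h * v                    # Euler step on position + mixing
    return x.mean(axis=0)

# Example: 4 nodes with quadratic local losses f_i(x) = 0.5*||x - b_i||^2
rng = np.random.default_rng(0)
b = rng.normal(size=(4, 3))
grads = [lambda x, bi=b[i]: x - bi for i in range(4)]
W = np.full((4, 4), 0.25)                    # complete-graph averaging matrix
print(distributed_heavy_ball(grads, W, np.zeros(3)))  # ~ b.mean(axis=0)
```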



Related research


We study gradient-based optimization methods obtained by direct Runge-Kutta discretization of the ordinary differential equation (ODE) describing the movement of a heavy ball under a constant friction coefficient. When the function is high-order smooth and strongly convex, we show that directly simulating the ODE with known numerical integrators achieves acceleration in a nontrivial neighborhood of the optimal solution. In particular, the neighborhood can grow larger as the condition number of the function increases. Furthermore, our results also hold for nonconvex but quasi-strongly convex objectives. We provide numerical experiments that verify the theoretical rates predicted by our results.
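
As an illustration of direct Runge-Kutta simulation, the sketch below applies classical RK4 to the heavy-ball ODE rewritten as a first-order system in (x, v). The friction coefficient, step size, and quadratic objective are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def heavy_ball_rk4(grad_f, x0, a=2.0, h=0.1, steps=300):
    """RK4 simulation of x'' + a x' + grad_f(x) = 0 in z = (x, v)."""
    d = len(x0)
    z = np.concatenate([x0, np.zeros_like(x0)])

    def field(z):
        x, v = z[:d], z[d:]
        return np.concatenate([v, -a * v - grad_f(x)])

    for _ in range(steps):
        k1 = field(z)
        k2 = field(z + 0.5 * h * k1)
        k3 = field(z + 0.5 * h * k2)
        k4 = field(z + h * k3)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z[:d]

# Example: strongly convex quadratic f(x) = 0.5 * x^T A x, minimizer 0
A = np.diag([1.0, 10.0])
print(heavy_ball_rk4(lambda x: A @ x, np.array([3.0, -2.0])))
```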
A collection of optimization problems central to power system operation requires distributed solution architectures to avoid the need for aggregation of all information at a central location. In this paper, we study distributed dual subgradient methods to solve three such optimization problems. Namely, these are tie-line scheduling in multi-area power systems, coordination of distributed energy resources in radial distribution networks, and joint dispatch of transmission and distribution assets. With suitable relaxations or approximations of the power flow equations, all three problems can be reduced to a multi-agent constrained convex optimization problem. We utilize a constant step-size dual subgradient method with averaging on these problems. For this algorithm, we provide a convergence guarantee that is shown to be order-optimal. We illustrate its application on the grid optimization problems.
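
The following is a minimal single-agent sketch of the algorithmic core described above, a constant step-size dual subgradient method with primal averaging. The toy quadratic objective, the linear coupling constraint, and the step size are assumptions made for illustration.

```python
import numpy as np

# Solve  min 0.5*||x - c||^2  s.t.  x1 + x2 <= 1  via dual ascent.
c = np.array([2.0, 1.0])
step, iters = 0.05, 2000
lam = 0.0                                        # dual variable for x1+x2-1 <= 0
x_avg = np.zeros(2)
for k in range(1, iters + 1):
    x = c - lam                                  # closed-form Lagrangian minimizer
    lam = max(0.0, lam + step * (x.sum() - 1.0))  # projected dual subgradient step
    x_avg += (x - x_avg) / k                     # running average of primal iterates
print(x_avg)                                     # approaches the optimum (1, 0)
```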
There has been work on exploiting polynomial approximation to solve distributed nonconvex optimization problems involving univariate objectives. This idea facilitates arbitrarily precise global optimization without requiring local evaluations of gradients at every iteration. Nonetheless, there remains a gap between existing theoretical guarantees and diverse practical requirements for dependability, notably privacy preservation and robustness to network imperfections (e.g., time-varying directed communication and asynchrony). To fill this gap while retaining the above strengths, we propose a Dependable Chebyshev-Proxy-based distributed Optimization Algorithm (D-CPOA). Specifically, to ensure both accuracy of solutions and privacy of local objectives, a new privacy-preserving mechanism is designed. This mechanism leverages the randomness in blockwise insertions of perturbed vector states and hence provides an improved privacy guarantee compared to the literature in terms of $(\alpha,\beta)$-data-privacy. Furthermore, to gain robustness to various network imperfections, we use the push-sum consensus protocol as a backbone, discuss its specific enhancements, and evaluate the performance of the proposed algorithm accordingly. Thanks to the linear consensus-based structure of iterations, we avoid the privacy-accuracy trade-off and the difficulty of selecting appropriate step sizes in different settings. We provide rigorous analysis of the accuracy, dependability, and complexity. It is shown that the advantages brought by the idea of polynomial approximation are maintained when all the above requirements exist. Simulations demonstrate the effectiveness of the developed algorithm.
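
To illustrate the underlying Chebyshev-proxy idea (not D-CPOA itself, which adds privacy and consensus layers on top), the sketch below fits a Chebyshev interpolant to a nonconvex univariate objective on [-1, 1] and minimizes the proxy globally through its critical points. The objective and interpolation degree are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.sin(5 * x) + 0.5 * x**2           # nonconvex example objective
deg = 40
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))  # Chebyshev nodes
coeffs = C.chebfit(nodes, f(nodes), deg)            # Chebyshev proxy coefficients

crit = C.chebroots(C.chebder(coeffs))               # critical points of the proxy
crit = crit[np.isreal(crit)].real
cand = np.concatenate([crit[(crit >= -1) & (crit <= 1)], [-1.0, 1.0]])
x_star = cand[np.argmin(C.chebval(cand, coeffs))]   # global proxy minimizer
print(x_star, f(x_star))
```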
We focus on the solutions of second-order stable linear difference equations and demonstrate that their behavior can be non-monotone and exhibit peak effects depending on initial conditions. The results are applied to the analysis of the Heavy Ball method, an accelerated unconstrained optimization method. We explain the non-standard behavior of the method discovered in practical applications. In addition, such non-monotonicity complicates the correct choice of parameters in optimization methods. We propose to overcome this difficulty by introducing a new Lyapunov function that should decrease monotonically. Using this function, convergence of the method is established under less restrictive assumptions (for instance, without convexity). We also suggest some restart techniques to speed up the method's convergence.
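
The peak effect is easy to reproduce. The sketch below runs Polyak's heavy-ball recursion $x_{k+1} = x_k - h\nabla f(x_k) + \beta(x_k - x_{k-1})$ on a two-dimensional quadratic with the classical parameter tuning; the distance to the optimum grows for several iterations before decaying. The eigenvalues and the initial pair are illustrative choices.

```python
import numpy as np

mu, L = 1.0, 100.0
A = np.diag([mu, L])                                   # f(x) = 0.5 * x^T A x
h = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2              # classical step size
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2  # momentum

x_prev, x = np.zeros(2), np.ones(2)                    # initial pair drives the peak
for k in range(30):
    x, x_prev = x - h * (A @ x) + beta * (x - x_prev), x
    print(k, np.linalg.norm(x))                        # rises above 2 before decaying
```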
Jorge I. Poveda, Na Li (2019)
We study novel robust zero-order algorithms with acceleration for the solution of real-time optimization problems. In particular, we propose a family of extremum seeking dynamics that can be universally modeled as singularly perturbed hybrid dynamical systems with restarting mechanisms. From this family of dynamics, we synthesize four fast algorithms for the solution of convex, strongly convex, constrained, and unconstrained optimization problems. In each case, we establish robust semi-global practical asymptotic or exponential stability results, and we show how to obtain well-posed discretized algorithms that retain the main properties of the original dynamics. Given that existing averaging theorems for singularly perturbed hybrid systems are not directly applicable to our setting, we derive a new averaging theorem that relaxes some of the assumptions made in the literature, allowing us to make a clear link between the KL bounds that characterize the rates of convergence of the hybrid dynamics and their average dynamics. We also show that our results are applicable to non-hybrid algorithms, thus providing a general framework for accelerated dynamics based on averaging theory. We present different numerical examples to illustrate our results.
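
For intuition, here is a minimal sketch of basic (non-hybrid) extremum seeking: probe the objective with a sinusoidal dither, demodulate the measured value to estimate the gradient, and descend. The gains, dither frequency, and objective are illustrative; the paper's accelerated hybrid dynamics add momentum-type states and restarting mechanisms on top of this baseline.

```python
import numpy as np

f = lambda x: (x - 2.0) ** 2      # unknown objective, queried only by its value
amp, w, gain, dt = 0.2, 10.0, 0.5, 0.01

x_hat, t = 0.0, 0.0
for _ in range(20000):
    dither = amp * np.sin(w * t)
    y = f(x_hat + dither)                                 # zero-order measurement
    x_hat -= dt * gain * (2.0 / amp) * y * np.sin(w * t)  # demodulated descent step
    t += dt
print(x_hat)                      # drifts toward the minimizer x* = 2
```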