
Accelerated meta-algorithm for convex optimization

Added by Darina Dvinskikh
Publication date: 2020
Language: English





We propose an accelerated meta-algorithm which makes it possible to obtain accelerated methods for convex unconstrained minimization in different settings. As an application of the general scheme, we propose nearly optimal methods for minimizing smooth functions with Lipschitz derivatives of an arbitrary order, as well as for smooth minimax optimization problems. The proposed meta-algorithm is more general than those in the literature and yields better convergence rates and practical performance in several settings.
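The meta-algorithm itself is not reproduced here, but as a point of reference, below is a minimal Python/NumPy sketch of classical Nesterov accelerated gradient descent for an L-smooth convex function, i.e. the first-order method that accelerated frameworks of this kind recover as a special case. The quadratic test objective, the step size 1/L, and the function names are illustrative assumptions, not the paper's algorithm.

import numpy as np

def nesterov_agd(grad, x0, L, n_iters=100):
    """Classical accelerated gradient descent for an L-smooth convex f.

    grad : callable returning the gradient of f at a point
    x0   : starting point
    L    : Lipschitz constant of the gradient (step size 1/L)
    """
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iters):
        x_next = y - grad(y) / L                          # gradient step from the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Illustrative use on a simple quadratic f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2          # smoothness constant of this quadratic
x_opt = nesterov_agd(lambda x: A.T @ (A @ x - b), np.zeros(10), L)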



Related research

This paper investigates accelerating the convergence of distributed optimization algorithms on non-convex problems. We propose a distributed primal-dual stochastic gradient descent (SGD) equipped with the powerball method for acceleration. We show that the proposed algorithm achieves the linear-speedup convergence rate $\mathcal{O}(1/\sqrt{nT})$ for general smooth (possibly non-convex) cost functions. We demonstrate the efficiency of the algorithm through numerical experiments, training two-layer fully connected neural networks and convolutional neural networks on the MNIST dataset and comparing against state-of-the-art distributed SGD algorithms and centralized SGD algorithms.
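For context, the powerball transform mentioned above replaces each gradient entry g_i with sign(g_i)|g_i|^gamma for some gamma in (0, 1) before the update. A minimal single-node sketch of such a step is given below; the learning rate, the exponent, and the function names are illustrative assumptions, and the distributed primal-dual structure of the proposed algorithm is not reproduced.

import numpy as np

def powerball(g, gamma=0.5):
    """Elementwise powerball transform: sign(g) * |g|**gamma, gamma in (0, 1)."""
    return np.sign(g) * np.abs(g) ** gamma

def powerball_sgd_step(x, stoch_grad, lr=0.01, gamma=0.5):
    """One SGD step using the powerball-transformed stochastic gradient."""
    return x - lr * powerball(stoch_grad(x), gamma)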
Hao Luo, 2021
This work introduces a second-order differential inclusion for unconstrained convex optimization. At the continuous level, solution existence in a proper sense is obtained, and exponential decay of a novel Lyapunov function along the solution trajectory is derived as well. Then, at the discrete level, based on numerical discretizations of the continuous differential inclusion, both an inexact accelerated proximal point algorithm and an inexact accelerated proximal gradient method are proposed, and some new convergence rates are established via a discrete Lyapunov function.
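As a rough illustration of the accelerated proximal gradient template that the discrete-level methods above build on, here is a sketch of the classical FISTA iteration with an exact proximal step (soft-thresholding for an l1 term is used as the example prox); the paper's inexactness criteria and Lyapunov analysis are not reflected, and all names here are illustrative assumptions.

import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1 (example nonsmooth term)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(grad_f, prox_g, x0, L, n_iters=200):
    """Accelerated proximal gradient (FISTA template) for min f(x) + g(x),
    with f convex and L-smooth, and prox_g the proximal map of g."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iters):
        x_next = prox_g(y - grad_f(y) / L, 1.0 / L)       # proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x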
Yuwen Chen, 2020
Derivative-free optimization (DFO) has recently gained a lot of momentum in machine learning, spawning interest in the community to design faster methods for problems where gradients are not accessible. While some attention has been given to the concept of acceleration in the DFO literature, existing stochastic algorithms for objective functions with a finite-sum structure have not been shown theoretically to achieve an accelerated rate of convergence. Algorithms that use acceleration in such a setting are prone to instabilities, making it difficult to reach convergence. In this work, we exploit the finite-sum structure of the objective in order to design a variance-reduced DFO algorithm that provably yields acceleration. We prove rates of convergence for both smooth convex and strongly-convex finite-sum objective functions. Finally, we validate our theoretical results empirically on several tasks and datasets.
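A hedged sketch of how variance reduction can be combined with derivative-free gradient estimates for a finite-sum objective is shown below: an SVRG-style update in which every component gradient is replaced by a coordinate-wise finite-difference estimate. The acceleration mechanism of the paper is omitted, and the step size, difference parameter, and function names are illustrative assumptions.

import numpy as np

def fd_grad(f, x, h=1e-6):
    """Coordinate-wise forward-difference gradient estimate of f at x."""
    g = np.zeros_like(x)
    fx = f(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        g[j] = (f(x + e) - fx) / h
    return g

def dfo_svrg(fs, x0, lr=0.1, n_epochs=20, m=50, h=1e-6, seed=0):
    """SVRG-style variance-reduced updates using finite-difference gradients
    for a finite-sum objective (1/n) * sum_i fs[i](x)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = len(fs)
    for _ in range(n_epochs):
        snapshot = x.copy()
        full_grad = np.mean([fd_grad(fi, snapshot, h) for fi in fs], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # control-variate (SVRG) correction built from finite differences
            v = fd_grad(fs[i], x, h) - fd_grad(fs[i], snapshot, h) + full_grad
            x = x - lr * v
    return x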
An inexact accelerated stochastic Alternating Direction Method of Multipliers (AS-ADMM) scheme is developed for solving structured separable convex optimization problems with linear constraints. The objective function is the sum of a possibly nonsmooth convex function and a smooth function which is an average of many component convex functions. Problems having this structure often arise in machine learning and data mining applications. AS-ADMM combines the ideas of both ADMM and the stochastic gradient methods using variance reduction techniques. One of the ADMM subproblems employs a linearization technique while a similar linearization could be introduced for the other subproblem. For a specified choice of the algorithm parameters, it is shown that the objective error and the constraint violation are $\mathcal{O}(1/k)$ relative to the number of outer iterations $k$. Under a strong convexity assumption, the expected iterate error converges to zero linearly. A linearized variant of AS-ADMM and incremental sampling strategies are also discussed. Numerical experiments with both stochastic and deterministic ADMM algorithms show that AS-ADMM can be particularly effective for structured optimization arising in big data applications.
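To make the structure concrete, below is a sketch of one linearized stochastic ADMM iteration for min_{x,z} f(x) + g(z) subject to Ax = z, with a stochastic gradient step on the smooth term and a proximal step on the nonsmooth term. The variance-reduction machinery and parameter choices of AS-ADMM are not reproduced, and all names and constants here are illustrative assumptions.

import numpy as np

def linearized_stochastic_admm_step(x, z, u, A, stoch_grad_f, prox_g,
                                    rho=1.0, eta=0.01):
    """One linearized stochastic ADMM iteration for
        min_{x,z} f(x) + g(z)  s.t.  A x = z,
    where stoch_grad_f returns a stochastic gradient of the smooth term f,
    prox_g is the proximal map of g, and u is the scaled dual variable."""
    # x-update: gradient step on the augmented Lagrangian (f linearized)
    grad_aug = stoch_grad_f(x) + rho * A.T @ (A @ x - z + u)
    x_new = x - eta * grad_aug
    # z-update: proximal step on g
    z_new = prox_g(A @ x_new + u, 1.0 / rho)
    # dual update
    u_new = u + A @ x_new - z_new
    return x_new, z_new, u_new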
We suggest a new greedy strategy for convex optimization in Banach spaces and prove its convergence rates under a suitable behavior of the modulus of uniform smoothness of the objective function.