
Robustness of accelerated first-order algorithms for strongly convex optimization problems

 Added by Mihailo Jovanovic
Publication date: 2019
Research language: English





We study the robustness of accelerated first-order algorithms to stochastic uncertainties in gradient evaluation. Specifically, for unconstrained, smooth, strongly convex optimization problems, we examine the mean-squared error in the optimization variable when the iterates are perturbed by additive white noise. This type of uncertainty may arise in situations where an approximation of the gradient is sought through measurements of a real system or in a distributed computation over a network. Even though the underlying dynamics of first-order algorithms for this class of problems are nonlinear, we establish upper bounds on the mean-squared deviation from the optimal solution that are tight up to constant factors. Our analysis quantifies fundamental trade-offs between noise amplification and convergence rates obtained via any acceleration scheme similar to Nesterov's or heavy-ball methods. To gain additional analytical insight, for strongly convex quadratic problems, we explicitly evaluate the steady-state variance of the optimization variable in terms of the eigenvalues of the Hessian of the objective function. We demonstrate that the entire spectrum of the Hessian, rather than just the extreme eigenvalues, influences the robustness of noisy algorithms. We specialize this result to the problem of distributed averaging over undirected networks and examine the role of network size and topology in the robustness of noisy accelerated algorithms.
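As a rough illustration of this setup (not the paper's code or its closed-form expressions), the sketch below considers the heavy-ball method on a strongly convex quadratic with additive white noise entering through the gradient. Because each eigen-mode of the Hessian evolves as a small linear system driven by white noise, its steady-state variance can be computed from a discrete Lyapunov equation; the objective, noise level, and parameter choices are illustrative assumptions.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    def heavy_ball_mode_variance(lam, alpha, beta, sigma2=1.0):
        # Heavy-ball restricted to one eigen-mode of the Hessian (eigenvalue lam),
        # with white gradient noise w_k of variance sigma2:
        #   x_{k+1} = (1 + beta - alpha*lam) x_k - beta x_{k-1} - alpha w_k
        A = np.array([[1.0 + beta - alpha * lam, -beta],
                      [1.0, 0.0]])
        B = np.array([[-alpha], [0.0]])
        # Steady-state covariance P solves P = A P A^T + sigma2 * B B^T
        P = solve_discrete_lyapunov(A, sigma2 * (B @ B.T))
        return P[0, 0]

    # Illustrative heavy-ball tuning for condition number L/m
    m, L = 1.0, 100.0
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(m)) ** 2
    beta = ((np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))) ** 2

    eigs = np.linspace(m, L, 50)   # a spread of Hessian eigenvalues
    mse = sum(heavy_ball_mode_variance(lam, alpha, beta) for lam in eigs)
    print("steady-state mean-squared error (sum over modes):", mse)

Summing the per-mode variances makes the point of the abstract concrete: every eigenvalue of the Hessian, not only the extreme ones, contributes to the steady-state mean-squared error.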



Related research


This paper considers the problem of designing accelerated gradient-based algorithms for optimization and saddle-point problems. The class of objective functions is defined by a generalized sector condition. This class contains strongly convex functions with Lipschitz gradients as well as non-convex functions, which makes it possible to address not only optimization problems but also saddle-point problems. The proposed design procedure relies on a suitable class of Lyapunov functions and on convex semidefinite programming. The proposed synthesis allows the design of algorithms that match, and in some cases exceed, the performance of state-of-the-art accelerated gradient methods.
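As a toy instance of the Lyapunov-plus-semidefinite-programming idea (a classical special case, not the paper's general synthesis for sector-bounded and saddle-point problems), the sketch below certifies the linear convergence rate of plain gradient descent on an m-strongly convex, L-smooth function by solving a small linear matrix inequality with an S-procedure multiplier for the sector condition; the constants and step size are illustrative.

    import cvxpy as cp

    m, L = 1.0, 10.0          # strong convexity and smoothness constants (assumed)
    a = 2.0 / (m + L)         # step size of gradient descent x+ = x - a*grad f(x)

    t = cp.Variable(nonneg=True)     # t = rho^2, squared contraction factor to certify
    lam = cp.Variable(nonneg=True)   # S-procedure multiplier for the sector condition
    # Require ||x+ - x*||^2 <= t ||x - x*||^2 whenever the gradient satisfies the
    # sector inequality implied by m-strong convexity and L-smoothness.
    M = cp.bmat([[1.0 - t - lam * m * L, -a + lam * (m + L) / 2.0],
                 [-a + lam * (m + L) / 2.0, a ** 2 - lam]])
    prob = cp.Problem(cp.Minimize(t), [0.5 * (M + M.T) << 0])
    prob.solve()
    print("certified rate:", float(t.value) ** 0.5,
          "  reference (L - m)/(L + m):", (L - m) / (L + m))

With the step size 2/(m + L), the certified rate should come out close to the classical bound (L - m)/(L + m), which is how such LMI-based analyses are typically validated before being used for design.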
In this study, we present a general framework of outer approximation algorithms for solving convex vector optimization problems, in which the Pascoletti-Serafini (PS) scalarization is solved iteratively. This scalarization finds the minimum distance from a reference point, usually taken as a vertex of the current outer approximation, to the upper image along a given direction. We propose efficient methods for selecting the parameters (the reference point and direction vector) of the PS scalarization and analyze their effects on the overall performance of the algorithm. Unlike existing vertex selection rules from the literature, the proposed methods do not require solving additional single-objective optimization problems. Using a set of test problems, we conduct an extensive computational study with three different stopping criteria: the approximation error, the runtime, and the cardinality of the solution set. We observe that the proposed variants perform well, especially in terms of runtime, compared to existing variants from the literature.
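For reference, the Pascoletti-Serafini scalarization used in this framework, written here in our own notation with reference point v and direction d in the objective space, reads

    \min_{t \in \mathbb{R},\; x \in \mathcal{X}} \; t
    \quad \text{subject to} \quad v + t\,d - f(x) \in C,

where C is the ordering cone (for instance the nonnegative orthant) and f(\mathcal{X}) + C is the upper image; the optimal t measures how far v must be shifted along d before it reaches the upper image.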
We study gradient-based optimization methods obtained by direct Runge-Kutta discretization of the ordinary differential equation (ODE) describing the motion of a heavy ball under a constant friction coefficient. When the function has high-order smoothness and is strongly convex, we show that directly simulating the ODE with known numerical integrators achieves acceleration in a nontrivial neighborhood of the optimal solution. In particular, the neighborhood can grow larger as the condition number of the function increases. Furthermore, our results also hold for nonconvex but quasi-strongly convex objectives. We provide numerical experiments that verify the theoretical rates predicted by our results.
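A minimal sketch of the direct discretization idea: rewrite the heavy-ball ODE x'' + gamma x' + grad f(x) = 0 as a first-order system and integrate it with classical RK4. The quadratic objective, friction coefficient, and step size below are illustrative choices, not the paper's experimental setup.

    import numpy as np

    def grad_f(x):
        # Example objective f(x) = 0.5 * x^T diag(d) x with a spread of curvatures
        d = np.array([1.0, 10.0, 100.0])
        return d * x

    def ode_rhs(y, gamma):
        # State y = (x, v); heavy-ball ODE: x' = v, v' = -gamma*v - grad_f(x)
        x, v = np.split(y, 2)
        return np.concatenate([v, -gamma * v - grad_f(x)])

    def rk4_step(y, h, gamma):
        k1 = ode_rhs(y, gamma)
        k2 = ode_rhs(y + 0.5 * h * k1, gamma)
        k3 = ode_rhs(y + 0.5 * h * k2, gamma)
        k4 = ode_rhs(y + h * k3, gamma)
        return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    y = np.concatenate([np.ones(3), np.zeros(3)])   # x0 = 1, zero initial velocity
    gamma, h = 2.0, 0.05
    for _ in range(500):
        y = rk4_step(y, h, gamma)
    print("final iterate:", y[:3])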
We propose an accelerated meta-algorithm that yields accelerated methods for convex unconstrained minimization in different settings. As an application of the general scheme, we propose nearly optimal methods for minimizing smooth functions with Lipschitz derivatives of arbitrary order, as well as for smooth minimax optimization problems. The proposed meta-algorithm is more general than those in the literature and allows one to obtain better convergence rates and practical performance in several settings.
Hao Luo, 2021
This work introduces a second-order differential inclusion for unconstrained convex optimization. At the continuous level, existence of solutions in a proper sense is established, and exponential decay of a novel Lyapunov function along the solution trajectory is derived. At the discrete level, based on numerical discretizations of the continuous differential inclusion, both an inexact accelerated proximal point algorithm and an inexact accelerated proximal gradient method are proposed, and new convergence rates are established via a discrete Lyapunov function.
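For context, the classical (exact) accelerated proximal gradient template that the paper's inexact variants refine can be sketched as follows; the LASSO-type example and parameters are our own illustration rather than the paper's setting.

    import numpy as np

    def accelerated_prox_grad(grad_f, prox_g, x0, step, iters=200):
        # Standard FISTA-style iteration for min f(x) + g(x) with smooth f
        # and a g whose proximal operator is cheap to evaluate.
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            x_next = prox_g(y - step * grad_f(y), step)
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_next + ((t - 1.0) / t_next) * (x_next - x)
            x, t = x_next, t_next
        return x

    # Example: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1 (soft-thresholding prox)
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((30, 10)), rng.standard_normal(30), 0.1
    grad_f = lambda x: A.T @ (A @ x - b)
    prox_g = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - s * lam, 0.0)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x_hat = accelerated_prox_grad(grad_f, prox_g, np.zeros(10), step)
    print("objective:", 0.5 * np.sum((A @ x_hat - b) ** 2) + lam * np.abs(x_hat).sum())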
