In this article, we consider an accelerated first-order method, namely the method of similar triangles, which is optimal for the class of convex (strongly convex) problems solved with first-order (gradient) information. The paper considers a model of additive noise in the gradient and a Euclidean prox-structure. Convergence estimates are obtained both with and without strong convexity, and a stopping criterion is proposed for non-strongly convex problems.
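As a rough illustration of the method of similar triangles with an inexact gradient oracle and Euclidean prox-structure, here is a minimal sketch; the interface (`grad`, `L`, `x0`) and the standard step-size rule $L\alpha_{k+1}^2 = A_k + \alpha_{k+1}$ are assumptions, not the cited paper's exact algorithm.

```python
import numpy as np

def similar_triangles(grad, L, x0, n_iter=100):
    """Sketch of the method of similar triangles (Euclidean prox-structure).

    `grad` is assumed to return an inexact gradient (exact gradient plus
    additive noise); `L` is the Lipschitz constant of the exact gradient.
    """
    x = y = u = np.asarray(x0, dtype=float)
    A = 0.0
    for _ in range(n_iter):
        # alpha solves L * alpha^2 = A + alpha, so that A_new = A + alpha
        alpha = (1.0 + np.sqrt(1.0 + 4.0 * L * A)) / (2.0 * L)
        A_new = A + alpha
        y = (alpha * u + A * x) / A_new   # extrapolation point
        u = u - alpha * grad(y)           # momentum ("dual") sequence
        x = (alpha * u + A * x) / A_new   # primal iterate
        A = A_new
    return x
```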
The optimized gradient method (OGM) provides a factor-$\sqrt{2}$ speedup over Nesterov's celebrated accelerated gradient method in the convex (but non-strongly convex) setting. However, this improved acceleration mechanism has not been well understood; …
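For context, the factor-$\sqrt{2}$ claim can be read off the standard worst-case guarantees (constants recalled from the OGM literature; the $\lesssim$ hides lower-order terms):
\[
f(y_N) - f_\star \;\le\; \frac{2L\|x_0 - x_\star\|^2}{(N+1)^2} \quad \text{(Nesterov's method)},
\qquad
f(y_N) - f_\star \;\lesssim\; \frac{L\|x_0 - x_\star\|^2}{(N+1)^2} \quad \text{(OGM)},
\]
so reaching a given accuracy $\varepsilon$ takes roughly $\sqrt{2}$ times fewer iterations with OGM.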
We study the distributed stochastic gradient (D-SG) method and its accelerated variant (D-ASG) for solving decentralized strongly convex stochastic optimization problems in which the objective function is distributed over several computational units, …
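A minimal sketch of a decentralized stochastic gradient step of this kind is given below; the mixing matrix `W`, the constant step size, and the final averaging are illustrative assumptions, not the paper's exact D-SG/D-ASG pseudocode.

```python
import numpy as np

def decentralized_sgd(stoch_grads, W, X0, lr=0.05, n_iter=200):
    """Illustrative decentralized stochastic gradient method.

    stoch_grads : list of callables, one per node, each returning a
                  stochastic gradient of that node's local objective.
    W           : doubly stochastic mixing matrix matching the graph.
    X0          : array of shape (n_nodes, dim), initial local iterates.
    """
    X = np.array(X0, dtype=float)
    for _ in range(n_iter):
        G = np.stack([g(x) for g, x in zip(stoch_grads, X)])
        # gossip/averaging with neighbours, then a local gradient step
        X = W @ X - lr * G
    return X.mean(axis=0)  # average of the nodes' iterates
```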
Based on the observation that additive Schwarz methods for general convex optimization can be interpreted as gradient methods, we propose an acceleration scheme for additive Schwarz methods. Adopting acceleration techniques developed for gradient methods, …
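One way to picture this gradient-method interpretation is to wrap an abstract additive Schwarz update in Nesterov-type momentum, as in the sketch below; the `schwarz_step` interface and the momentum schedule are assumptions, not the authors' scheme.

```python
def accelerated_schwarz(schwarz_step, x0, n_iter=50,
                        momentum=lambda k: k / (k + 3)):
    """Sketch: additive Schwarz update treated as a gradient-like step
    and combined with Nesterov-style extrapolation.

    schwarz_step(y) should return the summed local (subdomain) corrections
    at the point y, playing the role of a (preconditioned) gradient step.
    """
    x_prev = x = x0
    for k in range(n_iter):
        y = x + momentum(k) * (x - x_prev)   # extrapolation
        x_prev, x = x, y + schwarz_step(y)   # additive Schwarz update at y
    return x
```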
This paper gives a unified convergence analysis of additive Schwarz methods for general convex optimization problems. Mirroring the fact that additive Schwarz methods for linear problems are preconditioned Richardson methods, we prove that additive …
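For reference, the linear-problem fact alluded to here is the standard identity (with notation assumed as follows: $R_i$ restriction operators to subdomains, $A_i = R_i A R_i^{\top}$ the local matrices, $\tau$ a damping parameter):
\[
M_{\mathrm{AS}}^{-1} \;=\; \sum_{i=1}^{N} R_i^{\top} A_i^{-1} R_i,
\qquad
u^{k+1} \;=\; u^{k} + \tau\, M_{\mathrm{AS}}^{-1}\bigl(b - A u^{k}\bigr),
\]
i.e., the one-level additive Schwarz method applied to $Au = b$ is a preconditioned Richardson iteration.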
We study a new aggregation operator for gradients coming from a mini-batch for stochastic gradient (SG) methods that allows a significant speed-up in the case of sparse optimization problems. We call this method AdaBatch; it only requires a few lines …
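The flavour of such a sparse-aware aggregation can be sketched as follows: each coordinate is averaged only over the samples in the mini-batch for which it is non-zero, rather than over the full batch size. This is an illustrative sketch of the idea described in the abstract, not the paper's exact update rule.

```python
import numpy as np

def sparse_aggregate(grads):
    """Per-coordinate aggregation of a mini-batch of sparse gradients.

    grads : array of shape (batch_size, dim), rows are per-sample gradients.
    """
    G = np.asarray(grads, dtype=float)
    counts = np.count_nonzero(G, axis=0)   # per-coordinate support size
    counts = np.maximum(counts, 1)         # avoid division by zero
    return G.sum(axis=0) / counts          # coordinate-wise average
```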