
An Accelerated Composite Gradient Method for Large-scale Composite Objective Problems

Published by Mihai Florea
Publication date: 2016
Language: English





We introduce a framework, which we denote as the augmented estimate sequence, for deriving fast algorithms with provable convergence guarantees. We use this framework to construct a new first-order scheme, the Accelerated Composite Gradient Method (ACGM), for large-scale problems with composite objective structure. ACGM surpasses the state-of-the-art methods for this problem class in terms of provable convergence rate, in both the strongly and non-strongly convex cases, and is endowed with an efficient step size search procedure. We support the effectiveness of our new method with simulation results.
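To make the composite setting concrete: the objective has the form $F(x) = f(x) + \Psi(x)$, with $f$ smooth and $\Psi$ admitting an inexpensive proximal operator. The sketch below shows a generic FISTA-style accelerated proximal gradient iteration with a backtracking step size search; it illustrates the family of methods ACGM belongs to, but it is not the paper's ACGM update, and the names (`fista_backtracking`, `prox_g`) are illustrative only.

```python
import numpy as np

def fista_backtracking(f, grad_f, prox_g, x0, L0=1.0, eta=2.0, iters=200):
    # Generic accelerated proximal gradient (FISTA-style) with backtracking.
    # f, grad_f: smooth part of the composite objective and its gradient.
    # prox_g(v, t): proximal operator of the nonsmooth part with step t.
    # NOT the paper's ACGM; a minimal sketch of the same algorithm family.
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(iters):
        g_y = grad_f(y)
        while True:  # backtracking step size search
            x_next = prox_g(y - g_y / L, 1.0 / L)
            d = x_next - y
            # accept once the local quadratic upper model of f holds at y
            if f(x_next) <= f(y) + g_y @ d + 0.5 * L * (d @ d) + 1e-12:
                break
            L *= eta
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x
```

For instance, with $f(x) = \frac{1}{2}\|Ax - b\|^2$ and $\Psi = \lambda\|\cdot\|_1$, `prox_g` is soft-thresholding: `lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)`.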




Read also

A.S. Lewis, S.J. Wright (2015)
We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
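As a concrete instance of this framework: with the smooth outer function $h(z) = \frac{1}{2}\|z\|^2$, the linearize-and-regularize subproblem for $\min_x h(c(x))$ has a closed-form solution, recovering a Levenberg-Marquardt step. The sketch below shows only this special case; nonsmooth choices of $h$ require a convex subproblem solver, and the interface here is an assumption for illustration.

```python
import numpy as np

def prox_linear_step(c, jac, x, mu=1.0):
    # One subproblem of the linearization framework for min_x h(c(x)),
    # specialized to h(z) = 0.5 * ||z||^2 (a Levenberg-Marquardt step).
    # c(x): smooth vector function; jac(x): its Jacobian; mu: weight of
    # the regularization term added to the linearized model.
    r, J = c(x), jac(x)
    # argmin_d 0.5 * ||r + J d||^2 + 0.5 * mu * ||d||^2  (closed form)
    d = np.linalg.solve(J.T @ J + mu * np.eye(J.shape[1]), -J.T @ r)
    return x + d
```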
Yangyang Xu (2020)
Stochastic gradient methods (SGMs) have been extensively used for solving stochastic problems or large-scale machine learning problems. Recent works employ various techniques to improve the convergence rate of SGMs for both convex and nonconvex cases. Most of them require a large number of samples in some or all iterations of the improved SGMs. In this paper, we propose a new SGM, named PStorm, for solving nonconvex nonsmooth stochastic problems. With a momentum-based variance reduction technique, PStorm can achieve the optimal complexity result $O(\varepsilon^{-3})$ to produce a stochastic $\varepsilon$-stationary solution, if a mean-squared smoothness condition holds and $\Theta(\varepsilon^{-1})$ samples are available for the initial update. Different from existing optimal methods, PStorm can still achieve a near-optimal complexity result $\tilde{O}(\varepsilon^{-3})$ by using only one or $O(1)$ samples in every update. With this property, PStorm can be applied to online learning problems that favor real-time decisions based on one or $O(1)$ new observations. In addition, for large-scale machine learning problems, PStorm can generalize better with small-batch training than other optimal methods that require large-batch training, and better than the vanilla SGM, as we demonstrate on training a sparse fully-connected neural network and a sparse convolutional neural network.
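The momentum-based variance reduction that PStorm builds on can be written as a single recursion in which the same fresh sample is evaluated at both the current and the previous iterate. The sketch below shows this generic STORM-style estimator; it is a schematic of the technique, not necessarily PStorm's exact update rule.

```python
def storm_estimator(stoch_grad, x_prev, x_curr, d_prev, xi, beta):
    # Momentum-based variance-reduced gradient estimate (STORM-style):
    #   d_t = g(x_t; xi) + (1 - beta) * (d_{t-1} - g(x_{t-1}; xi)),
    # where the SAME sample xi is used at both points, so the correction
    # term has small variance when x_t is close to x_{t-1}. Schematic of
    # the technique PStorm builds on, not necessarily its exact rule.
    return stoch_grad(x_curr, xi) + (1.0 - beta) * (d_prev - stoch_grad(x_prev, xi))
```

The iterate is then advanced by a (proximal) step along the estimate $d_t$; because each update touches only one fresh sample, this is what enables the one-sample-per-update property described above.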
Many modern statistical estimation problems are defined by three major components: a statistical model that postulates the dependence of an output variable on the input features; a loss function measuring the error between the observed output and the model-predicted output; and a regularizer that controls overfitting and/or variable selection in the model. We study the sampling version of this generic statistical estimation problem, where the model parameters are estimated by empirical risk minimization, which involves minimizing the empirical average of the loss function at the data points combined with the model regularizer. In our setup we allow all three component functions discussed above to be of the difference-of-convex (dc) type and illustrate them with a host of commonly used examples, including those in continuous piecewise affine regression and in deep learning (where the activation functions are piecewise affine). We describe a nonmonotone majorization-minimization (MM) algorithm for solving the unified nonconvex, nondifferentiable optimization problem, which is formulated as a specially structured composite dc program of the pointwise max type, and present convergence results to a directional stationary solution. An efficient semismooth Newton method is proposed to solve the dual of the MM subproblems. Numerical results are presented to demonstrate the effectiveness of the proposed algorithm and the superiority of continuous piecewise affine regression over the standard linear model.
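The key mechanism here is the DC surrogate: writing the objective as a difference $g_1(x) - g_2(x)$ of convex functions, each MM step linearizes the subtracted part at the current point, which yields a convex majorizer that touches the objective there. The sketch below shows only this classical linearization step; the paper's nonmonotone MM rule and semismooth Newton subproblem solver are not reproduced, and `scipy.optimize.minimize` stands in for a proper convex solver.

```python
import numpy as np
from scipy.optimize import minimize

def dc_mm_step(g1, grad_g2, x_k):
    # One majorization-minimization step for a DC objective g1(x) - g2(x):
    # by convexity, g2(x) >= g2(x_k) + <s, x - x_k>, so replacing -g2 with
    # its linearization at x_k gives a convex upper bound (up to a constant).
    s = grad_g2(x_k)  # a (sub)gradient of g2 at x_k
    surrogate = lambda x: g1(x) - s @ (x - x_k)
    return minimize(surrogate, x_k).x
```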
A previous paper by the authors introduces an accelerated composite gradient (ACG) variant, namely AC-ACG, for solving nonconvex smooth composite optimization (N-SCO) problems. In contrast to other ACG variants, AC-ACG estimates the local upper curvature of the N-SCO problem using the average of the upper-Lipschitz curvatures observed during previous iterations, and uses this estimate together with two composite resolvent evaluations to compute the next iterate. This paper presents an alternative FISTA-type ACG variant, namely AC-FISTA, which has the following additional features: i) it performs, on average, one composite resolvent evaluation per iteration; and ii) it estimates the local upper curvature using the average of the previously observed upper (instead of upper-Lipschitz) curvatures. Together, these two properties yield a practical AC-FISTA variant that substantially outperforms earlier ACG variants, including the AC-ACG variants discussed in the aforementioned paper.
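The quantity being averaged is the upper curvature observed on each step, i.e. how far the function rises above its linearization, scaled to a quadratic coefficient. A minimal sketch of that observation follows; how the running average is formed and scaled into a step size is left schematic, since the abstract does not specify it.

```python
def observed_curvature(f, grad_f, x, y):
    # Upper curvature of f observed on the segment from x to y:
    #   C = 2 * [f(y) - f(x) - <grad_f(x), y - x>] / ||y - x||^2,
    # i.e. the smallest M for which the quadratic model at x with
    # coefficient M matches f at y. AC-type methods average such
    # observations over past iterations to set the step size (schematic).
    d = y - x
    return 2.0 * (f(y) - f(x) - grad_f(x) @ d) / (d @ d)
```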
We propose a semi-proximal augmented Lagrangian based decomposition method for convex composite quadratic conic programming problems with primal block angular structures. Using our algorithmic framework, we are able to naturally derive several well-known augmented Lagrangian based decomposition methods for stochastic programming, such as the diagonal quadratic approximation method of Mulvey and Ruszczyński. Moreover, we are able to derive novel enhancements and generalizations of these well-known methods. We also propose a semi-proximal symmetric Gauss-Seidel based alternating direction method of multipliers for solving the corresponding dual problem. Numerical results show that our algorithms can perform well even for very large instances of primal block angular convex QP problems. For example, one instance with more than $300,000$ linear constraints and $12,500,000$ nonnegative variables is solved in less than a minute, whereas Gurobi took more than 3 hours; another instance, qp-gridgen1, with more than $331,000$ linear constraints and $986,000$ nonnegative variables is solved in about 5 minutes, whereas Gurobi took more than 35 minutes.
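For intuition, primal block angular structure means otherwise independent blocks are coupled only through linking constraints $\sum_i A_i x_i = b$, which is what makes augmented Lagrangian decomposition natural. The sketch below shows one schematic Gauss-Seidel pass and multiplier update with a hypothetical per-block subproblem oracle; the paper's semi-proximal terms and symmetric Gauss-Seidel sweep are omitted.

```python
def alm_decomposition_pass(block_prox, A, b, x, lam, sigma):
    # One Gauss-Seidel pass of augmented Lagrangian decomposition for
    #   min sum_i f_i(x_i)  s.t.  sum_i A_i x_i = b  (block angular coupling).
    # block_prox[i](v, t) is a hypothetical oracle returning
    #   argmin_{x_i} f_i(x_i) + (1 / (2 t)) * ||A_i x_i - v||^2.
    m = len(x)
    for i in range(m):
        # coupling target for block i, holding the other blocks fixed
        others = sum(A[j] @ x[j] for j in range(m) if j != i)
        x[i] = block_prox[i](b - others - lam / sigma, 1.0 / sigma)
    # dual ascent on the linking constraint
    lam = lam + sigma * (sum(A[j] @ x[j] for j in range(m)) - b)
    return x, lam
```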