We propose a new randomized algorithm that, with high probability, solves convex optimization problems with a large number of constraints. Existing methods such as interior-point or Newton-type algorithms are difficult to apply to such problems because of their expensive computation and storage requirements for Hessians and matrix factorizations.
We study decentralized non-convex finite-sum minimization problems defined over a network of nodes, where each node possesses a local batch of data samples. In this context, we analyze a single-timescale randomized incremental gradient method, called GT-SAGA. GT-SAGA is computationally efficient, as it evaluates one component gradient per node per iteration, and achieves provably fast and robust performance by leveraging node-level variance reduction and network-level gradient tracking. For general smooth non-convex problems, we show the almost-sure and mean-squared convergence of GT-SAGA to a first-order stationary point and further describe regimes of practical significance where it outperforms existing approaches and achieves a network-topology-independent iteration complexity. When the global function satisfies the Polyak-Łojasiewicz condition, we show that GT-SAGA exhibits linear convergence to an optimal solution in expectation and describe regimes of practical interest where the performance is network-topology-independent and improves upon existing methods. Numerical experiments are included to highlight the main convergence aspects of GT-SAGA in non-convex settings.
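As a rough illustration of the update described above, the following NumPy sketch combines a SAGA-type variance-reduced gradient estimator at each node with a gradient-tracking step over a ring network on a synthetic least-squares problem. The network size, mixing matrix, data, and step size are illustrative assumptions; this is a schematic of the gradient-tracking plus variance-reduction idea, not the exact GT-SAGA recursion or tuning analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, m, d = 8, 20, 5                      # nodes, samples per node, dimension (illustrative)
alpha = 0.01                                  # step size (illustrative)

# Synthetic local least-squares data: f_{i,j}(x) = 0.5 * (a_{i,j}^T x - b_{i,j})^2
A = rng.normal(size=(n_nodes, m, d))
b = rng.normal(size=(n_nodes, m))

def grad(i, j, x):                            # component gradient of node i, sample j
    return A[i, j] * (A[i, j] @ x - b[i, j])

# Doubly stochastic mixing matrix for a ring network (illustrative choice)
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

x = np.zeros((n_nodes, d))                    # local iterates
table = np.array([[grad(i, j, x[i]) for j in range(m)]
                  for i in range(n_nodes)])   # SAGA gradient table
g = table.mean(axis=1)                        # current local gradient estimators
y = g.copy()                                  # gradient trackers

for k in range(500):
    x = W @ x - alpha * y                     # consensus step + descent along tracked gradient
    g_new = np.empty_like(g)
    for i in range(n_nodes):
        j = rng.integers(m)                   # one component gradient per node per iteration
        gij = grad(i, j, x[i])
        g_new[i] = gij - table[i, j] + table[i].mean(axis=0)   # SAGA variance-reduced estimate
        table[i, j] = gij                     # refresh the table entry
    y = W @ y + g_new - g                     # gradient-tracking update
    g = g_new

x_bar = x.mean(axis=0)
full_grad = np.mean([grad(i, j, x_bar) for i in range(n_nodes) for j in range(m)], axis=0)
print("||grad f(x_bar)|| =", np.linalg.norm(full_grad))
```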
This paper considers a general convex constrained problem setting in which functions are assumed to be neither differentiable nor Lipschitz continuous. Our motivation is to find a simple first-order method for solving a wide range of convex optimization problems with minimal requirements. We study the method of weighted dual averages (Nesterov, 2009) in this setting and prove that it is an optimal method.
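For concreteness, the sketch below shows a weighted dual averaging iteration of the kind referred to above, applied to a nonsmooth objective over the Euclidean unit ball. The constant weights, the quadratic prox-function, the regularization sequence, and the test problem are illustrative assumptions rather than the paper's exact setting or rate analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
m_rows, d = 30, 10
A = rng.normal(size=(m_rows, d))
b = rng.normal(size=m_rows)

def f(x):                                    # nonsmooth convex objective: ||Ax - b||_1
    return np.abs(A @ x - b).sum()

def subgrad(x):                              # a subgradient of f at x
    return A.T @ np.sign(A @ x - b)

def proj_ball(x, radius=1.0):                # projection onto the feasible set (unit ball here)
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

x0 = np.zeros(d)
z = np.zeros(d)                              # weighted sum of subgradients
x = x0.copy()
x_avg, w_sum = np.zeros(d), 0.0
gamma = 1.0                                  # prox-function scaling (illustrative)

for k in range(1, 2001):
    lam = 1.0                                # weight of the k-th subgradient
    z += lam * subgrad(x)
    beta = gamma * np.sqrt(k)                # increasing regularization sequence
    # dual averaging step: argmin_{x in Q} <z, x> + (beta/2) ||x - x0||^2
    x = proj_ball(x0 - z / beta)
    x_avg = (w_sum * x_avg + lam * x) / (w_sum + lam)   # weighted average of iterates
    w_sum += lam

print("f(weighted average) =", f(x_avg))
```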
This paper presents a novel scalable framework for the optimization of a nonlinear system with differential-algebraic equation (DAE) constraints that enforce the asymptotic stability of the underlying dynamic model with respect to certain disturbances. Existing solution approaches to analogous DAE-constrained problems are based on discretizing the DAE system into a large set of nonlinear algebraic equations representing time-marching schemes; these approaches do not scale to large models. The proposed framework, based on LaSalle's invariance principle, uses convex Lyapunov functions to develop a novel stability certificate consisting of a limited number of algebraic constraints. We develop specific algorithms for two major types of nonlinearities, namely Lur'e and quasi-polynomial systems. Quadratic and convex sum-of-squares Lyapunov functions are constructed for the Lur'e-type and quasi-polynomial systems, respectively. A numerical experiment is performed on a 3-generator power network to obtain a solution for transient-stability-constrained optimal power flow.
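As a minimal illustration of constructing a quadratic Lyapunov certificate, the cvxpy sketch below searches for a matrix P such that V(x) = x^T P x decreases along trajectories of the linear part of a dynamic model. The sector conditions of the Lur'e setting and the full DAE-constrained stability certificate developed in the paper are omitted, and the example matrix A is an assumption chosen only so that a certificate exists.

```python
import numpy as np
import cvxpy as cp

# Linear part of an (assumed stable) dynamic model xdot = A x; the Lur'e setting in the
# paper adds a sector-bounded nonlinearity, which this minimal illustration omits.
A = np.array([[-1.0,  2.0],
              [-3.0, -4.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)        # candidate quadratic Lyapunov function V(x) = x^T P x
eps = 1e-3
constraints = [
    P >> eps * np.eye(n),                      # P positive definite
    A.T @ P + P @ A << -eps * np.eye(n),       # Vdot = x^T (A^T P + P A) x < 0
]

prob = cp.Problem(cp.Minimize(0), constraints) # feasibility problem (LMI)
prob.solve(solver=cp.SCS)

print("status:", prob.status)
print("P =\n", P.value)
```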
This paper considers a class of constrained convex stochastic composite optimization problems whose objective function is the sum of a differentiable convex component and a nonsmooth convex component. The nonsmooth component has an explicit max structure whose proximal mapping may not be easy to compute. To solve these problems, we propose a mini-batch stochastic Nesterov's smoothing (MSNS) method. Convergence and the optimal iteration complexity of the method are established. Numerical results are provided to illustrate the efficiency of the proposed MSNS method on a support vector machine (SVM) model.
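To illustrate the smoothing idea (not the MSNS method itself), the sketch below replaces the hinge loss max(0, z) of an SVM with its Nesterov-smoothed counterpart max_{u in [0,1]} (u z - (mu/2) u^2) and runs plain mini-batch stochastic gradient steps. The data, smoothing parameter, batch size, and step sizes are illustrative assumptions, and the acceleration and constraint handling of MSNS are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 500, 20
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=N))   # synthetic labels

mu = 0.1        # smoothing parameter
lam = 1e-2      # l2 regularization weight
batch = 32
step = 0.5

def smoothed_hinge_grad(w, idx):
    """Mini-batch gradient of the Nesterov-smoothed hinge loss plus l2 term.

    max(0, z) is replaced by max_{u in [0,1]} (u*z - mu/2 * u^2), whose gradient
    in z is clip(z / mu, 0, 1)."""
    Xb, yb = X[idx], y[idx]
    z = 1.0 - yb * (Xb @ w)                  # margins
    u = np.clip(z / mu, 0.0, 1.0)            # optimal dual variable of the smoothed max
    return -(u * yb) @ Xb / len(idx) + lam * w

w = np.zeros(d)
for k in range(2000):
    idx = rng.choice(N, size=batch, replace=False)   # mini-batch of component functions
    w -= step / np.sqrt(k + 1) * smoothed_hinge_grad(w, idx)

print("training accuracy:", np.mean(np.sign(X @ w) == y))
```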
We introduce a novel primal-dual flow for affinely constrained convex optimization problems. As a modification of the standard saddle-point system, our primal-dual flow is proved to possess an exponential decay property in terms of a tailored Lyapunov function. A class of primal-dual methods for the original optimization problem is then obtained from numerical discretizations of the continuous flow, and nonergodic convergence rates are established via a unified discrete Lyapunov function. Among these algorithms, we recover the (linearized) augmented Lagrangian method and the quadratic penalty method with a continuation technique. We also propose new methods whose inner problem is either a symmetric positive definite linear system or a nonlinear equation that can be solved efficiently via the semi-smooth Newton method. In particular, numerical tests on linearly constrained $l_1$-$l_2$ minimization show that our method outperforms the accelerated linearized Bregman method.
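As one simple member of the family of discretizations mentioned above, the sketch below runs a linearized augmented Lagrangian iteration on a linearly constrained $l_1$-$l_2$ problem, min ||x||_1 + (mu/2)||x||^2 subject to Ax = b, with soft-thresholding as the proximal step. The step sizes and test data are illustrative assumptions, and this is a generic scheme of the recovered type, not the specific Lyapunov-based discretization proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
m_rows, d = 40, 100
A = rng.normal(size=(m_rows, d))
x_true = np.zeros(d)
x_true[rng.choice(d, size=5, replace=False)] = rng.normal(size=5)   # sparse ground truth
b = A @ x_true

mu = 0.1                                           # l2 weight in the l1-l2 objective
sigma = 1.0                                        # augmentation / dual step size (illustrative)
tau = 0.9 / (sigma * np.linalg.norm(A, 2) ** 2)    # primal step size, tau * sigma * ||A||^2 < 1

def soft(v, t):                                    # soft-thresholding (prox of the l1 norm)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
lam = np.zeros(m_rows)
for k in range(5000):
    # linearized augmented Lagrangian (primal) step
    v = x - tau * A.T @ (lam + sigma * (A @ x - b))
    x = soft(v, tau) / (1.0 + tau * mu)            # prox of tau * (||x||_1 + (mu/2)||x||^2)
    # dual ascent step
    lam = lam + sigma * (A @ x - b)

print("feasibility ||Ax - b|| =", np.linalg.norm(A @ x - b))
print("recovery error ||x - x_true|| =", np.linalg.norm(x - x_true))
```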
Bo Wei, William B. Haskell, Sixiang Zhao (2020). "A Randomized Nonlinear Rescaling Method in Large-Scale Constrained Convex Optimization".