
Adaptive Primal-Dual Stochastic Gradient Method for Expectation-constrained Convex Stochastic Programs

Added by Yonggui Yan
Publication date: 2020
Language: English





Stochastic gradient methods (SGMs) have been widely used for solving stochastic optimization problems. A majority of existing works assume no constraints or easy-to-project constraints. In this paper, we consider convex stochastic optimization problems with expectation constraints. For these problems, it is often extremely expensive to perform projection onto the feasible set. Several SGMs in the literature can be applied to solve such expectation-constrained stochastic problems. We propose a novel primal-dual type SGM based on the Lagrangian function. Different from existing methods, our method incorporates an adaptiveness technique to speed up convergence. At each iteration, our method queries an unbiased stochastic subgradient of the Lagrangian function, and then it updates the primal variables by an adaptive-SGM step and the dual variables by a vanilla-SGM step. We show that the proposed method has a convergence rate of $O(1/\sqrt{k})$ in terms of the objective error and the constraint violation. Although the convergence rate is the same as those of existing SGMs, we observe that it converges significantly faster than an existing non-adaptive primal-dual SGM and a primal SGM on Neyman-Pearson classification and quadratically constrained quadratic programs. Furthermore, we modify the proposed method to solve convex-concave stochastic minimax problems, for which we perform adaptive-SGM updates to both the primal and dual variables. A convergence rate of $O(1/\sqrt{k})$, in terms of the primal-dual gap, is also established for the modified method on minimax problems.
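To make the update structure concrete, below is a minimal sketch of the iteration described in the abstract: an unbiased stochastic subgradient of the Lagrangian is queried, the primal variables take an adaptive (AdaGrad-style) step, and the dual variables take a vanilla projected-SGM step. The toy expectation-constrained problem, step sizes, and variable names are illustrative assumptions, not the authors' exact algorithm or experiments.

```python
# Toy problem: min_x E[0.5*||x - xi||^2]  s.t.  E[x'x - 1] <= 0
# (the constraint is deterministic but written in expectation form).
import numpy as np

rng = np.random.default_rng(0)
d = 5
mu = rng.normal(size=d) * 2.0           # E[xi]; the optimum is the projection of mu onto the unit ball

def stoch_obj_grad(x):                  # unbiased stochastic gradient of E[0.5*||x - xi||^2]
    xi = mu + rng.normal(size=d)
    return x - xi

def stoch_con_val_grad(x):              # value and gradient of the constraint E[x'x - 1]
    return x @ x - 1.0, 2.0 * x

x = np.zeros(d)
z = 0.0                                 # dual variable for the single constraint, kept >= 0
G = np.zeros(d)                         # accumulated squared primal gradients (AdaGrad-style)
alpha, beta, eps = 0.5, 0.05, 1e-8
x_avg, K = np.zeros(d), 5000

for k in range(1, K + 1):
    g_obj = stoch_obj_grad(x)
    c_val, c_grad = stoch_con_val_grad(x)
    u = g_obj + z * c_grad              # stochastic subgradient of the Lagrangian in x
    G += u * u                          # adaptive-SGM primal update
    x -= alpha * u / (np.sqrt(G) + eps)
    z = max(0.0, z + beta / np.sqrt(k) * c_val)   # vanilla projected-SGM dual update
    x_avg += (x - x_avg) / k            # running ergodic average of the primal iterates

print("averaged iterate norm:", np.linalg.norm(x_avg))
print("target norm (projection of mu onto the unit ball):",
      np.linalg.norm(mu / max(1.0, np.linalg.norm(mu))))
```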



Related research

We study constrained stochastic programs where the decision vector at each time slot cannot be chosen freely but is tied to the realization of an underlying random state vector. The goal is to minimize a general objective function subject to linear constraints. A typical scenario where such programs appear is opportunistic scheduling over a network of time-varying channels, where the random state vector is the observed channel state, and the control vector is the transmission decision, which depends on the current channel state. We consider a primal-dual type Frank-Wolfe algorithm that has a low-complexity update during each slot and that learns to make efficient decisions without prior knowledge of the probability distribution of the random state vector. We establish convergence time guarantees for both convex and non-convex objective functions. We also highlight applications of the algorithm to non-convex opportunistic scheduling and distributed non-convex stochastic optimization over a connected graph.
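For orientation, here is a minimal sketch of the Frank-Wolfe building block this abstract relies on: each update solves a cheap linear minimization over the feasible set instead of a projection. It illustrates only the generic step over a box constraint, not the paper's primal-dual, state-dependent variant; the toy quadratic objective and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
A = rng.normal(size=(d, d))
Q = A.T @ A + np.eye(d)                 # toy smooth objective 0.5*x'Qx - b'x
b = rng.normal(size=d)
lo, hi = -np.ones(d), np.ones(d)        # box feasible set [-1, 1]^d

x = np.zeros(d)
for k in range(200):
    grad = Q @ x - b
    s = np.where(grad > 0, lo, hi)      # linear minimization oracle over the box
    gamma = 2.0 / (k + 2.0)             # standard Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * s     # convex combination keeps x feasible

print("objective value:", 0.5 * x @ Q @ x - b @ x)
```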
This paper considers a general convex constrained problem setting where the functions are not assumed to be differentiable or Lipschitz continuous. Our motivation is to find a simple first-order method for solving a wide range of convex optimization problems with minimal requirements. We study the method of weighted dual averages (Nesterov, 2009) in this setting and prove that it is an optimal method.
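A minimal sketch of the (weighted) dual-averaging idea in the spirit of Nesterov (2009): accumulate weighted subgradients and map them back to the feasible set through a prox step with a growing regularization schedule. The nonsmooth objective, unit weights, and Euclidean prox below are illustrative choices, not the precise method analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
m, d, R = 30, 8, 5.0
A = rng.normal(size=(m, d))
b = rng.normal(size=m)

def subgrad(x):                         # a subgradient of f(x) = ||Ax - b||_1
    return A.T @ np.sign(A @ x - b)

x = np.zeros(d)
s = np.zeros(d)                         # weighted sum of subgradients
x_avg, w_sum = np.zeros(d), 0.0
for k in range(1, 2001):
    lam = 1.0                           # weights lambda_k (here: simple averaging)
    s += lam * subgrad(x)
    beta = np.sqrt(k)                   # regularization schedule beta_k ~ sqrt(k)
    y = -s / beta                       # minimizer of <s, x> + (beta/2)*||x||^2
    x = y * min(1.0, R / max(np.linalg.norm(y), 1e-12))  # restrict to the ball ||x|| <= R
    w_sum += lam
    x_avg += lam * (x - x_avg) / w_sum  # weighted running average of the iterates

print("final objective:", np.abs(A @ x_avg - b).sum())
```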
Shengjie Xu (2021)
The augmented Lagrangian method (ALM) is a fundamental tool for solving the canonical convex minimization problem with linear constraints, and how to implement the original ALM efficiently and easily is of significant interest. Recently, He and Yuan have proposed a balanced version of ALM [B.S. He and X.M. Yuan, arXiv:2108.08554, 2021], which reshapes the original ALM by balancing its subproblems and makes the benchmark ALM easier to implement without any additional condition. In practice, the balanced ALM updates the new iterate in a primal-dual order. In this note, exploiting the variational inequality structure of the recent balanced ALM, we propose a dual-primal version of the balanced ALM for linearly constrained convex minimization problems. The proposed method generates the new iterate in a dual-primal order and has the same per-iteration computational cost as the original primal-dual balanced ALM. Furthermore, through the lens of the proximal point algorithm, we conduct the convergence analysis of the proposed method in the context of variational inequalities. Numerical tests on the basis pursuit problem demonstrate that the proposed method achieves the same high efficiency as the prototype balanced ALM.
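For reference, here is a minimal sketch of the canonical ALM (primal step, then multiplier step) that the balanced and dual-primal variants above reshape; it is not the balanced ALM itself. The quadratic objective is an illustrative assumption chosen so that the primal subproblem reduces to a linear solve.

```python
import numpy as np

rng = np.random.default_rng(3)
m, d, beta = 3, 6, 1.0
A = rng.normal(size=(m, d))
b = rng.normal(size=m)
c = rng.normal(size=d)                  # toy problem: min 0.5*||x - c||^2  s.t.  Ax = b

x, lam = np.zeros(d), np.zeros(m)
H = np.eye(d) + beta * A.T @ A          # Hessian of the augmented primal subproblem
for _ in range(100):
    # primal step: x = argmin 0.5*||x - c||^2 + lam'(Ax - b) + (beta/2)*||Ax - b||^2
    x = np.linalg.solve(H, c - A.T @ lam + beta * A.T @ b)
    # dual step: multiplier update with the constraint residual
    lam = lam + beta * (A @ x - b)

print("constraint residual:", np.linalg.norm(A @ x - b))
```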
In this paper we propose several adaptive gradient methods for stochastic optimization. Unlike AdaGrad-type methods, our algorithms are based on Armijo-type line search, and they simultaneously adapt to the unknown Lipschitz constant of the gradient and the variance of the stochastic gradient approximation. We consider accelerated and non-accelerated gradient descent for convex problems and gradient descent for non-convex problems. In experiments, we demonstrate the superiority of our methods over existing adaptive methods, e.g., AdaGrad and Adam.
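A minimal sketch of the Armijo-type backtracking idea this abstract builds on: the step size is found by halving until a sufficient-decrease test holds, so no Lipschitz constant needs to be known in advance. This deterministic version omits the paper's adaptation to stochastic-gradient variance, and the test problem is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 20
A = rng.normal(size=(d, d))
Q = A.T @ A + np.eye(d)
b = rng.normal(size=d)

f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b

x, t = np.zeros(d), 1.0
for _ in range(200):
    g = grad(x)
    t *= 2.0                            # optimistically try a larger step first
    while f(x - t * g) > f(x) - 0.5 * t * g @ g:   # Armijo sufficient-decrease test
        t *= 0.5                        # backtrack: halve the step size
    x = x - t * g

print("gradient norm at the end:", np.linalg.norm(grad(x)))
```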
Liwei Zhang, Yule Zhang, Jia Wu (2021)
This paper considers the problem of minimizing a convex expectation function subject to a set of inequality convex expectation constraints. We present a computable stochastic approximation type algorithm, namely the stochastic linearized proximal method of multipliers, to solve this convex stochastic optimization problem. This algorithm can be roughly viewed as a hybrid of stochastic approximation and the traditional proximal method of multipliers. Under mild conditions, we show that this algorithm exhibits $O(K^{-1/2})$ expected convergence rates for both objective reduction and constraint violation if the parameters in the algorithm are properly chosen, where $K$ denotes the number of iterations. Moreover, we show that, with high probability, the algorithm has an $O(\log(K)K^{-1/2})$ constraint-violation bound and an $O(\log^{3/2}(K)K^{-1/2})$ objective bound. Preliminary numerical results demonstrate the performance of the proposed algorithm.
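Below is a heavily hedged sketch of the general structure this abstract describes, namely stochastic linearization of the objective and constraints, a cheap primal step, and a multiplier update; the exact update rules, step sizes, and toy problem are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 5
mu = rng.normal(size=d) * 2.0           # toy problem: min E[0.5*||x - xi||^2]  s.t.  E[x'x - 1] <= 0

def stoch_f_grad(x):                    # unbiased stochastic gradient of the objective
    return x - (mu + rng.normal(size=d))

def stoch_g(x):                         # value and gradient of the constraint E[x'x - 1]
    return x @ x - 1.0, 2.0 * x

x, lam, rho = np.zeros(d), 0.0, 1.0
for k in range(1, 5001):
    eta = 1.0 / np.sqrt(k)              # stochastic-approximation step size
    g_val, g_grad = stoch_g(x)
    # linearized augmented-Lagrangian direction: objective gradient plus the constraint
    # gradient weighted by the augmented multiplier max(0, lam + rho * g_val)
    direction = stoch_f_grad(x) + max(0.0, lam + rho * g_val) * g_grad
    x = x - eta * direction             # primal (gradient/proximal-type) step
    lam = max(0.0, lam + eta * g_val)   # multiplier update on the constraint residual

print("||x|| at the end:", np.linalg.norm(x))
```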
