
Global Convergence of Policy Gradient Primal-dual Methods for Risk-constrained LQRs

Added by Feiran Zhao
Publication date: 2021
Language: English





While the techniques in optimal control theory are often model-based, the policy optimization (PO) approach can directly optimize the performance metric of interest without explicit dynamical models, and is an essential approach for reinforcement learning problems. However, it typically leads to a non-convex optimization problem, for which there is little theoretical understanding of performance. In this paper, we focus on the risk-constrained Linear Quadratic Regulator (LQR) problem with noisy input via the PO approach, which results in a challenging non-convex problem. To this end, we first build on our earlier result that the optimal policy has an affine structure to show that the associated Lagrangian function is locally gradient dominated with respect to the policy, based on which we establish strong duality. Then, we design policy gradient primal-dual methods with global convergence guarantees to find an optimal policy-multiplier pair in both model-based and sample-based settings. Finally, we use samples of system trajectories in simulations to validate our policy gradient primal-dual methods.
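
The overall loop is easy to sketch. Below is a minimal, illustrative sample-based primal-dual iteration for a constrained LQR: zeroth-order gradient descent on a linear feedback gain for the Lagrangian, and projected gradient ascent on the multiplier. The dynamics, weights, risk proxy, and budget are all assumptions for illustration; the paper's optimal policy is affine and its risk constraint is specific to the noisy-input setting, so this is not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed dynamics
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)               # assumed LQR weights
risk_budget = 5.0                          # hypothetical risk bound
T, n_rollouts = 30, 10

def costs(K):
    """Monte-Carlo estimates of (LQR cost, risk term) under u = -K x."""
    lqr, risk = 0.0, 0.0
    for _ in range(n_rollouts):
        x = rng.normal(size=2)
        for _ in range(T):
            u = -K @ x
            lqr += x @ Q @ x + u @ R @ u
            risk += (x @ x) ** 2          # illustrative risk proxy, not the paper's
            x = A @ x + B @ u + 0.01 * rng.normal(size=2)
    return lqr / n_rollouts, risk / n_rollouts

def lagrangian(K, lam):
    cost, risk = costs(K)
    return cost + lam * (risk - risk_budget)

def grad_K(K, lam, radius=0.05, n_dirs=8):
    """Two-point zeroth-order estimate of the Lagrangian's gradient in K."""
    g = np.zeros_like(K)
    for _ in range(n_dirs):
        U = rng.normal(size=K.shape)
        U /= np.linalg.norm(U)
        g += (lagrangian(K + radius * U, lam)
              - lagrangian(K - radius * U, lam)) / (2 * radius) * U
    return g / n_dirs

K, lam = np.zeros((1, 2)), 0.0
for _ in range(100):
    K -= 1e-4 * grad_K(K, lam)                                # primal descent
    lam = max(0.0, lam + 1e-3 * (costs(K)[1] - risk_budget))  # projected dual ascent
```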



Related research


In this work, we revisit a classical incremental implementation of the primal-descent dual-ascent gradient method used for the solution of equality constrained optimization problems. We provide a short proof that establishes the linear (exponential) convergence of the algorithm for smooth strongly-convex cost functions and study its relation to the non-incremental implementation. We also study the effect of the augmented Lagrangian penalty term on the performance of distributed optimization algorithms for the minimization of aggregate cost functions over multi-agent networks.
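
As a concrete instance, the following minimal sketch (quadratic problem data are assumed, not taken from the paper) runs the primal-descent dual-ascent iteration on an equality-constrained strongly convex quadratic; the effect of the augmented Lagrangian penalty can be probed by varying rho.

```python
import numpy as np

P = np.array([[3.0, 0.5], [0.5, 2.0]])   # strongly convex quadratic cost
q = np.array([1.0, -1.0])
A = np.array([[1.0, 1.0]])               # equality constraint Ax = b
b = np.array([1.0])
rho = 1.0                                 # augmented Lagrangian penalty

x, y = np.zeros(2), np.zeros(1)
alpha, beta = 0.1, 0.1
for _ in range(500):
    grad_x = P @ x + q + A.T @ y + rho * A.T @ (A @ x - b)
    x = x - alpha * grad_x            # primal descent on the (augmented) Lagrangian
    y = y + beta * (A @ x - b)        # dual ascent on the constraint residual

print(x, A @ x - b)   # x approaches the KKT point at a linear rate
```
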
Kui Zhu, Yutao Tang (2021)
This paper studies the distributed optimization problem where the objective functions might be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, we focus on the case when the exact subgradients of the local objective functions cannot be accessed by the agents. To solve this problem, we propose a projected primal-dual dynamics using only approximate subgradients of the objective functions. We first prove that the formulated optimization problem can only be solved up to an approximation error that depends on the accuracy of the available subgradients. Then, we show the exact solvability of this optimization problem if the accumulated approximation error is not too large. After that, we also give a novel componentwise normalized variant to improve the transient behavior of the convergent sequence. The effectiveness of our algorithms is verified by a numerical example.
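
To make the role of subgradient inaccuracy concrete, here is a minimal single-agent sketch (all data assumed; the distributed, multi-agent primal-dual structure of the paper is omitted): a projected subgradient iteration driven by an oracle whose subgradients carry a bounded error eps.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.05   # assumed bound on the subgradient error

def approx_subgrad(x):
    g = np.sign(x - 2.0)                  # exact subgradient of f(x) = |x - 2|
    return g + rng.uniform(-eps, eps)     # bounded approximation error

x = 0.0
for k in range(1, 2001):
    step = 1.0 / k                                          # diminishing step size
    x = np.clip(x - step * approx_subgrad(x), 0.0, 1.0)     # projection onto X = [0, 1]

# x settles near the constrained minimizer x* = 1, up to an error governed
# by eps, mirroring the approximation-dependent solvability result above.
print(x)
```
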
Yonggui Yan, Yangyang Xu (2020)
Stochastic gradient methods (SGMs) have been widely used for solving stochastic optimization problems. A majority of existing works assume no constraints or easy-to-project constraints. In this paper, we consider convex stochastic optimization problems with expectation constraints. For these problems, it is often extremely expensive to perform projection onto the feasible set. Several SGMs in the literature can be applied to solve such expectation-constrained stochastic problems. We propose a novel primal-dual type SGM based on the Lagrangian function. Different from existing methods, our method incorporates an adaptiveness technique to speed up convergence. At each iteration, our method queries an unbiased stochastic subgradient of the Lagrangian function, and then updates the primal variables by an adaptive-SGM step and the dual variables by a vanilla-SGM step. We show that the proposed method has a convergence rate of $O(1/\sqrt{k})$ in terms of the objective error and the constraint violation. Although the convergence rate is the same as those of existing SGMs, we observe significantly faster convergence than an existing non-adaptive primal-dual SGM and a primal SGM on the Neyman-Pearson classification problem and quadratically constrained quadratic programs. Furthermore, we modify the proposed method to solve convex-concave stochastic minimax problems, for which we perform adaptive-SGM updates to both primal and dual variables. A convergence rate of $O(1/\sqrt{k})$ in terms of the primal-dual gap is also established for the modified method.
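
The update structure can be illustrated on a toy expectation-constrained quadratic. The sketch below (all problem data assumed, with AdaGrad standing in for the paper's adaptive update) pairs an adaptive primal step on the Lagrangian with a vanilla projected dual ascent step.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
x, z = np.zeros(d), 0.0
G = np.zeros(d)            # accumulated squared gradients for AdaGrad scaling
eta_x, eta_z = 0.5, 0.01   # fixed small steps, for simplicity of the sketch

def stoch_grads(x):
    xi = rng.normal(size=d)
    grad_f = x - (np.ones(d) + xi)   # unbiased gradient of 0.5*E||x - 1 - xi||^2
    g_val = np.sum(x) - 1.0          # constraint g(x) = sum(x) - 1 <= 0
    grad_g = np.ones(d)
    return grad_f, g_val, grad_g

for k in range(1, 5001):
    grad_f, g_val, grad_g = stoch_grads(x)
    grad_L = grad_f + z * grad_g                # stochastic gradient of the Lagrangian in x
    G += grad_L ** 2
    x -= eta_x * grad_L / (np.sqrt(G) + 1e-8)   # adaptive (AdaGrad-style) primal update
    z = max(0.0, z + eta_z * g_val)             # vanilla projected dual ascent

print(x, np.sum(x) - 1.0)   # near-feasible, near-optimal (x* = 0.2 per coordinate)
```
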
The spectral bundle method proposed by Helmberg and Rendl is well established for solving large-scale semidefinite programs (SDPs), thanks to its low per-iteration computational complexity and strong practical performance. In this paper, we revisit this classic method, showing that it achieves sublinear convergence rates for both the primal and dual SDPs under merely strong duality. Prior to this work, only limited dual guarantees were known. Moreover, we develop a novel variant, called the block spectral bundle method (Block-Spec), which not only enjoys the same convergence rate and low per-iteration complexity, but also speeds up to linear convergence when the SDP admits strict complementarity. Numerically, we demonstrate the effectiveness of both methods, confirming our theoretical finding that the block spectral bundle method can substantially speed up convergence.
Small-scale Mixed-Integer Quadratic Programming (MIQP) problems often arise in embedded control and estimation applications. Driven by the need for algorithmic simplicity on computing platforms with limited memory and computing resources, this paper proposes several approaches to solving MIQPs, either optimally or suboptimally. We specialize an existing Accelerated Dual Gradient Projection (GPAD) algorithm to efficiently solve the Quadratic Programming (QP) relaxations that arise during Branch and Bound (B&B), and propose a generic framework for warm-starting the binary variables that reduces the number of QP relaxations. Moreover, in order to find an integer-feasible combination of the binary variables upfront, two heuristic approaches are presented: ($i$) without using B&B, and ($ii$) using B&B with a significantly reduced number of QP relaxations. Both heuristics return an integer-feasible solution that may be suboptimal but requires much less computational effort. Such a feasible solution can either be implemented directly or used to set an initial upper bound on the optimal cost in B&B. Through different hybrid control and estimation examples involving binary decision variables, we show that the performance of the proposed methods, although very simple to code, is comparable to that of state-of-the-art MIQP solvers.
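
For intuition, here is a minimal depth-first branch-and-bound sketch on a tiny MIQP, with each relaxation solved by scipy's bounded L-BFGS-B rather than the paper's GPAD solver; the warm-starting framework and heuristics are omitted, and all problem data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# min 0.5 x'Qx + q'x  with  x[i] in {0,1} for i in binary_idx  (assumed data)
Q = np.diag([2.0, 2.0, 2.0])
q = np.array([-1.0, -2.2, -0.4])
binary_idx = [1, 2]

def solve_relaxation(fixed):
    """QP relaxation: binaries relaxed to [0,1], except those fixed by branching."""
    bounds = [(None, None)] * 3
    for i in binary_idx:
        bounds[i] = (0.0, 1.0)
    for i, v in fixed.items():
        bounds[i] = (v, v)
    res = minimize(lambda x: 0.5 * x @ Q @ x + q @ x, np.zeros(3),
                   jac=lambda x: Q @ x + q, bounds=bounds)
    return res.x, res.fun

best_val, best_x = np.inf, None
stack = [dict()]                        # nodes are partial binary assignments
while stack:
    fixed = stack.pop()
    x, val = solve_relaxation(fixed)
    if val >= best_val:                 # prune: relaxation no better than incumbent
        continue
    frac = [i for i in binary_idx if 1e-6 < x[i] < 1 - 1e-6]
    if not frac:                        # integer-feasible: update incumbent
        best_val, best_x = val, x
    else:
        i = frac[0]                     # branch on the first fractional binary
        stack += [{**fixed, i: 0.0}, {**fixed, i: 1.0}]

print(best_x, best_val)
```
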
