In this paper, we investigate the continuous-time partial primal-dual gradient dynamics (P-PDGD) for solving convex optimization problems of the form $\min\limits_{x\in X,\, y\in\Omega} f(x)+h(y), \ \text{s.t.}\ Ax+By=C$, where $f(x)$ is strongly convex and smooth, but $h(y)$ is strongly convex and non-smooth. Both affine equality and set constraints are included. We prove the exponential stability of P-PDGD and provide bounds on the decay rates. Moreover, we show that the decay rates can be regulated by tuning the stepsize.
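To make the setting concrete, here is a minimal forward-Euler sketch of a generic primal-dual gradient flow on a small random instance of this problem class, with a proximal step handling the non-smooth part of $h$. The quadratic $f$, the $\ell_1$-regularized $h$, the unconstrained choices $X=\mathbb{R}^n$ and $\Omega=\mathbb{R}^m$, and the stepsize are assumptions made for illustration; the iteration is not the paper's P-PDGD.

```python
# A minimal forward-Euler sketch of a primal-dual gradient flow for
#   min_{x, y}  f(x) + h(y)   s.t.  A x + B y = c,
# with f smooth and strongly convex and h strongly convex but non-smooth.
# Generic illustration, not the P-PDGD of the paper: the instance (f, h, A, B, c),
# the choices X = R^n and Omega = R^m, the proximal handling of h, and the
# stepsize eta are all assumptions made for the example.
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 5, 4, 3
P = np.eye(n) * 2.0                      # f(x) = 0.5 x^T P x + q^T x (strongly convex, smooth)
q = rng.standard_normal(n)
mu, rho = 1.0, 0.5                       # h(y) = 0.5*mu*||y||^2 + rho*||y||_1 (strongly convex, non-smooth)
A = rng.standard_normal((p, n))
B = rng.standard_normal((p, m))
c = rng.standard_normal(p)

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, y, lam = np.zeros(n), np.zeros(m), np.zeros(p)
eta = 0.01                               # small discretization stepsize of the continuous-time flow
for _ in range(30000):
    grad_x = P @ x + q + A.T @ lam       # gradient of the Lagrangian in x
    x = x - eta * grad_x                 # primal gradient descent in x
    # proximal gradient step in y to handle the non-smooth part of h
    y = soft_threshold(y - eta * (mu * y + B.T @ lam), eta * rho)
    lam = lam + eta * (A @ x + B @ y - c)  # dual gradient ascent on the constraint residual

print("constraint residual:", np.linalg.norm(A @ x + B @ y - c))
```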
We study the problem of detecting infeasibility of large-scale linear programming problems using the primal-dual hybrid gradient method (PDHG) of Chambolle and Pock (2011). The literature on PDHG has mostly focused on settings where the problem at hand …
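For reference, the sketch below shows the basic PDHG iteration of Chambolle and Pock applied to a small, feasible linear program in standard form; the random instance and the stepsizes are assumptions for illustration, and the infeasibility-detection machinery discussed in the abstract is not included.

```python
# A minimal sketch of the PDHG iteration of Chambolle and Pock applied to a
# linear program in the standard form  min c^T x  s.t.  A x = b, x >= 0.
# The random instance and the stepsize choice tau*sigma*||A||^2 < 1 are
# assumptions for illustration; infeasibility detection (the subject of the
# abstract) is not implemented here.
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 6
A = rng.standard_normal((m, n))
x_feas = rng.random(n)                   # build a feasible instance on purpose
b = A @ x_feas
c = rng.random(n)                        # nonnegative costs keep the LP bounded below

op_norm = np.linalg.norm(A, 2)
tau = sigma = 0.9 / op_norm              # tau * sigma * ||A||^2 < 1

x, y = np.zeros(n), np.zeros(m)
for _ in range(20000):
    x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)   # projected primal step
    y = y + sigma * (b - A @ (2 * x_new - x))          # dual step with extrapolation
    x = x_new

print("primal residual:", np.linalg.norm(A @ x - b))
print("objective:", c @ x)
```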
While the techniques in optimal control theory are often model-based, the policy optimization (PO) approach can directly optimize the performance metric of interest without explicit dynamical models, and is an essential approach for reinforcement learning …
This paper investigates the problem of regulating in real time a linear dynamical system to the solution trajectory of a time-varying constrained convex optimization problem. The proposed feedback controller is based on an adaptation of the saddle-flow …
In this work, we revisit a classical incremental implementation of the primal-descent dual-ascent gradient method used for the solution of equality-constrained optimization problems. We provide a short proof that establishes the linear (exponential) convergence …
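As an illustration of the iteration being analyzed, the sketch below runs an incremental (Gauss-Seidel style) primal-descent dual-ascent loop on a small equality-constrained quadratic program, where the dual update uses the freshly updated primal iterate; the quadratic objective and the stepsize are assumptions made for the example.

```python
# A minimal sketch of an incremental (Gauss-Seidel style) primal-descent
# dual-ascent gradient iteration for  min f(x)  s.t.  A x = b,
# where the dual update uses the freshly updated primal iterate.
# The quadratic f and the stepsize alpha are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 3
P = np.eye(n) * 3.0                      # f(x) = 0.5 x^T P x + q^T x, strongly convex
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, lam = np.zeros(n), np.zeros(m)
alpha = 0.05
for _ in range(5000):
    x = x - alpha * (P @ x + q + A.T @ lam)   # primal descent on the Lagrangian
    lam = lam + alpha * (A @ x - b)           # dual ascent using the updated x

print("constraint residual:", np.linalg.norm(A @ x - b))
print("stationarity residual:", np.linalg.norm(P @ x + q + A.T @ lam))
```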
Stochastic gradient methods (SGMs) have been widely used for solving stochastic optimization problems. A majority of existing works assume no constraints or easy-to-project constraints. In this paper, we consider convex stochastic optimization problems …
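For context, the sketch below shows the projected stochastic gradient baseline that the abstract contrasts against, using an easy-to-project Euclidean-ball constraint; the least-squares objective, the constraint set, and the stepsize schedule are assumptions for illustration and not the method proposed in the paper.

```python
# A minimal sketch of the projected stochastic gradient baseline for
# easy-to-project constraints: take a stochastic gradient step, then project
# back onto the constraint set (here, a Euclidean ball).
# The least-squares objective, the ball constraint, and the 1/sqrt(k) stepsize
# are assumptions for illustration, not the method proposed in the paper.
import numpy as np

rng = np.random.default_rng(3)
n, N = 10, 200
A = rng.standard_normal((N, n))
b = rng.standard_normal(N)
radius = 1.0

def project_ball(x, r):
    """Euclidean projection onto the ball of radius r."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else x * (r / nrm)

x = np.zeros(n)
for k in range(1, 20001):
    i = rng.integers(N)                       # sample one data point
    g = (A[i] @ x - b[i]) * A[i]              # stochastic gradient of 0.5*(a_i^T x - b_i)^2
    x = project_ball(x - (1.0 / np.sqrt(k)) * g, radius)

print("constraint satisfied:", np.linalg.norm(x) <= radius + 1e-9)
```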