Conic optimization is the minimization of a differentiable convex objective function subject to conic constraints. We propose a novel primal-dual first-order method for conic optimization, named the proportional-integral projected gradient method (PIPG). PIPG ensures that both the primal-dual gap and the constraint violation converge to zero at the rate of O(1/k), where k is the number of iterations. If the objective function is strongly convex, PIPG improves the convergence rate of the primal-dual gap to O(1/k^2). Further, unlike existing first-order methods, PIPG also improves the convergence rate of the constraint violation to O(1/k^3). We demonstrate the application of PIPG in constrained optimal control problems.
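The abstract above describes a primal-dual first-order scheme that alternates projected gradient steps on the primal variable with multiplier updates on the constraints. A minimal sketch of a generic primal-dual projected-gradient iteration in that spirit (Chambolle-Pock/PDHG style, not the authors' exact PIPG method) is shown below; the problem data, step sizes, and iteration count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative problem (assumed, not from the paper):
#   minimize 0.5*||x - c||^2  subject to  A x = b,  x >= 0
c = np.array([1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.zeros(2)          # primal iterate
y = np.zeros(1)          # dual iterate (multiplier for A x = b)
alpha, beta = 0.2, 0.2   # primal/dual step sizes (untuned assumptions)

for _ in range(5000):
    grad = x - c                                            # objective gradient
    x_new = np.maximum(x - alpha * (grad + A.T @ y), 0.0)   # projected primal step
    y = y + beta * (A @ (2 * x_new - x) - b)                # dual step with extrapolation
    x = x_new

# The iterates approach the optimum x = (1, 0) of this toy problem,
# driving both the objective gap and the residual A x - b toward zero.
print(np.round(x, 3))
```

The primal step combines a gradient step with a projection onto the cone (here the nonnegative orthant), while the dual step accumulates the constraint residual, which is the "integral" feedback that gives proportional-integral methods their name.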
A constrained optimization problem is primal infeasible if its constraints cannot be satisfied, and dual infeasible if the constraints of its dual problem cannot be satisfied. We propose a novel iterative method, named proportional-integral projected
This paper studies the distributed optimization problem where the objective functions might be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, we focus on the case when the exact subgradients of th
In this paper, we propose a relaxation to the stochastic ruler method originally described by Yan and Mukai in 1992 for asymptotically determining the global optima of discrete simulation optimization problems. We show that our proposed variant of th
Distributed optimization is concerned with using local computation and communication to realize a global aim of optimizing the sum of local objective functions. It has gained wide attention for a variety of applications in networked systems. This pap
This paper considers the problem of designing accelerated gradient-based algorithms for optimization and saddle-point problems. The class of objective functions is defined by a generalized sector condition. This class of functions contains strongly c