We propose a new primal-dual homotopy smoothing algorithm for a linearly constrained convex program, where neither the primal nor the dual function has to be smooth or strongly convex. The best known iteration complexity for solving such a non-smooth problem is $\mathcal{O}(\varepsilon^{-1})$. In this paper, we show that by leveraging a local error bound condition on the dual function, the proposed algorithm can achieve a better primal convergence time of $\mathcal{O}\left(\varepsilon^{-2/(2+\beta)}\log_2(\varepsilon^{-1})\right)$, where $\beta\in(0,1]$ is a local error bound parameter. As an example application of the general algorithm, we show that the distributed geometric median problem, which can be formulated as a constrained convex program, has a non-smooth dual function that nevertheless satisfies the aforementioned local error bound condition with $\beta=1/2$, and therefore enjoys a convergence time of $\mathcal{O}\left(\varepsilon^{-4/5}\log_2(\varepsilon^{-1})\right)$. This result improves upon the $\mathcal{O}(\varepsilon^{-1})$ convergence time bound achieved by existing distributed optimization algorithms. Simulation experiments also demonstrate the performance of our proposed algorithm.
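To make the constrained formulation concrete, the sketch below writes the distributed geometric median problem as minimizing $\sum_i \|x_i - b_i\|_2$ over local copies $x_i$ tied together by linear consensus constraints $x_i = z$, and solves it with a generic consensus ADMM baseline. This is only an illustration of the problem setup, not the paper's primal-dual homotopy smoothing algorithm; the function name, the penalty parameter rho, and the synthetic data are assumptions made for the example.

import numpy as np

def consensus_admm_geometric_median(points, rho=1.0, iters=500):
    # Sketch: minimize sum_i ||x_i - b_i||_2 subject to the linear
    # consensus constraints x_i = z, via scaled consensus ADMM.
    # This is a generic baseline, not the paper's homotopy smoothing method.
    b = np.asarray(points, dtype=float)   # data points, shape (n, d)
    n, d = b.shape
    x = b.copy()                          # local primal variables x_i
    z = b.mean(axis=0)                    # global consensus variable
    u = np.zeros((n, d))                  # scaled dual variables

    def block_soft(v, t):
        # Proximal operator of t * ||.||_2 (block soft-thresholding).
        norm = np.linalg.norm(v)
        return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v

    for _ in range(iters):
        # x-update: prox of ||x_i - b_i|| around (z - u_i), agent by agent.
        for i in range(n):
            x[i] = b[i] + block_soft(z - u[i] - b[i], 1.0 / rho)
        # z-update: averaging enforces the consensus constraints.
        z = (x + u).mean(axis=0)
        # dual update on the constraint residuals x_i - z.
        u += x - z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(50, 3))
    print("geometric median estimate:", consensus_admm_geometric_median(pts))

In this sketch each agent only touches its own point $b_i$ in the x-update, while the z- and dual updates couple the agents through the consensus constraints; the non-smoothness of the per-agent objective $\|x_i - b_i\|_2$ is handled exactly by the block soft-thresholding prox.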