
On starting and stopping criteria for nested primal-dual iterations

Posted by Ignace Loris
Publication date: 2018
Language: English





The importance of an adequate inner loop starting point (as opposed to a sufficient inner loop stopping rule) is discussed in the context of a numerical optimization algorithm consisting of nested primal-dual proximal-gradient iterations. While the number of inner iterations is fixed in advance, convergence of the whole algorithm is still guaranteed by virtue of a warm-start strategy for the inner loop, showing that inner loop starting rules can be just as effective as stopping rules for guaranteeing convergence. The algorithm itself is applicable to the numerical solution of convex optimization problems defined by the sum of a differentiable term and two possibly non-differentiable terms. One of the latter terms should take the form of the composition of a linear map and a proximable function, while the differentiable term needs an accessible gradient. The algorithm reduces to the classical proximal gradient algorithm in certain special cases and it also generalizes other existing algorithms. In addition, under some conditions of strong convexity, we show a linear rate of convergence.
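As an illustration of the nested structure described above, the following Python sketch arranges an outer proximal-gradient step on the differentiable term around a fixed number of warm-started inner primal-dual iterations. The function names (`grad_f`, `prox_g`, `prox_hconj`), step sizes and update formulas are illustrative assumptions for this sketch, not the paper's exact scheme.

```python
import numpy as np

def nested_primal_dual(grad_f, prox_g, prox_hconj, A, x0,
                       alpha, sigma, tau, n_outer=200, n_inner=5):
    """Schematic nested scheme for min_x f(x) + g(x) + h(A x).

    Outer loop: forward (gradient) step on the differentiable term f.
    Inner loop: a fixed number of primal-dual iterations approximating the
    proximity operator of g + h(A .); the dual variable y is carried over
    between outer iterations (warm start) instead of being reset.
    """
    x = x0.copy()
    y = np.zeros(A.shape[0])                 # warm-started dual variable
    for _ in range(n_outer):
        z = x - alpha * grad_f(x)            # gradient step on f
        u, u_bar = x.copy(), x.copy()
        for _ in range(n_inner):             # fixed inner iteration budget
            y = prox_hconj(y + sigma * (A @ u_bar), sigma)   # dual update (h*)
            u_new = prox_g(z - tau * (A.T @ y), tau)         # primal update (g)
            u_bar = 2.0 * u_new - u                          # over-relaxation
            u = u_new
        x = u
    return x
```

Here the dual iterate `y` is deliberately never reset, which is the warm-start idea the abstract emphasizes; only the number of inner iterations, not an inner stopping criterion, is prescribed.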




Read also

Kui Zhu, Yutao Tang (2021)
This paper studies the distributed optimization problem where the objective functions might be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, we focus on the case when the exact subgradients of the local objective functions cannot be accessed by the agents. To solve this problem, we propose a projected primal-dual dynamics using only approximate subgradients of the objective functions. We first prove that the formulated optimization problem can only be solved with an approximation error depending upon the accuracy of the available subgradients. Then, we show the exact solvability of this optimization problem if the accumulated approximation error is not too large. After that, we also give a novel componentwise normalized variant to improve the transient behavior of the convergent sequence. The effectiveness of our algorithms is verified by a numerical example.
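A minimal single-agent sketch of such a projected update with an inexact subgradient and a componentwise normalization might look as follows; the box constraint, step size and consensus coupling are hypothetical simplifications for illustration, not the paper's dynamics.

```python
import numpy as np

def projected_step(x, lam, approx_subgrad, neighbor_states, lo, hi,
                   step=1e-2, normalize=True):
    # approx_subgrad returns an inexact subgradient of the local objective.
    g = approx_subgrad(x)
    disagreement = sum(x - xj for xj in neighbor_states)   # consensus error
    if normalize:                                           # componentwise normalization
        g = g / (np.abs(g) + 1e-12)
    x_new = np.clip(x - step * (g + disagreement + lam), lo, hi)  # project onto a box set
    lam_new = lam + step * disagreement                     # dual-type update
    return x_new, lam_new
```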
We describe convergence acceleration schemes for multistep optimization algorithms. The extrapolated solution is written as a nonlinear average of the iterates produced by the original optimization method. Our analysis does not need the underlying fixed-point operator to be symmetric, hence handles e.g. algorithms with momentum terms such as Nesterov's accelerated method, or primal-dual methods. The weights are computed via a simple linear system and we analyze performance in both online and offline modes. We use Crouzeix's conjecture to show that acceleration performance is controlled by the solution of a Chebyshev problem on the numerical range of a non-symmetric operator modeling the behavior of iterates near the optimum. Numerical experiments are detailed on logistic regression problems.
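The weight computation described above (a small regularized linear system built from successive residuals, with weights summing to one) could be sketched as follows; the regularization parameter and the choice of residual matrix are assumptions of this illustration.

```python
import numpy as np

def extrapolate(iterates, reg=1e-8):
    # iterates: list of k+1 iterates x_0, ..., x_k produced by the base method.
    X = np.asarray(iterates)                 # shape (k+1, d)
    R = np.diff(X, axis=0)                   # residuals x_{i+1} - x_i, shape (k, d)
    M = R @ R.T + reg * np.eye(R.shape[0])   # regularized Gram matrix of residuals
    c = np.linalg.solve(M, np.ones(R.shape[0]))
    c /= c.sum()                             # weights constrained to sum to one
    return c @ X[1:]                         # nonlinear average of the iterates
```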
We consider a distributed optimization problem over a network of agents aiming to minimize a global objective function that is the sum of local convex and composite cost functions. To this end, we propose a distributed Chebyshev-accelerated primal-dual algorithm to achieve faster ergodic convergence rates. In standard distributed primal-dual algorithms, the speed of convergence towards a global optimum (i.e., a saddle point of the corresponding Lagrangian function) is directly influenced by the eigenvalues of the Laplacian matrix representing the communication graph. In this paper, we use Chebyshev matrix polynomials to generate gossip matrices whose spectral properties result in faster convergence speeds, while allowing for a fully distributed implementation. As a result, the proposed algorithm requires fewer gradient updates at the cost of additional rounds of communication between agents. We illustrate the performance of the proposed algorithm on a distributed signal recovery problem. Our simulations show how Chebyshev matrix polynomials can improve the convergence speed of a primal-dual algorithm over communication networks, especially in networks with poor spectral properties, by trading local computation for communication rounds.
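One way to apply a Chebyshev matrix polynomial of a gossip matrix W to the stacked agent states is the standard three-term recurrence below; the spectral interval [lam_min, lam_max] and degree K are assumed inputs of this sketch, and each multiplication by W corresponds to one communication round.

```python
import numpy as np

def chebyshev_of_gossip(W, x, K, lam_min, lam_max):
    # Evaluates T_K((W - a I) / b) applied to x via the Chebyshev recurrence
    # T_{k+1}(t) = 2 t T_k(t) - T_{k-1}(t), with [lam_min, lam_max] mapped
    # to [-1, 1].  Each W @ (.) is one round of neighbor-to-neighbor gossip.
    a = (lam_max + lam_min) / 2.0
    b = (lam_max - lam_min) / 2.0
    t_prev = x.copy()
    t_curr = ((W @ x) - a * x) / b
    for _ in range(K - 1):
        t_next = 2.0 * ((W @ t_curr) - a * t_curr) / b - t_prev
        t_prev, t_curr = t_curr, t_next
    return t_curr if K >= 1 else t_prev
```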
Hao Luo (2021)
We introduce a novel primal-dual flow for affine constrained convex optimization problems. As a modification of the standard saddle-point system, our primal-dual flow is proved to possess an exponential decay property in terms of a tailored Lyapunov function. A class of primal-dual methods for the original optimization problem is then obtained from numerical discretizations of the continuous flow, and nonergodic convergence rates are established with a unified discrete Lyapunov function. Among those algorithms, we can recover the (linearized) augmented Lagrangian method and the quadratic penalty method with continuation technique. New methods with a special inner problem, namely a linear symmetric positive definite system or a nonlinear equation that may be solved efficiently via the semi-smooth Newton method, are proposed as well. In particular, numerical tests on linearly constrained $l_1$-$l_2$ minimization show that our method outperforms the accelerated linearized Bregman method.
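For the affine-constrained problem min_x f(x) subject to A x = b, an explicit-Euler discretization of a (standard, augmented) primal-dual flow can be sketched as below; the paper's modified flow, step-size rules and Lyapunov analysis are not reproduced here, and `beta` is an illustrative penalty parameter.

```python
import numpy as np

def primal_dual_flow_step(grad_f, A, b, x, lam, dt=1e-2, beta=1.0):
    # One explicit-Euler step of an augmented primal-dual flow:
    #   x' = -(grad f(x) + A^T (lam + beta (A x - b))),   lam' = A x - b.
    r = A @ x - b                                        # constraint residual
    x_new = x - dt * (grad_f(x) + A.T @ (lam + beta * r))   # primal descent step
    lam_new = lam + dt * (A @ x_new - b)                 # semi-implicit dual ascent step
    return x_new, lam_new
```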
This paper investigates accelerating the convergence of distributed optimization algorithms on non-convex problems. We propose a distributed primal-dual stochastic gradient descent (SGD) method equipped with the powerball technique to accelerate convergence. We show that the proposed algorithm achieves the linear-speedup convergence rate $\mathcal{O}(1/\sqrt{nT})$ for general smooth (possibly non-convex) cost functions. We demonstrate the efficiency of the algorithm through numerical experiments by training two-layer fully connected neural networks and convolutional neural networks on the MNIST dataset, comparing with state-of-the-art distributed SGD algorithms and centralized SGD algorithms.
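The powerball transform itself is a simple elementwise nonlinearity applied to the (stochastic) gradient before the update; a single-node sketch, with the distributed primal-dual coupling omitted, might read as follows.

```python
import numpy as np

def powerball(g, gamma=0.6):
    # Elementwise sign(g) * |g|**gamma with gamma in (0, 1).
    return np.sign(g) * np.abs(g) ** gamma

def powerball_sgd_step(x, stoch_grad, lr=1e-2, gamma=0.6):
    # Plain SGD step with the powerball nonlinearity applied to the stochastic gradient.
    return x - lr * powerball(stoch_grad(x), gamma)
```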