
Adaptive Uncertainty-Weighted ADMM for Distributed Optimization

 Added by Jianping Ye
Publication date: 2021
Language: English





We present AUQ-ADMM, an adaptive uncertainty-weighted consensus ADMM method for solving large-scale convex optimization problems in a distributed manner. Our key contribution is a novel adaptive weighting scheme that empirically increases the progress made by the consensus ADMM scheme and is attractive when a large number of subproblems is used. The weights are related to the uncertainty associated with the solutions of each subproblem, and are efficiently computed using low-rank approximations. We show that AUQ-ADMM provably converges and demonstrate its effectiveness on a series of machine learning applications, including elastic net regression, multinomial logistic regression, and support vector machines. We provide an implementation based on the PyTorch package.
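The paper's update rules and the low-rank uncertainty estimates are not reproduced on this page, but the following minimal sketch of weighted consensus ADMM on a toy distributed least-squares problem shows where such weights enter: each worker's penalty parameter is scaled by a per-worker weight, and the consensus step becomes a weighted average. The problem instance, the fixed weights, and all names are illustrative stand-ins, not the authors' AUQ-ADMM implementation.

```python
# Sketch of weighted consensus ADMM on a toy distributed least-squares problem.
# The fixed weights `w` stand in for AUQ-ADMM's uncertainty-based weights,
# whose low-rank computation is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n_workers, d, m = 4, 10, 50
A = [rng.standard_normal((m, d)) for _ in range(n_workers)]
b = [rng.standard_normal(m) for _ in range(n_workers)]

rho = 1.0
w = np.array([1.0, 0.5, 2.0, 1.0])   # hypothetical fixed weights; AUQ-ADMM would
                                     # adapt these from per-subproblem uncertainty
x = [np.zeros(d) for _ in range(n_workers)]
u = [np.zeros(d) for _ in range(n_workers)]
z = np.zeros(d)

for it in range(100):
    # local updates: each worker solves its ridge-regularized subproblem
    for i in range(n_workers):
        rho_i = rho * w[i]
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho_i * np.eye(d),
                               A[i].T @ b[i] + rho_i * (z - u[i]))
    # weighted consensus step: heavier-weighted workers pull z harder
    z = sum(rho * w[i] * (x[i] + u[i]) for i in range(n_workers)) / (rho * w.sum())
    # scaled dual updates
    for i in range(n_workers):
        u[i] = u[i] + x[i] - z

print("max consensus gap:", max(np.linalg.norm(xi - z) for xi in x))
```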



Related research

Jongho Park, 2020
Based on an observation that additive Schwarz methods for general convex optimization can be interpreted as gradient methods, we propose an acceleration scheme for additive Schwarz methods. Adopting acceleration techniques developed for gradient methods such as momentum and adaptive restarting, the convergence rate of additive Schwarz methods is greatly improved. The proposed acceleration scheme does not require any a priori information on the levels of smoothness and sharpness of a target energy functional, so that it can be applied to various convex optimization problems. Numerical results for linear elliptic problems, nonlinear elliptic problems, nonsmooth problems, and nonsharp problems are provided to highlight the superiority and the broad applicability of the proposed scheme.
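As a rough illustration of the two acceleration ingredients named above, the sketch below wraps a plain gradient step in Nesterov-style momentum with gradient-based adaptive restarting on a convex quadratic. In the paper the underlying step is the additive Schwarz update rather than an ordinary gradient, so this only demonstrates the acceleration wrapper, under illustrative data and step sizes.

```python
# Nesterov-style momentum with adaptive (gradient-based) restarting, applied to
# a plain gradient step on a convex quadratic.  Only the acceleration wrapper is
# shown; the Schwarz subdomain solves of the paper are not reproduced.
import numpy as np

rng = np.random.default_rng(1)
Q = rng.standard_normal((20, 20))
H = Q.T @ Q + 0.1 * np.eye(20)          # SPD Hessian of f(x) = 0.5 x'Hx - c'x
c = rng.standard_normal(20)
grad = lambda x: H @ x - c

step = 1.0 / np.linalg.norm(H, 2)       # 1/L step size
x = np.zeros(20)
y = x.copy()
t = 1.0
for _ in range(500):
    gy = grad(y)
    x_new = y - step * gy                              # gradient step at extrapolated point
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y_new = x_new + (t - 1.0) / t_new * (x_new - x)    # momentum extrapolation
    if gy @ (x_new - x) > 0:                           # adaptive restart: drop momentum
        t_new, y_new = 1.0, x_new                      # when it opposes descent
    x, y, t = x_new, y_new, t_new

print("gradient norm after acceleration:", np.linalg.norm(grad(x)))
```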
The alternating direction method of multipliers (ADMM) is a popular approach for solving optimization problems that are potentially non-smooth and with hard constraints. It has been applied to various computer graphics applications, including physical simulation, geometry processing, and image processing. However, ADMM can take a long time to converge to a solution of high accuracy. Moreover, many computer graphics tasks involve non-convex optimization, and there is often no convergence guarantee for ADMM on such problems since it was originally designed for convex optimization. In this paper, we propose a method to speed up ADMM using Anderson acceleration, an established technique for accelerating fixed-point iterations. We show that in the general case, ADMM is a fixed-point iteration of the second primal variable and the dual variable, and Anderson acceleration can be directly applied. Additionally, when the problem has a separable target function and satisfies certain conditions, ADMM becomes a fixed-point iteration of only one variable, which further reduces the computational overhead of Anderson acceleration. Moreover, we analyze a particular non-convex problem structure that is common in computer graphics, and prove the convergence of ADMM on such problems under mild assumptions. We apply our acceleration technique on a variety of optimization problems in computer graphics, with notable improvement on their convergence speed.
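The core device referenced above, Anderson acceleration of a fixed-point map, can be sketched generically as follows; the paper's contribution lies in identifying which ADMM variables form such a map and in the non-convex analysis, neither of which is reproduced here. The window size, the contractive test map, and all names are illustrative.

```python
# Type-II Anderson acceleration of a generic fixed-point iteration x <- g(x).
# In the paper, g is one ADMM sweep in the second primal and the dual variable;
# here g is an arbitrary contractive affine map used only for illustration.
import numpy as np

def anderson(g, x0, m=5, iters=60):
    x = x0.copy()
    G_hist, F_hist = [], []                    # histories of g(x) and residuals
    for _ in range(iters):
        gx = g(x)
        f = gx - x                             # fixed-point residual
        G_hist.append(gx); F_hist.append(f)
        G_hist, F_hist = G_hist[-(m + 1):], F_hist[-(m + 1):]
        if len(F_hist) > 1:
            dF = np.column_stack([F_hist[j + 1] - F_hist[j] for j in range(len(F_hist) - 1)])
            dG = np.column_stack([G_hist[j + 1] - G_hist[j] for j in range(len(G_hist) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]   # least-squares mixing weights
            x = gx - dG @ gamma                # Anderson-extrapolated iterate
        else:
            x = gx                             # plain fixed-point step at the start
    return x

# illustrative contraction: the fixed point x* solves (I - M) x* = b, ||M||_2 = 0.9
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
M = 0.9 * A / np.linalg.norm(A, 2)
b = rng.standard_normal(30)
g = lambda x: M @ x + b

x = anderson(g, np.zeros(30))
print("fixed-point residual:", np.linalg.norm(g(x) - x))
```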
Large scale, non-convex optimization problems arising in many complex networks such as the power system call for efficient and scalable distributed optimization algorithms. Existing distributed methods are usually iterative and require synchronization of all workers at each iteration, which is hard to scale and could result in the under-utilization of computation resources due to the heterogeneity of the subproblems. To address those limitations of synchronous schemes, this paper proposes an asynchronous distributed optimization method based on the Alternating Direction Method of Multipliers (ADMM) for non-convex optimization. The proposed method only requires local communications and allows each worker to perform local updates with information from a subset of, but not all, neighbors. We provide sufficient conditions on the problem formulation, the choice of algorithm parameters, and the network delay, and show that under those mild conditions, the proposed asynchronous ADMM method asymptotically converges to the KKT point of the non-convex problem. We validate the effectiveness of asynchronous ADMM by applying it to the Optimal Power Flow problem in multiple power systems and show that the convergence of the proposed asynchronous scheme could be faster than its synchronous counterpart in large-scale applications.
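The asynchronous update pattern can be illustrated with a toy serial simulation in which only a random subset of workers refreshes its local and dual variables each round while the others keep stale iterates. The convex least-squares objective, the activation probability, and the aggregation rule below are illustrative assumptions; the paper's method targets non-convex problems with an explicit communication and delay model.

```python
# Toy serial simulation of partial (asynchronous-style) ADMM updates:
# per round, only a random subset of workers refreshes x_i and u_i,
# while the master aggregates whatever copies it currently holds.
import numpy as np

rng = np.random.default_rng(3)
n_workers, d, m = 8, 5, 20
A = [rng.standard_normal((m, d)) for _ in range(n_workers)]
b = [rng.standard_normal(m) for _ in range(n_workers)]
rho = 1.0

x = [np.zeros(d) for _ in range(n_workers)]
u = [np.zeros(d) for _ in range(n_workers)]
z = np.zeros(d)

for rnd in range(300):
    active = np.flatnonzero(rng.random(n_workers) < 0.5)   # workers that wake up
    for i in active:
        # local solve using the last z this worker has seen
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                               A[i].T @ b[i] + rho * (z - u[i]))
    # master aggregates the latest copies it holds (some of them stale)
    z = np.mean([x[i] + u[i] for i in range(n_workers)], axis=0)
    for i in active:
        u[i] = u[i] + x[i] - z       # dual step only for the workers that updated

print("max consensus gap:", max(np.linalg.norm(xi - z) for xi in x))
```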
Marie Maros, Joakim Jalden, 2016
This paper shows the capability of the alternating direction method of multipliers (ADMM) to track, in a distributed manner, the optimal downlink beamforming solution in a multiple-input single-output (MISO) multi-cell network with a dynamic channel. Each time the channel changes, ADMM is allowed to perform one algorithm iteration. To implement the proposed scheme, the base stations are not required to exchange channel state information (CSI), but they do need to exchange interference values once. We show ADMM's tracking ability in terms of the algorithm's Lyapunov function, given that the primal and dual solutions of the convex optimization problem at hand can be understood as a continuous mapping from the problem's parameters. We show that this holds true even though the problem loses strong convexity when it is made distributed. We then show that these requirements hold for the downlink, and consequently uplink, beamforming case. Numerical examples corroborating the theoretical findings are also provided.
Jianchao Bai, Deren Han, Hao Sun, 2021
In this paper, we develop a symmetric accelerated stochastic Alternating Direction Method of Multipliers (SAS-ADMM) for solving separable convex optimization problems with linear constraints. The objective function is the sum of a possibly nonsmooth convex function and an average function of many smooth convex functions. Our proposed algorithm combines both ideas of ADMM and the techniques of accelerated stochastic gradient methods using variance reduction to solve the smooth subproblem. One main feature of SAS-ADMM is that its dual variable is symmetrically updated after each update of the separated primal variable, which allows a more flexible and larger convergence region of the dual variable compared with that of standard deterministic or stochastic ADMM. This new stochastic optimization algorithm is shown to converge in expectation with an $\mathcal{O}(1/T)$ convergence rate, where $T$ is the number of outer iterations. In addition, 3-block extensions of the algorithm and its variant of an accelerated stochastic augmented Lagrangian method are also discussed. Our preliminary numerical experiments indicate that the proposed algorithm is very effective for solving separable optimization problems from big-data applications.
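The distinguishing structural feature, the symmetric dual update, can be sketched on a plain lasso problem as below. The subproblems are solved exactly here, so the variance-reduced stochastic treatment of the smooth block in SAS-ADMM is omitted, and the data, penalty, and dual step factor are illustrative.

```python
# Symmetric dual update pattern: the multiplier is refreshed both between and
# after the two primal block updates.  Exact subproblem solves on a lasso
# problem; the stochastic variance-reduced smooth block of SAS-ADMM is omitted.
import numpy as np

rng = np.random.default_rng(5)
m, d, lam, rho, s = 60, 20, 0.1, 1.0, 0.9     # s is the dual step factor
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x = np.zeros(d)
y = np.zeros(d)
mu = np.zeros(d)
AtA, Atb = A.T @ A, A.T @ b

for k in range(200):
    # smooth block (exact ridge solve here; SVRG-type steps in the stochastic method)
    x = np.linalg.solve(AtA + rho * np.eye(d), Atb + rho * y - mu)
    mu = mu + s * rho * (x - y)               # first (intermediate) dual update
    # nonsmooth block: soft-thresholding
    y = soft(x + mu / rho, lam / rho)
    mu = mu + s * rho * (x - y)               # second dual update

print("constraint residual ||x - y|| =", np.linalg.norm(x - y))
```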