
Convergence for nonconvex ADMM, with applications to CT imaging

Added by Rina Barber
Publication date: 2020
Research language: English





The alternating direction method of multipliers (ADMM) algorithm is a powerful and flexible tool for complex optimization problems of the form $\min\{f(x)+g(y) : Ax+By=c\}$. ADMM exhibits robust empirical performance across a range of challenging settings including nonsmoothness and nonconvexity of the objective functions $f$ and $g$, and provides a simple and natural approach to the inverse problem of image reconstruction for computed tomography (CT) imaging. From the theoretical point of view, existing results for convergence in the nonconvex setting generally assume smoothness in at least one of the component functions in the objective. In this work, our new theoretical results provide convergence guarantees under a restricted strong convexity assumption without requiring smoothness or differentiability, while still allowing differentiable terms to be treated approximately if needed. We validate these theoretical results empirically, with a simulated example where both $f$ and $g$ are nondifferentiable -- and thus outside the scope of existing theory -- as well as a simulated CT image reconstruction problem.
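To make the template $\min\{f(x)+g(y) : Ax+By=c\}$ concrete, the following is a minimal sketch of the scaled-form ADMM iteration on a toy instance with $f(x)=\tfrac12\|x-b\|_2^2$, $g(y)=\lambda\|y\|_1$, and constraint $x-y=0$ (i.e. $A=I$, $B=-I$, $c=0$); the variable names and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_toy(b, lam=0.5, rho=1.0, n_iter=200):
    """Scaled-form ADMM for: min 0.5*||x - b||^2 + lam*||y||_1  s.t.  x - y = 0."""
    x = np.zeros_like(b)
    y = np.zeros_like(b)
    u = np.zeros_like(b)  # scaled dual variable (multiplier / rho)
    for _ in range(n_iter):
        # x-update: argmin 0.5*||x - b||^2 + (rho/2)*||x - y + u||^2 (closed form)
        x = (b + rho * (y - u)) / (1.0 + rho)
        # y-update: argmin lam*||y||_1 + (rho/2)*||x - y + u||^2 (soft-thresholding)
        y = soft_threshold(x + u, lam / rho)
        # dual ascent on the residual of the constraint x - y = 0
        u = u + (x - y)
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = rng.normal(size=50)
    x, y = admm_toy(b)
    print("constraint residual:", np.linalg.norm(x - y))
```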



Related research

Distributed optimization is an attractive methodology for solving large-scale problems effectively in a localized and coordinated manner, and distributed model predictive control (DMPC) has become a promising approach to decision-making tasks for multi-agent systems. However, typical deployments of such distributed MPC frameworks involve nonlinear processes with a large number of nonconvex constraints. To address this problem, this work presents a hierarchical three-block alternating direction method of multipliers (ADMM) approach for the nonconvex cooperative DMPC problem in multi-agent systems. First, an additional slack variable is introduced to transform the original large-scale nonconvex optimization problem. Then, a hierarchical ADMM approach, with an outer loop iteration given by the augmented Lagrangian method (ALM) and an inner loop iteration given by a three-block semi-proximal ADMM, is used to solve the resulting transformed nonconvex optimization problem. It is shown analytically that the algorithm converges to the requisite stationary point. Finally, an approximate optimization stage with a barrier method is applied to further improve computational efficiency, yielding the final improved hierarchical ADMM. The effectiveness of the proposed method in terms of performance and computational efficiency is demonstrated on a cooperative DMPC decision-making problem for multiple unmanned aerial vehicles (UAVs).
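As a rough illustration of the outer-ALM / inner-solver structure described above, here is a generic augmented-Lagrangian outer loop with a placeholder inner solver. The function names `solve_inner` and `h`, the toy instance, and all parameter values are illustrative assumptions; in particular, the three-block semi-proximal inner ADMM of the paper is not implemented here.

```python
import numpy as np

def alm_outer(solve_inner, h, x0, lam0=0.0, rho=1.0, n_outer=25, rho_growth=1.5):
    """Generic augmented Lagrangian (ALM) outer loop for one scalar equality constraint.

    solve_inner(x, lam, rho) is a placeholder for an inner solver (e.g. a
    three-block semi-proximal ADMM) that approximately minimizes the augmented
    Lagrangian  f(x) + lam*h(x) + (rho/2)*h(x)**2  in x; h(x) is the residual
    of the constraint h(x) = 0.
    """
    x, lam = np.array(x0, dtype=float), float(lam0)
    for _ in range(n_outer):
        x = solve_inner(x, lam, rho)       # inner loop: solve the penalized subproblem
        lam = lam + rho * h(x)             # multiplier (dual) update
        rho = min(rho * rho_growth, 1e6)   # optionally tighten the penalty
    return x, lam

if __name__ == "__main__":
    # Toy instance: min ||x - b||^2  s.t.  sum(x) = 1, with a closed-form inner solve.
    b = np.array([0.3, 0.9, -0.2, 0.5])
    n = b.size

    def solve_inner(x, lam, rho):
        s = (b.sum() - n * lam / 2 + n * rho / 2) / (1 + n * rho / 2)
        return b - (lam + rho * (s - 1)) / 2

    def h(x):
        return x.sum() - 1.0

    x, lam = alm_outer(solve_inner, h, np.zeros_like(b))
    print(x, "constraint residual:", h(x))
```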
The alternating direction method of multipliers (ADMM) is widely used in computer graphics for solving optimization problems that can be nonsmooth and nonconvex. It converges quickly to an approximate solution, but can take a long time to converge to a high-accuracy solution. Previously, Anderson acceleration has been applied to ADMM by treating it as a fixed-point iteration for the concatenation of the dual variables and a subset of the primal variables. In this paper, we note that the equivalence between ADMM and Douglas-Rachford splitting reveals that ADMM is in fact a fixed-point iteration in a lower-dimensional space. By applying Anderson acceleration to this lower-dimensional fixed-point iteration, we obtain a more effective approach for accelerating ADMM. We analyze the convergence of the proposed acceleration method on nonconvex problems, and verify its effectiveness on a variety of computer graphics problems including geometry processing and physical simulation.
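For reference, below is a minimal sketch of generic (type-II) Anderson acceleration applied to an abstract fixed-point map $g$; it illustrates only the extrapolation step and is not the paper's lower-dimensional Douglas-Rachford reformulation. The memory size `m`, the regularization, and the cosine fixed-point demo are illustrative assumptions.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, n_iter=50, reg=1e-10):
    """Type-II Anderson acceleration of the fixed-point iteration x <- g(x).

    Keeps the last m residual/value pairs and extrapolates the next iterate by
    solving a small regularized least-squares problem over residual differences.
    """
    x = np.array(x0, dtype=float)
    X_hist, G_hist = [], []       # recent iterates x_k and map values g(x_k)
    for _ in range(n_iter):
        gx = g(x)
        X_hist.append(x.copy())
        G_hist.append(gx.copy())
        if len(X_hist) > m + 1:   # keep at most m+1 pairs
            X_hist.pop(0)
            G_hist.pop(0)
        if len(X_hist) == 1:      # not enough history yet: plain fixed-point step
            x = gx
            continue
        F = np.column_stack([gi - xi for gi, xi in zip(G_hist, X_hist)])  # residuals
        G = np.column_stack(G_hist)
        dF, dG = F[:, 1:] - F[:, :-1], G[:, 1:] - G[:, :-1]
        fk = F[:, -1]
        # gamma minimizes ||fk - dF @ gamma||^2 (with a small Tikhonov term)
        gamma = np.linalg.solve(dF.T @ dF + reg * np.eye(dF.shape[1]), dF.T @ fk)
        x = G_hist[-1] - dG @ gamma      # extrapolated iterate
    return x

if __name__ == "__main__":
    g = np.cos                           # simple contraction; fixed point near 0.739
    x = anderson_accelerate(g, np.array([0.0]), m=3, n_iter=15)
    print(x, "residual:", np.abs(g(x) - x))
```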
Hanbaek Lyu, 2020
Block coordinate descent (BCD), also known as nonlinear Gauss-Seidel, is a simple iterative algorithm for nonconvex optimization that sequentially minimizes the objective function in each block coordinate while the other coordinates are held fixed. We propose a version of BCD that is guaranteed to converge to the stationary points of block-wise convex and differentiable objective functions under constraints. Furthermore, we obtain a best-case rate of convergence of order $\log n/\sqrt{n}$, where $n$ denotes the number of iterations. A key idea is to restrict the parameter search within a diminishing radius to promote stability of iterates, and then to show that such auxiliary constraints vanish in the limit. As an application, we provide a modified alternating least squares algorithm for nonnegative CP tensor factorization that converges to the stationary points of the reconstruction error with the same bound on the best-case rate of convergence. We also experimentally validate our results with both synthetic and real-world data.
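The following is a small sketch of two-block coordinate descent (alternating exact least-squares updates) on a rank-1 matrix factorization, with the diminishing-radius idea imitated by pulling each block update back into a shrinking ball around the previous iterate. The toy problem and the projection step are illustrative assumptions, not the paper's nonnegative CP / modified ALS algorithm.

```python
import numpy as np

def bcd_rank1(A, n_iter=200, radius0=1.0):
    """Two-block coordinate descent for  min_{u,v} ||A - u v^T||_F^2.

    Each block is updated to its exact least-squares minimizer with the other
    block held fixed, then pulled back into a ball of diminishing radius
    around the previous iterate (an illustrative stand-in for a
    diminishing-radius constraint on the block subproblem).
    """
    m, n = A.shape
    rng = np.random.default_rng(0)
    u, v = rng.normal(size=m), rng.normal(size=n)
    for k in range(1, n_iter + 1):
        radius = radius0 / np.sqrt(k)                 # shrinking search radius
        # u-block: exact minimizer with v fixed, then radius restriction
        step = A @ v / (v @ v + 1e-12) - u
        u = u + step * min(1.0, radius / (np.linalg.norm(step) + 1e-12))
        # v-block: exact minimizer with u fixed, then radius restriction
        step = A.T @ u / (u @ u + 1e-12) - v
        v = v + step * min(1.0, radius / (np.linalg.norm(step) + 1e-12))
    return u, v

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = np.outer(rng.normal(size=30), rng.normal(size=20))   # exactly rank-1 target
    u, v = bcd_rank1(A)
    print("relative error:", np.linalg.norm(A - np.outer(u, v)) / np.linalg.norm(A))
```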
The present paper considers leveraging network topology information to improve the convergence rate of ADMM for decentralized optimization, where networked nodes work collaboratively to minimize the objective. Such problems can be solved efficiently using ADMM by decomposing the objective into easier subproblems. Properly exploiting network topology can significantly improve the algorithm's performance. Hybrid ADMM explores the direction of exploiting node information by taking into account node centrality, but fails to utilize edge information. This paper fills the gap by incorporating both node and edge information, and provides a novel convergence rate bound for decentralized ADMM that explicitly depends on network topology. The bound is attainable for a certain class of problems, and is thus tight. The explicit dependence further suggests possible directions for optimally designing edge weights to achieve the best performance. Numerical experiments show that simple heuristic methods can achieve better performance and exhibit robustness to topology changes.
Jianchao Bai, Deren Han, Hao Sun, 2021
In this paper, we develop a symmetric accelerated stochastic Alternating Direction Method of Multipliers (SAS-ADMM) for solving separable convex optimization problems with linear constraints. The objective function is the sum of a possibly nonsmooth convex function and an average function of many smooth convex functions. Our proposed algorithm combines the ideas of ADMM with the techniques of accelerated stochastic gradient methods using variance reduction to solve the smooth subproblem. One main feature of SAS-ADMM is that its dual variable is symmetrically updated after each update of the separated primal variables, which allows a more flexible and larger convergence region of the dual variable compared with that of standard deterministic or stochastic ADMM. This new stochastic optimization algorithm is shown to converge in expectation with an $\mathcal{O}(1/T)$ convergence rate, where $T$ is the number of outer iterations. In addition, 3-block extensions of the algorithm and its variant of an accelerated stochastic augmented Lagrangian method are also discussed. Our preliminary numerical experiments indicate that the proposed algorithm is very effective for solving separable optimization problems from big-data applications.
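To illustrate just the symmetric dual update (the dual variable updated after each primal block), here is a deterministic toy sketch in scaled form on $\min \tfrac12\|x-b\|_2^2 + \lambda\|y\|_1$ subject to $x-y=0$; the stochastic gradient and variance-reduction components of SAS-ADMM are not modeled, and the step factor `tau` and other values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def symmetric_admm_toy(b, lam=0.5, rho=1.0, tau=0.9, n_iter=300):
    """Scaled-form ADMM with a symmetric dual update, on the toy problem
    min 0.5*||x - b||^2 + lam*||y||_1  s.t.  x - y = 0.
    The dual variable u is updated after the x-step and again after the y-step."""
    x = np.zeros_like(b)
    y = np.zeros_like(b)
    u = np.zeros_like(b)                        # scaled dual variable
    for _ in range(n_iter):
        x = (b + rho * (y - u)) / (1.0 + rho)   # x-step (closed form)
        u = u + tau * (x - y)                   # intermediate dual update
        y = soft_threshold(x + u, lam / rho)    # y-step (soft-thresholding)
        u = u + tau * (x - y)                   # second dual update
    return x, y

if __name__ == "__main__":
    b = np.linspace(-1.0, 1.0, 11)
    x, y = symmetric_admm_toy(b)
    print("constraint residual:", np.linalg.norm(x - y))
```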
