
An indefinite-proximal-based strictly contractive Peaceman-Rachford splitting method

Posted by: Bo Jiang
Publication date: 2015
Research language: English





The Peaceman-Rachford splitting method is efficient for minimizing a convex optimization problem with a separable objective function and linear constraints. However, its convergence is not guaranteed without extra requirements. He et al. (SIAM J. Optim. 24: 1011-1040, 2014) proved the convergence of a strictly contractive Peaceman-Rachford splitting method by employing a suitable underdetermined relaxation factor. In this paper, we further extend the so-called strictly contractive Peaceman-Rachford splitting method by using two different relaxation factors. Moreover, motivated by recent advances on ADMM-type methods with indefinite proximal terms, we incorporate an indefinite proximal term into the strictly contractive Peaceman-Rachford splitting method. We show that the proposed indefinite-proximal strictly contractive Peaceman-Rachford splitting method is convergent and also prove its $o(1/t)$ convergence rate in the nonergodic sense. Numerical tests on the $\ell_1$-regularized least squares problem demonstrate the efficiency of the proposed method.
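To make the setting concrete, the following sketch (ours, not the authors' code) applies a strictly contractive Peaceman-Rachford iteration with two relaxation factors $\alpha,\beta\in(0,1)$ to the $\ell_1$-regularized least squares problem $\min_x \tfrac12\|Ax-b\|^2+\mu\|x\|_1$, split as $f(x)+g(z)$ subject to $x=z$. The penalty parameter gamma and the relaxation factors below are illustrative choices, and the indefinite proximal term that is the paper's main addition is omitted, so this is only a plain baseline.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sc_prsm_l1_ls(A, b, mu, gamma=1.0, alpha=0.9, beta=0.9, iters=500):
    """Strictly contractive PRSM (two relaxation factors, scaled dual form) for
        min_x 0.5*||A x - b||^2 + mu*||x||_1,
    split as f(x) = 0.5*||A x - b||^2, g(z) = mu*||z||_1, s.t. x - z = 0.
    Illustrative parameters; the paper's indefinite proximal term is not included."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u is the scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    # Pre-factorize the x-subproblem matrix (A^T A + gamma I)
    L = np.linalg.cholesky(AtA + gamma * np.eye(n))
    for _ in range(iters):
        # x-step: (A^T A + gamma I) x = A^T b + gamma (z - u)
        rhs = Atb + gamma * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # first (relaxed) dual update
        u = u + alpha * (x - z)
        # z-step: soft-thresholding, the proximal map of mu*||.||_1 / gamma
        z = soft_threshold(x + u, mu / gamma)
        # second (relaxed) dual update
        u = u + beta * (x - z)
    return x
```

Setting alpha = beta = 1 recovers the classic Peaceman-Rachford scheme; taking both factors below 1 is what yields the strict contraction, and the paper further allows the two factors to differ and attaches an indefinite proximal term to one of the subproblems.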




Read also

The last two decades have witnessed increasing interest in the absolute value equations (AVE) of finding $x\in\mathbb{R}^n$ such that $Ax-|x|-b=0$, where $A\in \mathbb{R}^{n\times n}$ and $b\in \mathbb{R}^n$. In this paper, we focus on designing efficient algorithms. To this end, we reformulate the AVE as a generalized linear complementarity problem (GLCP), which, among the equivalent forms, is the most economical one in the sense that it does not increase the dimension of the variables. For solving the GLCP, we propose an inexact Douglas-Rachford splitting method which can adopt a relative error tolerance. As a consequence, in the inner iteration processes, we can employ the LSQR method [C.C. Paige and M.A. Saunders, ACM Trans. Math. Softw. (TOMS), 8 (1982), pp. 43--71] to find a qualified approximate solution for each subproblem, which makes the cost per iteration very low. We prove the convergence of the algorithm and establish its global linear rate of convergence. Comparison results with popular algorithms such as the exact generalized Newton method [O.L. Mangasarian, Optim. Lett., 1 (2007), pp. 3--8], the inexact semi-smooth Newton method [J.Y.B. Cruz, O.P. Ferreira and L.F. Prudente, Comput. Optim. Appl., 65 (2016), pp. 93--108] and the exact SOR-like method [Y.-F. Ke and C.-F. Ma, Appl. Math. Comput., 311 (2017), pp. 195--202] are reported, indicating that the proposed algorithm is very promising. Moreover, our method also extends the range of AVE instances that can be solved numerically; that is, it can deal with not only the case $\|A^{-1}\|<1$ commonly assumed in the existing literature, but also the case where $\|A^{-1}\|=1$.
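For readers who want a concrete reference point, the snippet below is a minimal sketch of ours of the exact generalized Newton iteration for the AVE that the abstract cites as a baseline (Mangasarian, 2007); it is not the inexact Douglas-Rachford splitting method proposed in the paper, and the GLCP reformulation is not reproduced here.

```python
import numpy as np

def ave_generalized_newton(A, b, x0=None, tol=1e-10, max_iter=100):
    """Exact generalized Newton baseline for the absolute value equation
        A x - |x| - b = 0,
    iterating (A - D(x^k)) x^{k+1} = b with D(x) = diag(sign(x))."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(max_iter):
        D = np.diag(np.sign(x))                 # generalized Jacobian of |x|
        x_new = np.linalg.solve(A - D, b)       # one generalized Newton step
        if np.linalg.norm(A @ x_new - np.abs(x_new) - b) <= tol:
            return x_new
        x = x_new
    return x
```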
Many large-scale and distributed optimization problems can be brought into a composite form in which the objective function is given by the sum of a smooth term and a nonsmooth regularizer. Such problems can be solved via a proximal gradient method and its variants, thereby generalizing gradient descent to a nonsmooth setup. In this paper, we view proximal algorithms as dynamical systems and leverage techniques from control theory to study their global properties. In particular, for problems with strongly convex objective functions, we utilize the theory of integral quadratic constraints to prove the global exponential stability of the equilibrium points of the differential equations that govern the evolution of proximal gradient and Douglas-Rachford splitting flows. In our analysis, we use the fact that these algorithms can be interpreted as variable-metric gradient methods on suitable envelopes, and we exploit structural properties of the nonlinear terms that arise from the gradient of the smooth part of the objective function and the proximal operator associated with the nonsmooth regularizer. We also demonstrate that these envelopes can be obtained from the augmented Lagrangian associated with the original nonsmooth problem and establish conditions for global exponential convergence even in the absence of strong convexity.
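As a discrete-time counterpart of the proximal gradient flow analyzed above, here is a minimal proximal gradient (forward-backward) sketch of ours for a composite objective $f(x)+g(x)$; the gradient of the smooth term and the proximal operator of the regularizer are supplied by the caller, and the IQC-based stability analysis itself is not reproduced.

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, step, iters=1000):
    """Proximal gradient method for min_x f(x) + g(x):
        x^{k+1} = prox_{step*g}( x^k - step * grad_f(x^k) ),
    where prox_g(v, t) = argmin_u g(u) + ||u - v||^2 / (2t)."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Example: f(x) = 0.5*||A x - b||^2 and g(x) = lam*||x||_1 (prox = soft-thresholding).
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
A = np.random.randn(40, 20); b = np.random.randn(40); lam = 0.1
x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b),
                          lambda v, t: soft(v, lam * t),
                          np.zeros(20),
                          step=1.0 / np.linalg.norm(A, 2) ** 2)   # step = 1/L
```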
The alternating direction method of multipliers (ADMM) is widely used in computer graphics for solving optimization problems that can be nonsmooth and nonconvex. It converges quickly to an approximate solution, but can take a long time to converge to a solution of high accuracy. Previously, Anderson acceleration has been applied to ADMM by treating it as a fixed-point iteration for the concatenation of the dual variables and a subset of the primal variables. In this paper, we note that the equivalence between ADMM and Douglas-Rachford splitting reveals that ADMM is in fact a fixed-point iteration in a lower-dimensional space. By applying Anderson acceleration to this lower-dimensional fixed-point iteration, we obtain a more effective approach for accelerating ADMM. We analyze the convergence of the proposed acceleration method on nonconvex problems and verify its effectiveness on a variety of computer graphics problems, including geometry processing and physical simulation.
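To illustrate the mechanism, below is a generic type-II Anderson acceleration sketch of ours for a fixed-point map $x=g(x)$; in the paper the map is the lower-dimensional Douglas-Rachford/ADMM fixed-point iteration, whereas here `g` is any map supplied by the caller, and the regularization and safeguarding steps needed for nonconvex problems are omitted.

```python
import numpy as np

def anderson_accelerated_fixed_point(g, x0, m=5, iters=200, tol=1e-10):
    """Type-II Anderson acceleration of the fixed-point iteration x = g(x).
    Stores the last m differences of residuals/values and mixes them through a
    small least-squares solve (no regularization or safeguarding in this sketch)."""
    x = x0.copy()
    gx = g(x)
    f = gx - x                                   # current residual
    dG, dF = [], []                              # histories of value/residual differences
    for _ in range(iters):
        if np.linalg.norm(f) <= tol:
            break
        if dF:
            F = np.column_stack(dF)
            G = np.column_stack(dG)
            gamma, *_ = np.linalg.lstsq(F, f, rcond=None)   # mixing coefficients
            x_new = gx - G @ gamma               # Anderson-mixed update
        else:
            x_new = gx                           # plain fixed-point step at start
        gx_new = g(x_new)
        f_new = gx_new - x_new
        dF.append(f_new - f); dG.append(gx_new - gx)
        if len(dF) > m:
            dF.pop(0); dG.pop(0)
        x, gx, f = x_new, gx_new, f_new
    return x
```

Applying such an accelerator to the lower-dimensional Douglas-Rachford fixed-point map, rather than to the full concatenation of primal and dual variables, is the strategy the abstract describes.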
Tianyi Chen, Tianyu Ding, Bo Ji (2020)
Sparsity-inducing regularization problems are ubiquitous in machine learning applications, ranging from feature selection to model compression. In this paper, we present a novel stochastic method -- the Orthant Based Proximal Stochastic Gradient Method (OBProx-SG) -- to solve perhaps the most popular instance, i.e., the $\ell_1$-regularized problem. The OBProx-SG method contains two steps: (i) a proximal stochastic gradient step to predict a support cover of the solution; and (ii) an orthant step to aggressively enhance the sparsity level via orthant face projection. Compared to state-of-the-art methods, e.g., Prox-SG, RDA and Prox-SVRG, OBProx-SG not only converges to the global optimal solutions (in the convex scenario) or to stationary points (in the non-convex scenario), but also promotes the sparsity of the solutions substantially. In particular, on a large number of convex problems, OBProx-SG comprehensively outperforms the existing methods in terms of sparsity exploration and objective values. Moreover, experiments on non-convex deep neural networks, e.g., MobileNetV1 and ResNet18, further demonstrate its superiority by achieving solutions of much higher sparsity without sacrificing generalization accuracy.
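The two steps can be sketched roughly as follows; this is our reading of the abstract rather than the authors' implementation, and the schedule for switching between the two steps as well as the exact handling of zero coordinates are simplified.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sg_step(x, stoch_grad, lr, lam):
    """Step (i): proximal stochastic gradient step on f(x) + lam*||x||_1,
    used to predict a support cover of the solution."""
    return soft_threshold(x - lr * stoch_grad, lr * lam)

def orthant_step(x, stoch_grad, lr, lam):
    """Step (ii), sketched: restrict to the orthant face defined by sign(x),
    where the objective is smooth, take a gradient step there and project back
    onto that face by zeroing any coordinate whose sign would flip."""
    s = np.sign(x)
    trial = x - lr * (stoch_grad + lam * s)   # gradient of f + lam*||.||_1 on the face
    trial[s * trial < 0.0] = 0.0              # orthant face projection (promotes sparsity)
    trial[s == 0.0] = 0.0                     # coordinates predicted zero stay zero
    return trial
```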
Lei Yang, Kim-Chuan Toh (2021)
In this paper, we develop an inexact Bregman proximal gradient (iBPG) method based on a novel two-point inexact stopping condition, and establish an iteration complexity of $\mathcal{O}(1/k)$ as well as the convergence of the sequence under some proper conditions. To improve the convergence speed, we further develop an inertial variant of our iBPG (denoted by v-iBPG) and show that it has an iteration complexity of $\mathcal{O}(1/k^{\gamma})$, where $\gamma\geq1$ is a restricted relative smoothness exponent. Thus, when $\gamma>1$, the v-iBPG readily improves the $\mathcal{O}(1/k)$ convergence rate of the iBPG. In addition, for the case of using the squared Euclidean distance as the kernel function, we further develop a new inexact accelerated proximal gradient (iAPG) method, which can circumvent the underlying feasibility difficulty often appearing in existing inexact conditions and inherit all desirable convergence properties of the exact APG under proper summable-error conditions. Finally, we conduct some preliminary numerical experiments for solving a relaxation of the quadratic assignment problem to demonstrate the convergence behaviors of the iBPG, v-iBPG and iAPG under different inexactness settings.
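As a reminder of the underlying building block, here is a small sketch of ours of one exact Bregman proximal gradient step with the entropy kernel and the probability simplex constraint, for which the step has a closed-form multiplicative (mirror-descent) update; the iBPG of the paper instead allows this subproblem to be solved only approximately under its two-point inexact stopping condition, which is not reproduced here.

```python
import numpy as np

def bregman_prox_grad_step_entropy(x, grad, eta):
    """One exact Bregman proximal gradient step
        x+ = argmin_{u in simplex} <grad, u> + (1/eta) * D_h(u, x),
    with kernel h(x) = sum_i x_i log x_i, so D_h is the KL divergence and the
    minimizer is the multiplicative (mirror-descent) update."""
    w = x * np.exp(-eta * grad)
    return w / w.sum()
```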
