
The Projected Polar Proximal Point Algorithm Converges Globally

Posted by: Scott Lindstrom
Publication date: 2021
Language: English





Friedlander, Macêdo, and Pong recently introduced the projected polar proximal point algorithm (P4A) for solving optimization problems by using the closed perspective transforms of convex objectives. We analyse a generalization (GP4A) which replaces the closed perspective transform with a more general closed gauge. We decompose GP4A into the iterative application of two separate operators, and analyse it as a splitting method. By showing that GP4A and its under-relaxations exhibit global convergence whenever a fixed point exists, we obtain convergence guarantees for P4A by letting the gauge specify to the closed perspective transform for a convex function. We then provide easy-to-verify sufficient conditions for the existence of fixed points for the GP4A, using the Minkowski function representation of the gauge. Conveniently, the approach reveals that global minimizers of the objective function for P4A form an exposed face of the dilated fundamental set of the closed perspective transform.
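The splitting viewpoint described above, iterating the composition of two operators with under-relaxation until a fixed point is reached, can be illustrated on a toy problem. This is only an illustrative sketch under assumed data, not the GP4A operators themselves: the two sets and their projections below are hypothetical choices whose composition is nonexpansive with a fixed point.

```python
import numpy as np

# Two closed convex sets in R^2 whose intersection is the origin:
# A = {(t, t)} (the diagonal line) and B = {(t, 0)} (the x-axis).
def proj_A(x):
    # Orthogonal projection onto the diagonal line y = x.
    t = (x[0] + x[1]) / 2.0
    return np.array([t, t])

def proj_B(x):
    # Orthogonal projection onto the x-axis.
    return np.array([x[0], 0.0])

# Under-relaxed fixed-point iteration x+ = (1 - lam)*x + lam*T(x),
# with T = proj_A o proj_B, a nonexpansive composition of two operators.
x = np.array([3.0, -2.0])
lam = 0.5
for _ in range(200):
    x = (1 - lam) * x + lam * proj_A(proj_B(x))

print(np.allclose(x, [0.0, 0.0], atol=1e-6))  # True: converged to the fixed point
```

Because the composed operator is nonexpansive and has a fixed point (the origin), the under-relaxed iteration converges globally from any starting point, mirroring the structure of the convergence argument sketched in the abstract.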


Read also

We introduce and investigate a new generalized convexity notion for functions called prox-convexity. The proximity operator of such a function is single-valued and firmly nonexpansive. We provide examples of (strongly) quasiconvex, weakly convex, and DC (difference of convex) functions that are prox-convex; however, none of these classes fully contains the class of prox-convex functions, nor is it contained in any of them. We show that the classical proximal point algorithm remains convergent when the convexity of the proper lower semicontinuous function to be minimized is relaxed to prox-convexity.
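The classical proximal point iteration mentioned above, x_{k+1} = prox_{λf}(x_k), can be sketched numerically. As an illustrative assumption we take f(x) = |x|, a simple convex function whose proximity operator is the soft-thresholding map; this is a toy choice, not an example from the paper.

```python
# Classical proximal point iteration x_{k+1} = prox_{lam*f}(x_k)
# for f(x) = |x|. Its proximity operator is soft-thresholding:
# prox_{lam*|.|}(x) = sign(x) * max(|x| - lam, 0).
def prox_abs(x, lam):
    s = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
    return s * max(abs(x) - lam, 0.0)

x, lam = 5.0, 0.4
for _ in range(50):
    x = prox_abs(x, lam)

print(x)  # 0.0: the iterates reach the global minimizer of |x|
```

Each step shrinks |x| by at most lam until the minimizer is reached, after which the iterate is a fixed point of the proximity operator.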
In this paper, we develop a parameterized proximal point algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is provably globally convergent with a worst-case $\mathcal{O}(1/t)$ convergence rate, where $t$ denotes the iteration number. By properly choosing the algorithm parameters, numerical experiments on solving a sparse optimization problem arising from statistical learning show that our P-PPA can perform significantly better than other state-of-the-art methods, such as the alternating direction method of multipliers and the relaxed proximal point algorithm.
In the literature, little research has addressed the design of parameters in the Proximal Point Algorithm (PPA), especially for multi-objective convex optimization. Introducing parameters into PPA can make it more flexible and attractive. Mainly motivated by our recent work (Bai et al., A parameterized proximal point algorithm for separable convex optimization, Optim. Lett. (2017), doi: 10.1007/s11590-017-1195-9), in this paper we develop a general parameterized PPA with a relaxation step for solving multi-block separable structured convex programming. By making use of the variational inequality and some mathematical identities, the global convergence and the worst-case $\mathcal{O}(1/t)$ convergence rate of the proposed algorithm are established. Preliminary numerical experiments on solving a sparse matrix minimization problem from statistical learning validate that our algorithm is more efficient than several state-of-the-art algorithms.
Lei Yang, Kim-Chuan Toh (2021)
We study a general convex optimization problem, which covers various classic problems in different areas and particularly includes many optimal transport related problems arising in recent years. To solve this problem, we revisit the classic Bregman proximal point algorithm (BPPA) and introduce a new inexact stopping condition for solving the subproblems, which can circumvent the underlying feasibility difficulty often appearing in existing inexact conditions when the problem has a complex feasible set. Our inexact condition also covers several existing inexact conditions and hence, as a byproduct, we actually develop a certain unified inexact framework for BPPA. This makes our inexact BPPA (iBPPA) more flexible to fit different scenarios in practice. In particular, as an application to the standard optimal transport (OT) problem, our iBPPA with the entropic proximal term can bypass some numerical instability issues that usually plague the well-recognized entropic regularization approach in the OT community, since our iBPPA does not require the proximal parameter to be very small for obtaining an accurate approximate solution. The iteration complexity of $\mathcal{O}(1/k)$ and the convergence of the sequence are also established for our iBPPA under some mild conditions. Moreover, inspired by Nesterov's acceleration technique, we develop a variant of our iBPPA, denoted by V-iBPPA, and establish the iteration complexity of $\mathcal{O}(1/k^{\lambda})$, where $\lambda\geq 1$ is a quadrangle scaling exponent of the kernel function. Some preliminary numerical experiments for solving the standard OT problem are conducted to show the convergence behaviors of our iBPPA and V-iBPPA under different inexactness settings. The experiments also empirically verify the potential of our V-iBPPA in improving the convergence speed.
D. Leventhal (2009)
We examine the linear convergence rates of variants of the proximal point method for finding zeros of maximal monotone operators. We begin by showing how metric subregularity is sufficient for linear convergence to a zero of a maximal monotone operator. This result is then generalized to obtain convergence rates for the problem of finding a common zero of multiple monotone operators by considering randomized and averaged proximal methods.
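The proximal (resolvent) iteration analysed above, x_{k+1} = (I + cA)^{-1}(x_k), can be sketched on a simple linear monotone operator. This is an illustrative assumption, not the paper's setting: we take A(x) = Mx with M symmetric positive definite, whose unique zero is the origin, so the iteration converges linearly.

```python
import numpy as np

# Proximal point step for a maximal monotone operator A:
# x_{k+1} = (I + c*A)^{-1}(x_k), the resolvent of A.
# Illustrative choice: A(x) = M @ x with M symmetric positive definite.
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])
c = 1.0
resolvent = np.linalg.inv(np.eye(2) + c * M)  # eigenvalues 1/3 and 2/3

x = np.array([4.0, -3.0])
errs = []
for _ in range(60):
    x = resolvent @ x
    errs.append(np.linalg.norm(x))

# Linear convergence: the distance to the zero of A shrinks by a
# constant factor (at most 2/3 here) at every iteration.
print(np.allclose(x, 0.0, atol=1e-6))  # True
```

The contraction factor is governed by the largest eigenvalue of the resolvent, which is exactly the kind of rate quantity that metric subregularity controls in the general (nonlinear) setting.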