
General parameterized proximal point algorithm with applications in statistical learning

Added by: Jianchao Bai
Publication date: 2018
Language: English





In the literature, only a few works have designed parameters for the Proximal Point Algorithm (PPA), especially for multi-objective convex optimization. Introducing parameters into the PPA can make it more flexible and attractive. Motivated mainly by our recent work (Bai et al., A parameterized proximal point algorithm for separable convex optimization, Optim. Lett. (2017), doi: 10.1007/s11590-017-1195-9), in this paper we develop a general parameterized PPA with a relaxation step for solving multi-block separable structured convex programming. Using the variational inequality framework and some mathematical identities, we establish the global convergence and the worst-case $\mathcal{O}(1/t)$ convergence rate of the proposed algorithm. Preliminary numerical experiments on a sparse matrix minimization problem from statistical learning show that our algorithm is more efficient than several state-of-the-art algorithms.
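
To make the prediction-correction structure concrete, below is a minimal sketch of a proximal point iteration with a relaxation step for a single-block problem min f(x). It is only illustrative: the paper's parameterized proximal metric is replaced by the plain Euclidean norm, and the names relaxed_ppa, prox_f, and gamma are ours, not the authors'.

import numpy as np

def relaxed_ppa(prox_f, x0, gamma=1.5, tol=1e-8, max_iter=1000):
    # Sketch only: prox_f(x) should return argmin_y f(y) + 0.5*||y - x||^2.
    # gamma in (0, 2) is the relaxation factor of the correction step.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_tilde = prox_f(x)                # proximal (prediction) step
        x_new = x - gamma * (x - x_tilde)  # relaxation (correction) step
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = ||x||_1, whose prox is componentwise soft-thresholding.
soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)
print(relaxed_ppa(soft, np.array([5.0, -3.0, 0.2])))  # tends to zero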



Related research

In this paper, we develop a parameterized proximal point algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is provably globally convergent with a worst-case O(1/t) convergence rate, where t denotes the iteration number. By properly choosing the algorithm parameters, numerical experiments on a sparse optimization problem arising from statistical learning show that our P-PPA can perform significantly better than other state-of-the-art methods, such as the alternating direction method of multipliers and the relaxed proximal point algorithm.
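
Worst-case O(1/t) rates of this kind are typically stated for an ergodic average of the iterates through the variational inequality characterization of the solution set. Schematically (the weighting matrix and constant depend on the specific algorithm, so this is not the paper's exact statement):

$$\bar{w}_t = \frac{1}{t}\sum_{k=1}^{t} \tilde{w}^k, \qquad \theta(\bar{u}_t) - \theta(u) + (\bar{w}_t - w)^{\top} F(w) \le \frac{c}{t} \quad \text{for all } w \in \Omega,$$

where $\theta$ is the objective, $w$ collects the primal and dual variables, $F$ is the monotone operator of the associated variational inequality, and $c$ depends on the distance from the starting point to the solution set.
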
Scott B. Lindstrom (2021)
Friedlander, Macêdo, and Pong recently introduced the projected polar proximal point algorithm (P4A) for solving optimization problems by using the closed perspective transforms of convex objectives. We analyse a generalization (GP4A) that replaces the closed perspective transform with a more general closed gauge. We decompose GP4A into the iterative application of two separate operators and analyse it as a splitting method. By showing that GP4A and its under-relaxations exhibit global convergence whenever a fixed point exists, we obtain convergence guarantees for P4A by specializing the gauge to the closed perspective transform of a convex function. We then provide easy-to-verify sufficient conditions for the existence of fixed points for GP4A, using the Minkowski function representation of the gauge. Conveniently, the approach reveals that the global minimizers of the objective function of P4A form an exposed face of the dilated fundamental set of the closed perspective transform.
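
Two standard objects underpin this abstract: the closed perspective transform and the Minkowski gauge. Their usual definitions (the notation $f^{\pi}$ is ours) are

$$f^{\pi}(x,t) = t\, f(x/t) \quad (t > 0), \qquad \gamma_C(x) = \inf\{\lambda > 0 : x \in \lambda C\},$$

where $f^{\pi}$ is extended to $t = 0$ by taking the lower semicontinuous hull (giving the "closed" perspective transform) and $C$ is a closed convex set containing the origin.
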
We introduce and investigate a new generalized convexity notion for functions, called prox-convexity. The proximity operator of such a function is single-valued and firmly nonexpansive. We provide examples of (strongly) quasiconvex, weakly convex, and DC (difference of convex) functions that are prox-convex; however, none of these classes fully contains the class of prox-convex functions or is contained in it. We show that the classical proximal point algorithm remains convergent when the convexity of the proper lower semicontinuous function to be minimized is relaxed to prox-convexity.
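
For reference, the two properties attributed to prox-convex functions are the standard ones: for a proper lower semicontinuous $f$,

$$\operatorname{prox}_f(x) = \operatorname*{argmin}_{y} \Big\{ f(y) + \tfrac{1}{2}\|y - x\|^2 \Big\},$$

and firm nonexpansiveness of this operator means

$$\|\operatorname{prox}_f(x) - \operatorname{prox}_f(y)\|^2 \le \langle \operatorname{prox}_f(x) - \operatorname{prox}_f(y),\, x - y \rangle \quad \text{for all } x, y.$$
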
We consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping over a closed and convex set. In strongly monotone regimes, we present a variable sample-size averaging scheme (VS-Ave) that achieves a linear rate with an optimal oracle complexity. In addition, the iteration complexity is shown to display a muted dependence on the condition number compared with standard variance-reduced projection schemes. To contend with merely monotone maps, we develop one of the first proximal-point algorithms with variable sample-sizes (PPAWSS), where increasingly accurate solutions of strongly monotone SVIs are obtained via VS-Ave at every step. This allows for achieving a sublinear convergence rate that matches the one obtained for deterministic monotone VIs. Preliminary numerical evidence suggests that the scheme compares well with competing schemes.
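
The core idea of variable sample-sizes can be sketched as follows; this is a generic illustration under our own assumptions (fixed step, geometric sample growth, hypothetical names F_sample and project), not the paper's VS-Ave or PPAWSS updates.

import numpy as np

def vss_projection(F_sample, project, x0, gamma=0.3, rho=1.1,
                   n0=4, max_iter=60, seed=0):
    # Hypothetical sketch: average an increasing number of noisy
    # evaluations of the mapping F before each projected step, so the
    # estimate of F(x) grows more accurate as the iterate improves.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        n_k = int(n0 * rho**k)  # geometrically growing sample size
        F_hat = np.mean([F_sample(x, rng) for _ in range(n_k)], axis=0)
        x = project(x - gamma * F_hat)  # projection onto the feasible set
    return x

# Example: F(x) = A x + b + noise with A positive definite (strongly
# monotone), feasible set = nonnegative orthant; solution is (0.5, 0).
A = np.array([[2.0, 0.5], [0.5, 1.0]]); b = np.array([-1.0, 0.5])
F = lambda x, rng: A @ x + b + rng.normal(scale=0.1, size=2)
print(vss_projection(F, lambda z: np.maximum(z, 0.0), np.zeros(2)))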
Lei Yang, Kim-Chuan Toh (2021)
We study a general convex optimization problem, which covers various classic problems in different areas and particularly includes many optimal transport related problems arising in recent years. To solve this problem, we revisit the classic Bregman proximal point algorithm (BPPA) and introduce a new inexact stopping condition for solving the subproblems, which can circumvent the underlying feasibility difficulty that often appears in existing inexact conditions when the problem has a complex feasible set. Our inexact condition also covers several existing inexact conditions; hence, as a byproduct, we actually develop a certain unified inexact framework for BPPA. This makes our inexact BPPA (iBPPA) more flexible to fit different scenarios in practice. In particular, as an application to the standard optimal transport (OT) problem, our iBPPA with the entropic proximal term can bypass some numerical instability issues that usually plague the well-recognized entropic regularization approach in the OT community, since our iBPPA does not require the proximal parameter to be very small for obtaining an accurate approximate solution. The iteration complexity of $\mathcal{O}(1/k)$ and the convergence of the sequence are also established for our iBPPA under some mild conditions. Moreover, inspired by Nesterov's acceleration technique, we develop a variant of our iBPPA, denoted by V-iBPPA, and establish the iteration complexity of $\mathcal{O}(1/k^{\lambda})$, where $\lambda \geq 1$ is a quadrangle scaling exponent of the kernel function. Some preliminary numerical experiments for solving the standard OT problem are conducted to show the convergence behaviors of our iBPPA and V-iBPPA under different inexactness settings. The experiments also empirically verify the potential of our V-iBPPA for improving the convergence speed.
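
For reference, the Bregman proximal point subproblem has the generic (textbook) form below; with the entropic kernel, the Bregman distance becomes the generalized Kullback-Leibler divergence, which is why the entropic proximal term is natural for optimal transport. The "$\approx$" stands in for an inexact stopping condition; the paper's specific condition is its contribution and is not reproduced here.

$$x^{k+1} \approx \operatorname*{argmin}_{x \in \mathcal{X}} \Big\{ f(x) + \frac{1}{\lambda_k} D_{\phi}(x, x^k) \Big\}, \qquad D_{\phi}(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle,$$

$$\phi(x) = \sum_i x_i \log x_i \ \Rightarrow\ D_{\phi}(x, y) = \sum_i \Big( x_i \log \frac{x_i}{y_i} - x_i + y_i \Big).$$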