
An Efficient Augmented Lagrangian Method for Support Vector Machine

Added by: Qingna Li
Publication date: 2019
Language: English





Support vector machine (SVM) has proved to be a successful approach for machine learning. Two typical SVM models are the L1-loss model for support vector classification (SVC) and the $\epsilon$-L1-loss model for support vector regression (SVR). Due to the nonsmoothness of the L1-loss function in the two models, most traditional approaches focus on solving the dual problem. In this paper, we propose an augmented Lagrangian method for the L1-loss model, which is designed to solve the primal problem. By tackling the nonsmooth term in the model with Moreau-Yosida regularization and the proximal operator, the subproblem in the augmented Lagrangian method reduces to a nonsmooth linear system, which can be solved via the quadratically convergent semismooth Newton method. Moreover, the high computational cost of the semismooth Newton method can be significantly reduced by exploiting the sparse structure of the generalized Jacobian. Numerical results on various datasets from LIBLINEAR show that the proposed method is competitive with the most popular solvers in both speed and accuracy.
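For intuition, the key nonsmooth ingredient has a simple closed form. Below is a minimal NumPy sketch (our illustration, not the authors' code) of the proximal operator of the plus function u -> max(0, u), which underlies the Moreau-Yosida regularization of the L1-loss, together with a diagonal element of its generalized Jacobian; the 0/1 pattern of that Jacobian is exactly the sparse structure a semismooth Newton solver can exploit.

import numpy as np

def prox_plus(v, sigma):
    # Proximal operator of u -> max(0, u) with parameter sigma, elementwise:
    # argmin_u sigma*max(0, u) + 0.5*(u - v)^2. This closed form is what the
    # Moreau-Yosida regularization of the L1-loss reduces to per coordinate.
    return np.where(v > sigma, v - sigma, np.where(v < 0.0, v, 0.0))

def prox_plus_jacobian(v, sigma):
    # One element of the (diagonal) generalized Jacobian of prox_plus:
    # 1 where the prox acts as a shift or the identity, 0 on the flat piece.
    # This sparse 0/1 pattern is what the semismooth Newton step exploits.
    return ((v > sigma) | (v < 0.0)).astype(float)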



Related research

Support vector machines (SVMs) are successful modeling and prediction tools with a variety of applications. Previous work has demonstrated the superiority of SVMs in dealing with high-dimensional, low-sample-size problems. However, the numerical difficulties of SVMs become severe as the sample size increases. Although there exist many solvers for SVMs, only a few of them are designed to exploit the special structure of SVMs. In this paper, we propose a highly efficient sparse semismooth Newton based augmented Lagrangian method for solving a large-scale convex quadratic programming problem with a linear equality constraint and a simple box constraint, which arises from the dual problems of SVMs. By leveraging a primal-dual error bound result, the fast local convergence rate of the augmented Lagrangian method can be guaranteed. Furthermore, by exploiting the second-order sparsity of the problem in the semismooth Newton method, the algorithm can efficiently solve the aforementioned difficult problems. Finally, numerical comparisons demonstrate that the proposed algorithm outperforms current state-of-the-art solvers for large-scale SVMs.
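Schematically, the dual problem in question is min_x (1/2) x^T Q x - e^T x subject to y^T x = 0 and 0 <= x <= C, and the ALM dualizes the equality constraint while keeping the box in the subproblem. The sketch below is a plain NumPy illustration of that loop structure under these assumptions; for simplicity the inner box-constrained subproblem is solved by projected gradient, standing in for the paper's sparse semismooth Newton method.

import numpy as np

def alm_dual_svm(Q, y, C, sigma=1.0, outer=50, inner=200):
    # ALM for: min 0.5 x'Qx - e'x  s.t.  y'x = 0,  0 <= x <= C.
    # The equality constraint gets a multiplier lam; the box constraint
    # stays in the subproblem and is handled by projection (np.clip).
    n = Q.shape[0]
    x = np.zeros(n)
    lam = 0.0
    # Safe step size: gradient Lipschitz constant is ||Q|| + sigma*||y||^2,
    # and ||y||^2 = n for labels y in {-1, +1}^n.
    step = 1.0 / (np.linalg.norm(Q, 2) + sigma * n)
    for _ in range(outer):
        for _ in range(inner):
            grad = Q @ x - 1.0 + (lam + sigma * (y @ x)) * y
            x = np.clip(x - step * grad, 0.0, C)   # projection onto the box
        lam += sigma * (y @ x)                     # multiplier update
    return x, lam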
Hongpeng Sun (2020)
Total generalized variation (TGV) is a very powerful and important regularization for various inverse problems and computer vision tasks. In this paper, we propose a semismooth Newton based augmented Lagrangian method to solve this problem. The augmented Lagrangian method (also known as the method of multipliers) is widely used for many smooth and nonsmooth variational problems. However, its efficiency usually depends heavily on solving the coupled, nonlinear system simultaneously, which is very complicated and highly coupled for total generalized variation. With efficient primal-dual semismooth Newton methods for the complicated linear subproblems involving total generalized variation, we obtain a highly efficient algorithm that is competitive with efficient first-order methods. With the analysis of the metric subregularity of the corresponding functions, we give both the global convergence and the local linear convergence rate of the proposed augmented Lagrangian methods.
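To fix notation (a 1-D discretization of our own choosing, not the paper's setting), the second-order TGV functional couples the image u with an auxiliary field w, TGV^2(u) = min_w alpha1*||Du - w||_1 + alpha0*||Dw||_1 with D the forward-difference operator; it is precisely this coupling between u and w that makes the ALM subproblems hard.

import numpy as np

def tgv2_value(u, w, alpha1, alpha0):
    # Evaluate the discrete second-order TGV objective in 1-D at a
    # candidate pair (u, w): alpha1*||Du - w||_1 + alpha0*||Dw||_1,
    # where D is forward differencing. u has length n, w has length n-1.
    Du = np.diff(u)
    return alpha1 * np.sum(np.abs(Du - w)) + alpha0 * np.sum(np.abs(np.diff(w)))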
Hongpeng Sun (2019)
The augmented Lagrangian method (also known as the method of multipliers) is an important and powerful optimization method for many smooth and nonsmooth variational problems in modern signal processing, imaging, optimal control, and so on. However, one usually needs to solve the coupled, nonlinear system simultaneously, which is very challenging. In this paper, we propose several semismooth Newton methods to solve the nonlinear subproblems arising in image restoration, which leads to several highly efficient and competitive algorithms for image processing. With the analysis of the metric subregularity of the corresponding functions, we give both the global convergence and the local linear convergence rate of the proposed augmented Lagrangian methods with semismooth Newton solvers.
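As a toy illustration of the machinery shared by these papers (our own example, not taken from any of them), a semismooth Newton iteration applies Newton's method to a nonsmooth equation using an element of the generalized Jacobian. On the separable equation F(x) = x + max(x, 0) - b = 0 the whole step is elementwise and converges in a handful of iterations:

import numpy as np

def semismooth_newton_demo(b, iters=30, tol=1e-10):
    # Solve F(x) = x + max(x, 0) - b = 0 elementwise by semismooth Newton.
    # An element of the generalized Jacobian is diagonal with entries
    # 1 + [x > 0]; its cheap, sparse structure is the whole point.
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        F = x + np.maximum(x, 0.0) - b
        if np.max(np.abs(F)) < tol:
            break
        d = 1.0 + (x > 0.0)            # diagonal generalized Jacobian
        x = x - F / d                  # Newton step, solved elementwise
    return x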
We propose an efficient algorithm for sparse signal reconstruction problems. The proposed algorithm is an augmented Lagrangian method based on the dual sparse reconstruction problem. It is efficient when the number of unknown variables is much larger than the number of observations because of the dual formulation. Moreover, the primal variable is explicitly updated and the sparsity in the solution is exploited. Numerical comparison with the state-of-the-art algorithms shows that the proposed algorithm is favorable when the design matrix is poorly conditioned or dense and very large.
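For context (a generic sketch, not the authors' full algorithm), the explicit primal update in dual ALM schemes for l1-regularized reconstruction is typically a soft-thresholding step, the proximal operator of tau*||.||_1, which is where the sparsity of the solution is exposed and exploited:

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau*||.||_1, applied elementwise; components
    # with |v_i| <= tau are set exactly to zero, so the primal iterate
    # is sparse by construction.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)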
Shape-constrained convex regression problem deals with fitting a convex function to the observed data, where additional constraints are imposed, such as component-wise monotonicity and uniform Lipschitz continuity. This paper provides a unified framework for computing the least squares estimator of a multivariate shape-constrained convex regression function in $\mathbb{R}^d$. We prove that the least squares estimator is computable via solving an essentially constrained convex quadratic programming (QP) problem with $(n+1)d$ variables, $n(n-1)$ linear inequality constraints and $n$ possibly non-polyhedral inequality constraints, where $n$ is the number of data points. To efficiently solve the generally very large-scale convex QP, we design a proximal augmented Lagrangian method (proxALM) whose subproblems are solved by the semismooth Newton method (SSN). To further accelerate the computation when $n$ is huge, we design a practical implementation of the constraint generation method such that each reduced problem is efficiently solved by our proposed proxALM. Comprehensive numerical experiments, including those in the pricing of basket options and estimation of production functions in economics, demonstrate that our proposed proxALM outperforms the state-of-the-art algorithms, and the proposed acceleration technique further shortens the computation time by a large margin.
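To make the QP concrete, here is a direct formulation in cvxpy (a small sketch of the estimator's standard formulation, not the paper's proxALM solver; the function name and the brute-force pairwise loop are ours). It fits values f_i and subgradients g_i with the convexity coupling constraints f_j + g_j^T(x_i - x_j) <= f_i for all pairs, plus optional monotonicity and Lipschitz constraints; the paper's contribution is solving this QP at scale, not formulating it.

import numpy as np
import cvxpy as cp

def convex_regression(X, y, L=None, monotone=False):
    # Least squares convex regression: fit values f in R^n and
    # subgradients G in R^{n x d} at the data points X (n x d).
    n, d = X.shape
    f = cp.Variable(n)
    G = cp.Variable((n, d))
    cons = []
    # n(n-1) convexity constraints, written out by brute force here.
    for i in range(n):
        for j in range(n):
            if i != j:
                cons.append(f[j] + G[j] @ (X[i] - X[j]) <= f[i])
    if monotone:
        cons.append(G >= 0)                                 # monotonicity
    if L is not None:
        cons += [cp.norm(G[j], 2) <= L for j in range(n)]   # Lipschitz bound
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - f)), cons)
    prob.solve()
    return f.value, G.value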