
Maximal Margin Distribution Support Vector Regression with Coupled Constraints-Based Convex Optimization

Added by Gaoyang Li
Publication date: 2019
Language: English





Support vector regression (SVR) is one of the most popular machine learning algorithms; it aims to generate the optimal regression curve by maximizing the minimal margin of a selected subset of training samples, the support vectors. Recent research reveals that maximizing the margin distribution of the whole training dataset, rather than the minimal margin of a few support vectors, tends to achieve better generalization performance. However, unlike the margin distribution strategy for support vector classification, margin distribution support vector regression machines face the difficulty of solving a non-convex quadratic optimization problem. This paper first proposes a maximal margin distribution model for SVR (MMD-SVR), and then introduces a coupled constraint factor that converts the non-convex quadratic optimization into a convex problem with linear constraints, which improves the feasibility and efficiency of training an SVR derived from maximizing the margin distribution. Theoretical and empirical analysis illustrates the superiority of MMD-SVR. In addition, numerical experiments show that MMD-SVR significantly improves prediction accuracy and generates a smoother regression curve with better generalization than classic SVR.
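To make the margin-distribution idea concrete, here is a minimal sketch, in Python, of a margin-distribution-flavoured regression objective. It is not the paper's MMD-SVR formulation (the coupled constraint factor and the convex reformulation are not reproduced); the margin definition eps - |y_i - f(x_i)| and the weights C1, C2 are assumptions for illustration.

import numpy as np
from scipy.optimize import minimize

def mmd_style_objective(params, X, y, eps=0.1, C1=1.0, C2=1.0):
    # Hypothetical margin-distribution objective for a linear model
    # f(x) = w @ x + b: regularise ||w||, penalise the margin variance,
    # and reward the margin mean over ALL training samples.
    w, b = params[:-1], params[-1]
    margins = eps - np.abs(X @ w + b - y)      # per-sample "margins"
    return 0.5 * (w @ w) + C1 * margins.var() - C2 * margins.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=100)

res = minimize(mmd_style_objective, np.zeros(X.shape[1] + 1), args=(X, y))
print("weights:", res.x[:-1], "bias:", res.x[-1])

Classic SVR would keep only an eps-insensitive loss driven by a few support vectors; the mean and variance statistics are what pull the whole training set into the objective, which is the behaviour the paper's convex reformulation makes tractable.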



Related research

We present an improved algorithm for properly learning convex polytopes in the realizable PAC setting from data with a margin. Our learning algorithm constructs a consistent polytope as an intersection of about $t \log t$ halfspaces with margins in time polynomial in $t$ (where $t$ is the number of halfspaces forming an optimal polytope). We also identify distinct generalizations of the notion of margin from hyperplanes to polytopes and investigate how they relate geometrically; this result may be of interest beyond the learning setting.
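As a purely geometric illustration of one such generalization (the learning algorithm itself is not reproduced), a point can be required to clear every defining halfspace by at least a margin gamma; the halfspace parameters below are hypothetical:

import numpy as np

def polytope_predict(X, W, b, gamma=0.0):
    # Label +1 iff x lies inside all t halfspaces with margin gamma:
    # W is a (t, d) array of normals, b a (t,) array of offsets.
    scores = X @ W.T + b
    return np.where((scores >= gamma).all(axis=1), 1, -1)

W = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)                                  # the unit box in 2-D
print(polytope_predict(np.array([[0.0, 0.0], [2.0, 0.0]]), W, b))  # [ 1 -1]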
The insurance industry is one of the sectors most vulnerable to climate change. Assessment of the future number of claims and incurred losses is critical for disaster preparedness and risk management. In this project, we study the effect of precipitation on the joint dynamics of weather-induced home insurance claims and losses. We discuss the utility and limitations of machine learning procedures such as Support Vector Machines and Artificial Neural Networks for forecasting future claim dynamics and evaluating the associated uncertainties. We illustrate our approach by applying it to attribution analysis and forecasting of weather-induced home insurance claims in a middle-sized city in the Canadian Prairies.
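A hedged sketch of the kind of SVR forecasting pipeline such a study might use, on purely synthetic precipitation/claim data (the relationship, units, and hyperparameters below are all hypothetical):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
precip = rng.gamma(2.0, 10.0, size=200)             # weekly precipitation (mm)
claims = 5 + 0.3 * precip + rng.normal(0, 2, 200)   # weather-induced claims

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(precip[:150, None], claims[:150])
print("held-out R^2:", model.score(precip[150:, None], claims[150:]))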
Xinyue Shen, Yuantao Gu (2017)
In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function, as an approximation of the $\ell_0$ pseudo-norm, is able to induce sparsity better than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity-inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter that excludes trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. The general framework is then applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments on both randomly generated and real datasets.
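The firm-shrinkage step can be sketched as follows; the thresholds lam1 < lam2, the step size, and the iteration budget are illustrative assumptions, not the paper's tuned values:

import numpy as np

def firm_shrinkage(x, lam1, lam2):
    # Firm thresholding: zero below lam1, identity above lam2,
    # linear interpolation in between (continuous at both thresholds).
    mid = np.sign(x) * lam2 * (np.abs(x) - lam1) / (lam2 - lam1)
    out = np.where(np.abs(x) <= lam1, 0.0, mid)
    return np.where(np.abs(x) > lam2, x, out)

def sparse_logreg(X, y, lam1=0.05, lam2=0.5, step=0.1, iters=500):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))     # sigmoid probabilities
        grad = X.T @ (p - y) / len(y)          # logistic-loss gradient
        w = firm_shrinkage(w - step * grad, lam1, lam2)  # proximal step
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ w_true)))).astype(float)
print("recovered support:", np.flatnonzero(sparse_logreg(X, y)))

Unlike soft thresholding, the firm operator leaves large coefficients unshrunk, which matches the weaker-than-$\ell_1$ bias the abstract motivates.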
In this paper, we reformulate the forest representation learning approach as an additive model which boosts the augmented feature instead of the prediction. We substantially improve the upper bound of the generalization gap from $\mathcal{O}(\sqrt{\frac{\ln m}{m}})$ to $\mathcal{O}(\frac{\ln m}{m})$ when $\lambda$, the ratio between the margin standard deviation and the margin mean, is small enough. This tighter upper bound inspires us to optimize the margin distribution ratio $\lambda$. Therefore, we design the margin distribution reweighting approach (mdDF) to achieve a small ratio $\lambda$ by boosting the augmented feature. Experiments and visualizations confirm the effectiveness of the approach in terms of performance and representation learning ability. This study offers a novel understanding of the cascaded deep forest from the margin-theory perspective and further uses the mdDF approach to guide the layer-by-layer forest representation learning.
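For concreteness, the ratio $\lambda$ can be computed for any binary scorer as below; the margin definition m_i = y_i f(x_i) is the standard one, while the toy numbers are hypothetical:

import numpy as np

def margin_ratio(scores, labels):
    # lambda = std(margins) / mean(margins) with margins m_i = y_i * f(x_i)
    margins = labels * scores
    return margins.std() / margins.mean()

scores = np.array([2.0, 1.5, 0.8, 1.2, 2.5])
labels = np.array([1.0, 1.0, -1.0, 1.0, 1.0])
print("margin distribution ratio:", margin_ratio(scores, labels))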
A convex optimization model predicts an output from an input by solving a convex optimization problem. The class of convex optimization models is large, and includes as special cases many well-known models like linear and logistic regression. We propose a heuristic for learning the parameters in a convex optimization model given a dataset of input-output pairs, using recently developed methods for differentiating the solution of a convex optimization problem with respect to its parameters. We describe three general classes of convex optimization models, maximum a posteriori (MAP) models, utility maximization models, and agent models, and present a numerical experiment for each.
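A minimal sketch of such a model using the cvxpy modelling language (the paper's MAP, utility-maximization, and agent models, and its parameter-learning heuristic, are not reproduced here): the prediction is itself the solution of a small convex problem, whose parameter A would be learned from input-output pairs rather than drawn at random as below.

import cvxpy as cp
import numpy as np

def predict(A, x):
    # Predict y by solving: minimize ||A y - x||_2^2  s.t.  y >= 0, sum(y) = 1
    y = cp.Variable(A.shape[1])
    problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ y - x)),
                         [y >= 0, cp.sum(y) == 1])
    problem.solve()
    return y.value

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 3))            # the model parameter (here: random)
print(predict(A, rng.normal(size=5)))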
