We consider Benders decomposition for solving two-stage stochastic programs with complete recourse based on finite samples of the uncertain parameters. We define valuable cuts as the Benders cuts that are binding at the final optimal solution or that significantly improve the bounds over iterations. We propose a learning-enhanced Benders decomposition (LearnBD) algorithm, which adds a cut classification step in each iteration to selectively generate cuts that are more likely to be valuable. The LearnBD algorithm includes two phases: (i) sampling cuts and collecting information from training problems and (ii) solving test problems with a support vector machine (SVM) cut classifier. We run the LearnBD algorithm on instances of capacitated facility location and multi-commodity network design under uncertain demand. Our results show that the SVM cut classifier identifies valuable cuts effectively, and the LearnBD algorithm reduces the total solving time across all instances for problems of various sizes and complexities.
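To make the cut classification step concrete, the sketch below outlines one Benders iteration with SVM-based cut screening. It is a minimal illustration under stated assumptions, not the authors' implementation: solve_master, solve_subproblem, make_optimality_cut, and cut_features are hypothetical placeholders for the standard Benders components and the cut-feature extraction, and the scikit-learn SVC classifier is assumed to have been trained in phase (i) on cuts labeled as valuable or not.

# Minimal sketch of the cut classification step in a LearnBD-style Benders loop.
# solve_master, solve_subproblem, make_optimality_cut, and cut_features are
# hypothetical placeholders; the SVC classifier is assumed to have been trained
# in phase (i) on cuts labeled valuable (1) or not (0).
from sklearn.svm import SVC

def learnbd_iteration(master, subproblems, clf: SVC):
    """Run one Benders iteration, adding only cuts the SVM predicts are valuable."""
    x_hat, lower_bound = solve_master(master)              # first-stage solution and bound
    n_added = 0
    for sp in subproblems:
        duals, recourse_obj = solve_subproblem(sp, x_hat)  # scenario recourse problem
        cut = make_optimality_cut(duals, recourse_obj)     # candidate Benders cut
        features = cut_features(cut, x_hat, lower_bound)   # e.g., cut violation and depth
        if clf.predict([features])[0] == 1:                # keep only predicted-valuable cuts
            master.add_cut(cut)
            n_added += 1
    return lower_bound, n_added

In phase (i), the same loop would instead add every candidate cut, record its features, and label it afterwards according to whether it was binding at the final solution or improved the bound, yielding the training data for the classifier.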