We present a method to solve two-stage stochastic problems with fixed recourse when the uncertainty space can have either discrete or continuous distributions. Given a partition of the uncertainty space, the method solves a discrete problem with one scenario for each element of the partition (a sub-region of the uncertainty space). Fixing the first-stage variables, we formulate a second-stage subproblem for each element and, exploiting information from the duals of these subproblems, we provide conditions that the partition must satisfy to obtain the optimal solution. These conditions guide how to refine the partition, converging iteratively to the optimal solution. Computational experiments show how the method automatically refines the partition of the uncertainty space in the regions of interest for the problem. Our algorithm generalizes the adaptive partition-based method presented by Song & Luedtke (2015) for discrete distributions, extending its applicability to more general cases.
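The refinement step at the heart of such partition-based methods can be illustrated as follows. This is a minimal sketch, not the authors' implementation: it assumes a second stage of the form $\min\{q^\top y : Wy \ge h(\xi) - Tx,\ y \ge 0\}$ with scenarios already sampled from the (possibly continuous) distribution, and the helper names `second_stage_dual` and `refine_partition` are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def second_stage_dual(W, q, rhs):
    """Solve min q'y s.t. W y >= rhs, y >= 0 and return an optimal dual vector."""
    res = linprog(c=q, A_ub=-W, b_ub=-rhs, bounds=[(0, None)] * len(q),
                  method="highs")
    # HiGHS returns nonpositive multipliers for the '<=' rows; negate them
    # to recover the nonnegative duals of the original '>=' constraints.
    return -np.asarray(res.ineqlin.marginals)

def refine_partition(partition, h, W, q, T, x_hat, tol=1e-6):
    """Split every cell whose scenarios disagree on their second-stage duals.

    partition : list of lists of scenario indices (the current cells)
    h         : array with h[s] = right-hand-side realization for scenario s
    A partition that is a fixed point of this step certifies that x_hat is
    optimal for the aggregated (one-scenario-per-cell) problem.
    """
    refined = []
    for cell in partition:
        groups = []  # pairs (representative dual, member scenario indices)
        for s in cell:
            dual = second_stage_dual(W, q, h[s] - T @ x_hat)
            for rep, members in groups:
                if np.linalg.norm(rep - dual, np.inf) <= tol:
                    members.append(s)
                    break
            else:
                groups.append((dual, [s]))
        refined.extend(members for _, members in groups)
    return refined
```

Cells in which all scenarios share a dual solution need no further splitting, which is what lets the partition stay coarse away from the regions of interest.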
Stochastic gradient methods (SGMs) have been widely used for solving stochastic optimization problems. A majority of existing works assume no constraints or easy-to-project constraints. In this paper, we consider convex stochastic optimization problems with expectation constraints. For these problems, it is often extremely expensive to perform projection onto the feasible set. Several SGMs in the literature can be applied to solve expectation-constrained stochastic problems. We propose a novel primal-dual type SGM based on the Lagrangian function. Different from existing methods, our method incorporates an adaptiveness technique to speed up convergence. At each iteration, our method queries an unbiased stochastic subgradient of the Lagrangian function, and then it updates the primal variables by an adaptive-SGM step and the dual variables by a vanilla-SGM step. We show that the proposed method has a convergence rate of $O(1/\sqrt{k})$ in terms of the objective error and the constraint violation. Although the convergence rate is the same as those of existing SGMs, we observe significantly faster convergence than an existing non-adaptive primal-dual SGM and a primal SGM on Neyman-Pearson classification and quadratically constrained quadratic programs. Furthermore, we modify the proposed method to solve convex-concave stochastic minimax problems, for which we perform adaptive-SGM updates to both primal and dual variables. A convergence rate of $O(1/\sqrt{k})$ in terms of the primal-dual gap is also established for the modified method.
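A minimal sketch of one such primal-dual loop is given below, with an AdaGrad-style coordinate scaling standing in for the adaptive primal update; the paper's exact adaptive rule, step sizes, and averaging scheme may differ, and all function names are illustrative.

```python
import numpy as np

def adaptive_primal_dual_sgm(grad_f, g, grad_g, x0, m, n_iter=1000,
                             alpha=0.5, rho=0.1, eps=1e-8):
    """Primal-dual SGM on the Lagrangian L(x, z) = f(x) + <z, g(x)>.

    grad_f(x) : unbiased stochastic subgradient of the objective at x
    g(x)      : unbiased stochastic estimate of the constraints g(x) <= 0
    grad_g(x) : stochastic Jacobian of g at x (one row per constraint)
    m         : number of expectation constraints
    """
    x = np.array(x0, dtype=float)
    z = np.zeros(m)
    G = np.zeros_like(x)                      # accumulated squared gradients
    for k in range(n_iter):
        d = grad_f(x) + grad_g(x).T @ z       # stochastic subgradient in x
        G += d * d
        x -= alpha * d / (np.sqrt(G) + eps)   # adaptive (AdaGrad-style) primal step
        gx = g(x)
        # vanilla dual step, projected onto the nonnegative orthant
        z = np.maximum(z + rho / np.sqrt(k + 1) * gx, 0.0)
    return x, z
```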
We consider Benders decomposition for solving two-stage stochastic programs with complete recourse based on finite samples of the uncertain parameters. We define as valuable cuts the Benders cuts that are binding at the final optimal solution or that significantly improve the bounds over iterations. We propose a learning-enhanced Benders decomposition (LearnBD) algorithm, which adds a cut classification step in each iteration to selectively generate cuts that are more likely to be valuable. The LearnBD algorithm includes two phases: (i) sampling cuts and collecting information from training problems and (ii) solving testing problems with a support vector machine (SVM) cut classifier. We run the LearnBD algorithm on instances of capacitated facility location and multi-commodity network design under uncertain demand. Our results show that the SVM cut classifier effectively identifies valuable cuts, and the LearnBD algorithm reduces the total solving time on all instances across problems of various sizes and complexities.
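The two phases map naturally onto a standard train/filter pattern. The sketch below uses scikit-learn's SVC; the cut features mentioned (violation at the incumbent, coefficient norm) and the helper `featurize` are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np
from sklearn.svm import SVC

def train_cut_classifier(features, labels):
    """Phase (i): fit an SVM on cuts sampled from training problems.

    features[i] describes candidate cut i (e.g. its violation at the incumbent
    solution and its coefficient norm); labels[i] = 1 if the cut later proved
    valuable (binding at the optimum or a large bound improvement).
    """
    clf = SVC(kernel="rbf", class_weight="balanced")
    clf.fit(features, labels)
    return clf

def select_cuts(clf, candidate_cuts, featurize):
    """Phase (ii): keep only the candidate cuts predicted to be valuable."""
    feats = np.array([featurize(cut) for cut in candidate_cuts])
    keep = clf.predict(feats) == 1
    return [cut for cut, k in zip(candidate_cuts, keep) if k]
```

Filtering candidate cuts before adding them to the master problem trades a cheap prediction per cut for fewer, more useful rows in the master.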
The two-stage optimization with recourse model is an important and widely used model that has been studied extensively in recent years. In this article, we study a new variant of it, the two-stage optimization with recourse and revocation model. The new model differs from the traditional one in that the decision maker is allowed to revoke some earlier decisions and recover part of the earlier costs, which occurs in many real applications, making the model more realistic in many situations. We adopt several approaches to study this model. We develop an LP rounding scheme for some covering problems and show that they can be solved using this scheme together with an adaptation of the rounding approach for the deterministic counterpart, under the polynomial-scenarios assumption. We also study the stochastic uncapacitated facility location problem and show that the approximation algorithm developed for the two-stage with recourse model works for this model as well. In addition, we study the model with other methods.
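For covering problems under the polynomial-scenarios assumption, the adapted LP-rounding idea can be sketched as follows: solve the two-stage covering LP explicitly over all scenarios, then threshold-round at $1/(2f)$, where $f$ is the maximum element frequency, exactly as in the deterministic argument. This is a generic illustration under those assumptions, not the article's algorithm; revocation variables are omitted for brevity and the function name is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def round_two_stage_cover(sets, scenarios, c1, c2, probs):
    """Threshold-round the two-stage set-cover LP (polynomially many scenarios).

    sets      : list of frozensets over a ground set of elements
    scenarios : scenarios[s] = set of elements that must be covered if s occurs
    c1, c2    : first-stage costs (len(sets),), second-stage costs (S, len(sets))
    probs     : scenario probabilities
    """
    n, S = len(sets), len(scenarios)
    nv = n + n * S                                # variables: x_S, then y_{S,s}
    cost = np.concatenate([np.asarray(c1)] +
                          [probs[s] * np.asarray(c2[s]) for s in range(S)])
    A, b = [], []
    for s, demand in enumerate(scenarios):        # sum_{S: e in S} x_S + y_{S,s} >= 1
        for e in demand:
            row = np.zeros(nv)
            for j, St in enumerate(sets):
                if e in St:
                    row[j] = 1.0
                    row[n + s * n + j] = 1.0
            A.append(-row)
            b.append(-1.0)
    lp = linprog(cost, A_ub=np.array(A), b_ub=np.array(b),
                 bounds=[(0, None)] * nv, method="highs")
    # f = max element frequency; each coverage constraint has at most 2f terms,
    # so some variable is >= 1/(2f) and thresholding yields a feasible cover
    # at a factor-2f blowup in cost.
    f = max(sum(e in St for St in sets)
            for demand in scenarios for e in demand)
    x = lp.x
    first = [j for j in range(n) if x[j] >= 1 / (2 * f)]
    second = [[j for j in range(n) if x[n + s * n + j] >= 1 / (2 * f)]
              for s in range(S)]
    return first, second
```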
This paper applies the N-block PCPM algorithm to solve multi-scale multi-stage stochastic programs, with application to electricity capacity expansion models. Numerical results show that the proposed simplified N-block PCPM algorithm, together with the hybrid decomposition method, exhibits much better scalability than the ADMM and PHA algorithms for solving the resulting deterministic, large-scale, block-separable optimization problem. The algorithm's superior scalability is attributed to two key features of its design: first, the proposed hybrid scenario-node-realization decomposition method with extended nonanticipativity constraints can decompose the original problem under various uncertainties of different temporal scales; second, when the N-block PCPM algorithm is applied to the resulting deterministic, large-scale N-block convex optimization problem, the orthogonal-projection technique we exploit greatly simplifies the iteration steps and reduces the communication overhead among computing units, which also contributes to the algorithm's efficiency.
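The orthogonal projection in question is onto the nonanticipativity subspace, where it reduces to averaging the decisions of scenarios that are indistinguishable at a given stage; this is why it simplifies the iterations and keeps communication light. A minimal sketch, assuming uniform scenario probabilities (a probability-weighted mean would replace the plain mean otherwise):

```python
import numpy as np

def project_nonanticipative(X, bundles):
    """Orthogonal projection onto the nonanticipativity subspace.

    X[s, t, :] is scenario s's decision vector at stage t; bundles[t] lists
    the groups of scenarios that share the same history up to stage t and
    must therefore agree at that stage.  With uniform probabilities, the
    projection simply averages each group's decisions, so each iteration
    only needs to exchange per-group means between computing units.
    """
    Z = X.copy()
    for t, groups in enumerate(bundles):
        for group in groups:
            Z[group, t, :] = X[group, t, :].mean(axis=0)
    return Z
```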
We consider stochastic programs conditional on some covariate information, where the only knowledge of the possible relationship between the uncertain parameters and the covariates is a finite data sample of their joint distribution. By exploiting the close link between the notion of trimmings of a probability measure and the partial mass transportation problem, we construct a data-driven Distributionally Robust Optimization (DRO) framework to hedge the decision against the intrinsic error in the process of inferring conditional information from limited joint data. We show that our approach is computationally as tractable as standard (without side information) Wasserstein-metric-based DRO and enjoys performance guarantees. Furthermore, our DRO framework can be conveniently used to address data-driven decision-making problems under contaminated samples and naturally produces distributionally robust versions of local nonparametric predictive methods.
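To fix ideas on the inference step being robustified, the sketch below shows the naive, non-robust baseline: estimate the conditional distribution by restricting the joint sample to the $k$ nearest covariate neighbors of the observed covariate and solving a sample-average problem on that conditional sample. The trimming-based DRO layer that hedges against the error of exactly this inference is the framework's contribution and is omitted here; `solve_saa` is a hypothetical user-supplied solver.

```python
import numpy as np

def knn_conditional_saa(data_x, data_xi, x0, k, solve_saa):
    """Naive conditional decision from joint data (the non-robust baseline).

    Restrict the sample of (covariate, uncertainty) pairs to the k nearest
    covariate neighbors of the observation x0, then solve a sample-average
    problem on the induced conditional sample.
    """
    dist = np.linalg.norm(np.asarray(data_x) - np.asarray(x0), axis=1)
    idx = np.argsort(dist)[:k]
    return solve_saa(np.asarray(data_xi)[idx])
```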