
Distributionally Robust Optimization with Moment Ambiguity Sets

Added by: Suhan Zhong
Publication date: 2021
Language: English





This paper studies distributionally robust optimization (DRO) when the ambiguity set is given by moment constraints on the distributions. The objective and constraints are polynomials in the decision variables. We reformulate the DRO using equivalent moment conic constraints. Under some general assumptions, we prove that the DRO is equivalent to a linear optimization problem over moment and positive semidefinite (psd) polynomial cones. A moment-SOS relaxation method is proposed to solve it, and its asymptotic and finite convergence are shown under certain assumptions. Numerical examples are presented to show how DRO problems can be solved with this approach.
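As a rough illustration (the notation here is schematic, not the paper's exact formulation), this problem class can be written as a min-max over a moment ambiguity set:

$$
\min_{x \in X} \ \sup_{\mu \in \mathcal{M}} \ \mathbb{E}_{\mu}\big[f(x,\xi)\big]
\qquad \text{subject to} \qquad
\inf_{\mu \in \mathcal{M}} \ \mathbb{E}_{\mu}\big[h(x,\xi)\big] \ge 0,
$$

where $f$ and $h$ are polynomials in the decision variable $x$, $\xi$ is the random parameter, and the ambiguity set

$$
\mathcal{M} \;=\; \big\{\, \mu \ :\ \mathbb{E}_{\mu}[\xi^{\alpha}] \in [\underline{y}_{\alpha}, \overline{y}_{\alpha}] \ \text{for all } |\alpha| \le d \,\big\}
$$

collects the distributions whose moments up to degree $d$ lie in prescribed intervals. Rewriting the worst-case expectations over $\mathcal{M}$ as moment conic constraints is what makes a moment-SOS relaxation applicable.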




Read More

Xian Yu, Siqian Shen (2020)
We study multistage distributionally robust mixed-integer programs under endogenous uncertainty, where the probability distribution of stage-wise uncertainty depends on the decisions made in previous stages. We first consider two ambiguity sets defined by decision-dependent bounds on the first and second moments of uncertain parameters and by mean and covariance matrix that exactly match decision-dependent empirical ones, respectively. For both sets, we show that the subproblem in each stage can be recast as a mixed-integer linear program (MILP). Moreover, we extend the general moment-based ambiguity set in (Delage and Ye, 2010) to the multistage decision-dependent setting, and derive mixed-integer semidefinite programming (MISDP) reformulations of stage-wise subproblems. We develop methods for attaining lower and upper bounds of the optimal objective value of the multistage MISDPs, and approximate them using a series of MILPs. We deploy the Stochastic Dual Dynamic integer Programming (SDDiP) method for solving the problem under the three ambiguity sets with risk-neutral or risk-averse objective functions, and conduct numerical studies on multistage facility-location instances having diverse sizes under different parameter and uncertainty settings. Our results show that the SDDiP quickly finds optimal solutions for moderate-sized instances under the first two ambiguity sets, and also finds good approximate bounds for the multistage MISDPs derived under the third ambiguity set. We also demonstrate the efficacy of incorporating decision-dependent distributional ambiguity in multistage decision-making processes.
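To make the first type of ambiguity set concrete, a schematic decision-dependent moment set for stage $t$ might read (illustrative notation, not the paper's):

$$
\mathcal{P}_t(x_{t-1}) \;=\; \Big\{\, P \ :\ \underline{\mu}_t(x_{t-1}) \le \mathbb{E}_P[\xi_t] \le \overline{\mu}_t(x_{t-1}), \ \ \mathbb{E}_P\big[(\xi_t - \mathbb{E}_P[\xi_t])(\xi_t - \mathbb{E}_P[\xi_t])^{\top}\big] \preceq \Sigma_t(x_{t-1}) \,\Big\},
$$

where the mean bounds $\underline{\mu}_t, \overline{\mu}_t$ and the covariance cap $\Sigma_t$ depend on the previous-stage decision $x_{t-1}$; the second set described above instead requires the mean and covariance to match decision-dependent empirical values exactly.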
We propose kernel distributionally robust optimization (Kernel DRO) using insights from the robust optimization theory and functional analysis. Our method uses reproducing kernel Hilbert spaces (RKHS) to construct a wide range of convex ambiguity sets, which can be generalized to sets based on integral probability metrics and finite-order moment bounds. This perspective unifies multiple existing robust and stochastic optimization methods. We prove a theorem that generalizes the classical duality in the mathematical problem of moments. Enabled by this theorem, we reformulate the maximization with respect to measures in DRO into the dual program that searches for RKHS functions. Using universal RKHSs, the theorem applies to a broad class of loss functions, lifting common limitations such as polynomial losses and knowledge of the Lipschitz constant. We then establish a connection between DRO and stochastic optimization with expectation constraints. Finally, we propose practical algorithms based on both batch convex solvers and stochastic functional gradient, which apply to general optimization and machine learning tasks.
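One concrete member of this family is an MMD ball around the empirical distribution, i.e. the set of distributions whose kernel mean embedding stays close to the empirical one (a sketch with illustrative notation):

$$
\mathcal{K} \;=\; \Big\{\, P \ :\ \big\| \mathbb{E}_{P}[k(\xi,\cdot)] - \mathbb{E}_{\hat P_n}[k(\xi,\cdot)] \big\|_{\mathcal{H}} \le \epsilon \,\Big\},
$$

where $\mathcal{H}$ is the RKHS with kernel $k$, $\hat P_n$ is the empirical distribution, and $\epsilon$ is the radius. The duality result then replaces the maximization over measures in $\mathcal{K}$ by a search over functions in $\mathcal{H}$.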
Chaosheng Dong, Bo Zeng (2020)
Inverse multiobjective optimization provides a general framework for the unsupervised learning task of inferring parameters of a multiobjective decision making problem (DMP), based on a set of observed decisions from the human expert. However, the performance of this framework relies critically on the availability of an accurate DMP, sufficient decisions of high quality, and a parameter space that contains enough information about the DMP. To hedge against the uncertainties in the hypothetical DMP, the data, and the parameter space, we investigate in this paper the distributionally robust approach for inverse multiobjective optimization. Specifically, we leverage the Wasserstein metric to construct a ball centered at the empirical distribution of these decisions. We then formulate a Wasserstein distributionally robust inverse multiobjective optimization problem (WRO-IMOP) that minimizes a worst-case expected loss function, where the worst case is taken over all distributions in the Wasserstein ball. We show that the excess risk of the WRO-IMOP estimator has a sub-linear convergence rate. Furthermore, we propose semi-infinite reformulations of the WRO-IMOP and develop a cutting-plane algorithm that converges to an approximate solution in finitely many iterations. Finally, we demonstrate the effectiveness of our method on both a synthetic multiobjective quadratic program and a real-world portfolio optimization problem.
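Schematically (illustrative notation), the WRO-IMOP estimator solves a problem of the form

$$
\min_{\theta \in \Theta} \ \sup_{Q \,:\, W(Q,\, \hat P_N) \le \epsilon} \ \mathbb{E}_{Q}\big[\ell(\theta; y)\big],
$$

where $\hat P_N$ is the empirical distribution of the $N$ observed decisions, $W$ is the Wasserstein metric, $\epsilon$ is the radius of the Wasserstein ball, and $\ell(\theta; y)$ is the loss incurred by parameter $\theta$ on an observed decision $y$.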
We propose and analyze algorithms for distributionally robust optimization of convex losses with conditional value at risk (CVaR) and $\chi^2$ divergence uncertainty sets. We prove that our algorithms require a number of gradient evaluations independent of training set size and number of parameters, making them suitable for large-scale applications. For $\chi^2$ uncertainty sets these are the first such guarantees in the literature, and for CVaR our guarantees scale linearly in the uncertainty level rather than quadratically as in previous work. We also provide lower bounds proving the worst-case optimality of our algorithms for CVaR and a penalized version of the $\chi^2$ problem. Our primary technical contributions are novel bounds on the bias of batch robust risk estimation and the variance of a multilevel Monte Carlo gradient estimator due to [Blanchet & Glynn, 2015]. Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.
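For orientation, over an empirical distribution on $n$ samples the two uncertainty sets take standard forms (a sketch, not necessarily the paper's exact conventions): the CVaR set is $\{ q \in \Delta_n : q_i \le \tfrac{1}{\alpha n} \}$, whose worst-case expectation is the conditional value at risk and admits the Rockafellar-Uryasev representation

$$
\sup_{q \in \mathcal{U}_{\mathrm{CVaR}}} \sum_{i=1}^{n} q_i\, \ell_i(x)
\;=\; \inf_{\eta \in \mathbb{R}} \Big\{ \eta + \frac{1}{\alpha n} \sum_{i=1}^{n} \big(\ell_i(x) - \eta\big)_{+} \Big\},
$$

while the $\chi^2$ set is a divergence ball $\{ q \in \Delta_n : D_{\chi^2}(q \,\|\, \mathbf{1}/n) \le \rho \}$ around the uniform weights.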
Xi Chen, Qihang Lin, Guanglin Xu (2019)
Distributionally robust optimization (DRO) has been introduced for solving stochastic programs where the distribution of the random parameters is unknown and must be estimated by samples from that distribution. A key element of DRO is the construction of the ambiguity set, which is a set of distributions that covers the true distribution with a high probability. Assuming that the true distribution has a probability density function, we propose a class of ambiguity sets based on confidence bands of the true density function. The use of the confidence band enables us to take the prior knowledge of the shape of the underlying density function into consideration (e.g., unimodality or monotonicity). Using the confidence band constructed by density estimation techniques as the ambiguity set, we establish the convergence of the optimal value of DRO to that of the stochastic program as the sample size increases. However, the resulting DRO problem is computationally intractable, as it involves functional decision variables as well as infinitely many constraints. To address this challenge, using the duality theory, we reformulate it into a finite-dimensional stochastic program, which is amenable to a stochastic subgradient scheme as a solution method. We compare our approach with existing state-of-the-art DRO methods on the newsvendor problem and the portfolio management problem, and the numerical results showcase the advantage of our approach.
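Schematically (illustrative notation), the ambiguity set here is defined through a confidence band for the density rather than through moments:

$$
\mathcal{D} \;=\; \Big\{\, f \ :\ \underline{f}(\xi) \le f(\xi) \le \overline{f}(\xi) \ \ \forall \xi, \quad \int f(\xi)\, \mathrm{d}\xi = 1, \quad f \ \text{satisfies the assumed shape constraint (e.g. unimodality)} \,\Big\},
$$

where $[\underline{f}, \overline{f}]$ is a confidence band for the true density obtained from density estimation, so that $\mathcal{D}$ contains the true density with high probability.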
