Distributionally robust optimization (DRO) has been introduced for solving stochastic programs in which the distribution of the random parameters is unknown and must be estimated from samples. A key element of DRO is the construction of the ambiguity set, a set of distributions that covers the true distribution with high probability. Assuming that the true distribution has a probability density function, we propose a class of ambiguity sets based on confidence bands of the true density function. The use of the confidence band enables us to incorporate prior knowledge of the shape of the underlying density function (e.g., unimodality or monotonicity). Using the confidence band constructed by density estimation techniques as the ambiguity set, we establish the convergence of the optimal value of DRO to that of the stochastic program as the sample size increases. However, the resulting DRO problem is computationally intractable, as it involves functional decision variables as well as infinitely many constraints. To address this challenge, we use duality theory to reformulate it into a finite-dimensional stochastic program, which is amenable to a stochastic subgradient scheme as a solution method. We compare our approach with existing state-of-the-art DRO methods on the newsvendor problem and the portfolio management problem, and the numerical results showcase the advantage of our approach.
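To make the worst-case viewpoint concrete, here is a toy sketch of a distributionally robust newsvendor: instead of the paper's confidence-band ambiguity set, it minimizes the worst-case expected cost over a small finite family of candidate demand distributions. The cost parameters, candidate distributions, and grid search are all illustrative assumptions, not the abstract's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical newsvendor economics: unit order cost c, selling price p.
c, p = 1.0, 3.0

# A finite stand-in for an ambiguity set: samples from three candidate
# demand distributions (the paper instead uses a density confidence band).
candidates = [rng.uniform(20, 80, 500),
              rng.normal(50, 10, 500),
              rng.exponential(50, 500)]

def expected_cost(q, demand):
    # Pay c*q up front, earn p per unit actually sold (min of q and demand).
    return np.mean(c * q - p * np.minimum(q, demand))

def worst_case_cost(q):
    # Worst case over the (finite) ambiguity set.
    return max(expected_cost(q, d) for d in candidates)

# The decision is one-dimensional, so a grid search replaces the paper's
# stochastic subgradient scheme for this illustration.
grid = np.linspace(0.0, 120.0, 241)
q_star = grid[np.argmin([worst_case_cost(q) for q in grid])]
```

The same min-max structure carries over to the paper's setting; only the ambiguity set and the solution method differ.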
We propose kernel distributionally robust optimization (Kernel DRO) using insights from robust optimization theory and functional analysis. Our method uses reproducing kernel Hilbert spaces (RKHS) to construct a wide range of convex ambiguity sets
This paper studies distributionally robust optimization (DRO) when the ambiguity set is given by moments for the distributions. The objective and constraints are given by polynomials in decision variables. We reformulate the DRO with equivalent momen
We propose and analyze algorithms for distributionally robust optimization of convex losses with conditional value at risk (CVaR) and $\chi^2$ divergence uncertainty sets. We prove that our algorithms require a number of gradient evaluations independe
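A useful fact behind CVaR uncertainty sets is that the worst-case expected loss over such a set equals the CVaR of the per-sample losses, computable via the Rockafellar-Uryasev formula $\mathrm{CVaR}_\alpha(L) = \min_t \, t + \mathbb{E}[(L-t)_+]/\alpha$. The sketch below evaluates this formula on a tiny made-up loss vector; it illustrates the identity, not the paper's algorithms.

```python
import numpy as np

def cvar(losses, alpha):
    # Rockafellar-Uryasev: CVaR_a(L) = min_t t + E[(L - t)_+] / a.
    # The minimizer is a sample quantile (the VaR), so searching over
    # the sample points themselves suffices for an empirical distribution.
    ts = np.sort(losses)
    vals = ts + np.maximum(losses[None, :] - ts[:, None], 0).mean(axis=1) / alpha
    return vals.min()

losses = np.array([1.0, 2.0, 3.0, 4.0])
# With alpha = 0.5, CVaR is the mean of the worst half: (3 + 4) / 2 = 3.5.
assert abs(cvar(losses, 0.5) - 3.5) < 1e-9
```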
Inverse multiobjective optimization provides a general framework for the unsupervised learning task of inferring parameters of a multiobjective decision making problem (DMP), based on a set of observed decisions from the human expert. However, the pe
We present a distributionally robust formulation of a stochastic optimization problem for non-i.i.d. vector autoregressive data. We use the Wasserstein distance to define robustness in the space of distributions, and we show, using duality theory, that
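For intuition on the distance underlying Wasserstein ambiguity balls, the one-dimensional Wasserstein-1 distance between two equal-size empirical samples reduces to the mean absolute difference of their order statistics. This is a hedged illustration of the metric itself, not of the paper's formulation.

```python
import numpy as np

def w1(x, y):
    # Empirical 1-D Wasserstein-1 distance for equal-size samples:
    # match sorted points pairwise and average the transport cost.
    x, y = np.sort(x), np.sort(y)
    return np.abs(x - y).mean()

a = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 3.0])
# Shifting a sample by 1 moves it Wasserstein distance 1.
assert abs(w1(a, b) - 1.0) < 1e-9
```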