Specifying a proper input distribution is often a challenging task in simulation modeling. In practice, there may be multiple plausible distributions that fit the input data reasonably well, especially when the data volume is not large. In this paper, we consider the problem of selecting the best from a finite set of simulated alternatives in the presence of such input uncertainty. We model the uncertainty by an ambiguity set consisting of a finite number of plausible input distributions, and aim to select the alternative with the best worst-case mean performance over the ambiguity set. We refer to this problem as robust selection of the best (RSB). To solve the RSB problem, we develop a two-stage selection procedure and a sequential selection procedure; we then prove that both procedures can achieve at least a user-specified probability of correct selection under mild conditions. Extensive numerical experiments are conducted to investigate the computational efficiency of the two procedures. Finally, we apply the RSB approach to study a queueing-system staffing problem using synthetic data and an appointment-scheduling problem using real data from a large hospital in China. We find that the RSB approach can generate decisions significantly better than other widely used approaches.
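The selection criterion itself can be illustrated in a few lines. The sketch below is a naive, fixed-budget Monte Carlo version of the worst-case-mean criterion, not the two-stage or sequential procedures developed in the paper (which allocate simulation effort so that the probability-of-correct-selection guarantee holds); the `performance` function, the problem sizes, and the larger-is-better convention are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def performance(i, j, rng):
    """One simulation replication of alternative i under input distribution j.

    Placeholder for a real simulation model; the mean is made to depend on
    both the alternative and the input distribution.
    """
    return rng.normal(loc=1.0 * i - 0.3 * j, scale=1.0)

k, m, n = 5, 3, 1000                  # alternatives, input distributions, replications
means = np.empty((k, m))
for i in range(k):
    for j in range(m):
        means[i, j] = np.mean([performance(i, j, rng) for _ in range(n)])

worst_case = means.min(axis=1)        # worst-case estimated mean of each alternative
best = int(np.argmax(worst_case))     # RSB: pick the best worst-case performer
print(best, np.round(worst_case, 3))
```

In the procedures of the paper, the number of replications per alternative-distribution pair is not fixed in advance as above but determined adaptively to deliver the desired probability of correct selection.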
We study the problem of bounding path-dependent expectations (within any finite time horizon $d$) over the class of discrete-time martingales whose marginal distributions lie within a prescribed tolerance of a given collection of benchmark marginal distributions.
Conditional Value at Risk (CVaR) is widely used to account for the preferences of a risk-averse agent in extreme loss scenarios. We study the effectiveness of randomization in interdiction games with an interdictor that is both risk- and ambiguity-averse.
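For reference, CVaR at level $\alpha$ is the expected loss in the worst $(1-\alpha)$ tail, and a common empirical estimator simply averages the losses at or beyond the empirical VaR. The sketch below applies that estimator to synthetic lognormal losses; the loss model and the level $\alpha = 0.95$ are illustrative assumptions, unrelated to the interdiction game itself.

```python
import numpy as np

def empirical_cvar(losses, alpha=0.95):
    """Average of the losses at or beyond the empirical alpha-quantile (VaR)."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)      # empirical Value at Risk at level alpha
    return losses[losses >= var].mean()   # mean of the worst (1 - alpha) tail

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print(empirical_cvar(losses, alpha=0.95))
```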
We study a methodology to tackle the NASA Langley Uncertainty Quantification Challenge problem, based on an integration of robust optimization, more specifically a recent line of research known as distributionally robust optimization, and importance sampling.
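Importance sampling itself is standard: to estimate an expectation (often a small failure probability) under a nominal model $p$, draw samples from a proposal $q$ and reweight them by the likelihood ratio $p/q$. The sketch below is a generic illustration of that identity, not the methodology of the paper; the normal models, the shifted proposal, and the failure threshold are assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, loc, scale):
    """Density of a normal distribution with the given mean and standard deviation."""
    z = (x - loc) / scale
    return np.exp(-0.5 * z ** 2) / (scale * np.sqrt(2.0 * np.pi))

# Estimate the rare-event probability P(X > 4) for X ~ N(0, 1) under the
# nominal model p, using samples drawn from a proposal q centred on the
# failure region and reweighted by the likelihood ratio p(x) / q(x).
threshold = 4.0
x = rng.normal(loc=threshold, scale=1.0, size=10_000)      # samples from q
weights = norm_pdf(x, 0.0, 1.0) / norm_pdf(x, threshold, 1.0)
estimate = np.mean((x > threshold) * weights)
print(estimate)   # should be close to the exact tail probability, about 3.17e-5
```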
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications -- reflecting the need for classifiers and predictive models that are robust to distribution shift.
We propose kernel distributionally robust optimization (Kernel DRO) using insights from robust optimization theory and functional analysis. Our method uses reproducing kernel Hilbert spaces (RKHS) to construct a wide range of convex ambiguity sets.
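One common way to instantiate an RKHS-based ambiguity set is as a ball, around the empirical distribution, in maximum mean discrepancy (MMD). The sketch below computes a plug-in estimate of squared MMD with a Gaussian kernel and checks whether a candidate sample lies inside a ball of a given radius; the kernel, its bandwidth, the radius, and the MMD-ball construction itself are illustrative assumptions rather than the specific sets proposed in the paper.

```python
import numpy as np

def gaussian_gram(x, y, bandwidth=1.0):
    """Gram matrix of the Gaussian RKHS kernel k(x, y) = exp(-||x - y||^2 / (2 b^2))."""
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_squared(x, y, bandwidth=1.0):
    """Plug-in (biased) estimate of the squared MMD between the samples x and y."""
    return (gaussian_gram(x, x, bandwidth).mean()
            + gaussian_gram(y, y, bandwidth).mean()
            - 2.0 * gaussian_gram(x, y, bandwidth).mean())

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))                    # observed input data
candidate = rng.normal(loc=0.2, size=(200, 2))      # samples from a candidate distribution
radius = 0.1                                        # hypothetical ambiguity radius
print(mmd_squared(data, candidate) <= radius ** 2)  # is the candidate inside the MMD ball?
```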