Distributionally robust optimization (DRO) is a widely used framework for optimizing objective functionals in the presence of both randomness and model-form uncertainty. A key step in the practical solution of many DRO problems is a tractable reformulation of the optimization over the chosen model ambiguity set, which is generally infinite-dimensional. Previous works have solved this problem in the case where the objective functional is an expected value. In this paper we study objective functionals that are the sum of an expected value and a variance penalty term. We prove that the corresponding variance-penalized DRO problem over an $f$-divergence neighborhood can be reformulated as a finite-dimensional convex optimization problem. This result also provides tight uncertainty quantification bounds on the variance.
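The expected-value case that prior work reformulates can be illustrated for a KL-divergence ambiguity set (KL being one $f$-divergence), where the robust expectation admits the finite-dimensional Gibbs/Donsker-Varadhan dual $\sup\{E_Q[X] : \mathrm{KL}(Q\|P) \le \eta\} = \inf_{c>0}\, c \log E_P[e^{X/c}] + c\,\eta$. A minimal numerical sketch, assuming a Gaussian baseline model and radius $\eta = 0.1$ chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

# Baseline model P: N(1, 2^2); eta is the KL ambiguity-ball radius.
# (The Gaussian baseline and eta = 0.1 are assumptions for illustration.)
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=2.0, size=200_000)
eta = 0.1

def dual(c):
    # c * log E_P[exp(X / c)] + c * eta -- the one-dimensional dual of
    # sup { E_Q[X] : KL(Q || P) <= eta }, evaluated stably via log-sum-exp.
    return c * (logsumexp(samples / c) - np.log(samples.size)) + c * eta

res = minimize_scalar(dual, bounds=(1e-2, 50.0), method="bounded")
robust_mean = res.fun
print(robust_mean)  # upper bound on E_Q[X] over the KL ball; exceeds E_P[X] = 1
```

For this Gaussian baseline the dual has the closed-form optimum $\mu + \sigma\sqrt{2\eta} \approx 1.894$, which the Monte Carlo estimate recovers; the paper's contribution is an analogous finite-dimensional reformulation when a variance penalty is added to the objective.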
This paper expands the notion of robust moment problems to incorporate distributional ambiguity using Wasserstein distance as the ambiguity measure. The classical Chebyshev-Cantelli (zeroth partial moment) inequalities, Scarf and Lo (first partial mo
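The Chebyshev-Cantelli inequality mentioned above bounds a one-sided tail using only the mean and variance: $P(X - \mu \ge t) \le \sigma^2/(\sigma^2 + t^2)$. A quick numerical check, using an exponential distribution chosen only as an example:

```python
import numpy as np

# Exp(1) has mean 1 and variance 1; chosen purely as a test distribution.
rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=1_000_000)
mu, var = x.mean(), x.var()

t = 2.0
empirical_tail = np.mean(x - mu >= t)     # observed P(X - mu >= t)
cantelli = var / (var + t**2)             # one-sided Chebyshev-Cantelli bound
print(empirical_tail, cantelli)           # empirical tail stays below the bound
```

The bound ($\approx 0.2$ here) is distribution-free, so it is loose for any particular distribution; the robust moment problems discussed above tighten such bounds by optimizing over an ambiguity set instead.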
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications -- reflecting the need for classifiers and predictive models that are robust to the distributi
In this paper we solve the min-max problem arising in a robust exploratory mean-variance problem with drift uncertainty. We verify that robust investors choose the Sharpe ratio with minimal $L^2$ norm within an admissible set. A reinforcement learning framewo
Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution. In light of recent successes in training Generative Adversarial Networks, alternative non-l
Variational quantum eigensolver (VQE) typically minimizes energy with hybrid quantum-classical optimization, which aims to find the ground state. Here, we propose a VQE that minimizes the energy variance, which we call variance-VQE (VVQE). The VVQE can
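The variance objective $\langle H^2\rangle - \langle H\rangle^2$ vanishes exactly on eigenstates of $H$, which is what makes it usable as a cost function. A classical toy sketch of this idea, where the single-qubit Hamiltonian, the one-parameter ansatz, and the Nelder-Mead optimizer are all assumptions for illustration rather than the paper's circuit:

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-qubit Hamiltonian H = Z + 0.5 X (assumed for illustration).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def state(theta):
    # Hypothetical one-parameter real ansatz |psi> = (cos t, sin t).
    return np.array([np.cos(theta), np.sin(theta)])

def energy_variance(theta):
    psi = state(theta[0])
    e = psi @ H @ psi            # <H>
    e2 = psi @ (H @ H) @ psi     # <H^2>
    return e2 - e**2             # Var(H); zero exactly at eigenstates

res = minimize(energy_variance, x0=[0.3], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12})
psi_opt = state(res.x[0])
print(res.fun)  # ~0: the ansatz has converged to an eigenstate of H
```

Note that, unlike the energy objective, the variance objective is minimized by any eigenstate, not specifically the ground state; which one is reached depends on the initialization.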