Distributionally Robust Variance Minimization: Tight Variance Bounds over $f$-Divergence Neighborhoods


Abstract

Distributionally robust optimization (DRO) is a widely used framework for optimizing objective functionals in the presence of both randomness and model-form uncertainty. A key step in the practical solution of many DRO problems is a tractable reformulation of the optimization over the chosen model ambiguity set, which is generally infinite-dimensional. Prior work has solved this problem in the case where the objective functional is an expected value. In this paper we study objective functionals that are the sum of an expected value and a variance penalty term. We prove that the corresponding variance-penalized DRO problem over an $f$-divergence neighborhood can be reformulated as a finite-dimensional convex optimization problem. This result also yields tight uncertainty quantification bounds on the variance.
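To make the setting concrete, the problem described above can be sketched as follows; the notation ($g$ for the quantity of interest, $\lambda$ for the penalty weight, $\varepsilon$ for the neighborhood radius, $D_f$ for the $f$-divergence) is illustrative and not necessarily the paper's own:

\[
\sup_{Q \,:\, D_f(Q \,\|\, P) \le \varepsilon} \Big\{ \mathbb{E}_Q[g(X)] + \lambda \, \mathrm{Var}_Q[g(X)] \Big\}, \qquad \lambda > 0,
\]

where $P$ is the baseline (nominal) model and the ambiguity set is the $f$-divergence ball of radius $\varepsilon$ around $P$. The reformulation result stated in the abstract replaces this infinite-dimensional supremum over probability measures $Q$ with a finite-dimensional convex program.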