
Distributionally Robust Variance Minimization: Tight Variance Bounds over $f$-Divergence Neighborhoods

Posted by: Jeremiah Birrell
Publication date: 2020
Language: English
Author: Jeremiah Birrell

Distributionally robust optimization (DRO) is a widely used framework for optimizing objective functionals in the presence of both randomness and model-form uncertainty. A key step in the practical solution of many DRO problems is a tractable reformulation of the optimization over the chosen model ambiguity set, which is generally infinite dimensional. Previous works have solved this problem in the case where the objective functional is an expected value. In this paper we study objective functionals that are the sum of an expected value and a variance penalty term. We prove that the corresponding variance-penalized DRO problem over an $f$-divergence neighborhood can be reformulated as a finite-dimensional convex optimization problem. This result also provides tight uncertainty quantification bounds on the variance.
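
As an illustration of the setup (notation assumed here, not taken verbatim from the paper), the variance-penalized DRO problem over an $f$-divergence neighborhood of a baseline model $P$ can be written as
$$ \sup_{Q\,:\,D_f(Q\|P)\le \eta}\Big\{ E_Q[g(X)] + \lambda\,\mathrm{Var}_Q[g(X)] \Big\}, \qquad \lambda \ge 0, $$
where $D_f(Q\|P)=E_P[f(dQ/dP)]$ and $\eta>0$ fixes the size of the ambiguity set. The supremum ranges over an infinite-dimensional family of distributions $Q$; the paper's main result is that it admits a finite-dimensional convex reformulation, which in turn yields the tight variance bounds referred to in the title.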


Read also

This paper expands the notion of robust moment problems to incorporate distributional ambiguity using Wasserstein distance as the ambiguity measure. The classical Chebyshev-Cantelli (zeroth partial moment) inequalities, Scarf and Lo (first partial moment) bounds, and semideviation (second partial moment) in one dimension are investigated. The infinite-dimensional primal problems are formulated and the simpler finite-dimensional dual problems are derived. A principal motivating question is how data-driven distributional ambiguity affects the moment bounds. Towards answering this question, some theory is developed and computational experiments are conducted for specific problem instances in inventory control and portfolio management. Finally, some open questions and suggestions for future research are discussed.
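
For reference, the zeroth-partial-moment (Cantelli) bound discussed above states that any random variable $X$ with mean $\mu$ and variance $\sigma^2$ satisfies
$$ P(X-\mu \ge t) \le \frac{\sigma^2}{\sigma^2+t^2}, \qquad t>0; $$
the paper asks how such bounds change when the underlying distribution is only known to lie within a Wasserstein ball around a data-driven reference measure.
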
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications -- reflecting the need for classifiers and predictive models that are robust to the distribution shifts that arise from phenomena such as selection bias or nonstationarity. Existing algorithms for solving Wasserstein DRSL -- one of the most popular DRSL frameworks based around robustness to perturbations in the Wasserstein distance -- involve solving complex subproblems or fail to make use of stochastic gradients, limiting their use in large-scale machine learning problems. We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable stochastic extra-gradient algorithms which provably achieve faster convergence rates than existing approaches. We demonstrate their effectiveness on synthetic and real data when compared to existing DRSL approaches. Key to our results is the use of variance reduction and random reshuffling to accelerate stochastic min-max optimization, the analysis of which may be of independent interest.
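
For context, a generic stochastic extra-gradient iteration (a sketch of the standard template, not the authors' specific variance-reduced scheme) for a min-max problem $\min_\theta\max_w \Phi(\theta,w)$ first takes an extrapolation step and then updates at the extrapolated point:
$$ \tilde\theta_k = \theta_k - \gamma\,\widehat\nabla_\theta \Phi(\theta_k,w_k), \qquad \tilde w_k = w_k + \gamma\,\widehat\nabla_w \Phi(\theta_k,w_k), $$
$$ \theta_{k+1} = \theta_k - \gamma\,\widehat\nabla_\theta \Phi(\tilde\theta_k,\tilde w_k), \qquad w_{k+1} = w_k + \gamma\,\widehat\nabla_w \Phi(\tilde\theta_k,\tilde w_k), $$
where $\widehat\nabla$ denotes a stochastic gradient estimate; the contribution described above is to accelerate this template with variance reduction and random reshuffling.
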
In this paper we solve a min-max problem arising in a robust exploratory mean-variance problem with drift uncertainty. It is verified that robust investors choose the Sharpe ratio with minimal $L^2$ norm in an admissible set. A reinforcement learning framework in the mean-variance problem provides an exploration-exploitation trade-off mechanism; if we additionally consider model uncertainty, the robust strategy puts more weight on exploitation than on exploration and thus reflects a more conservative optimization scheme. Finally, we use financial data to backtest the performance of the robust exploratory investment and find that the robust strategy can outperform the purely exploratory strategy and resist the downside risk in a bear market.
Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution. In light of recent successes in training Generative Adversarial Networks, alternative non-likelihood training criteria have been proposed. Whilst not necessarily statistically efficient, these alternatives may better match user requirements such as sharp image generation. A general variational method for training probabilistic latent variable models using maximum likelihood is well established; however, how to train latent variable models using other f-divergences is comparatively unknown. We discuss a variational approach that, when combined with the recently introduced Spread Divergence, can be applied to train a large class of latent variable models using any f-divergence.
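
Recall the standard definition (not specific to this paper): for a convex function $f$ with $f(1)=0$, the $f$-divergence between densities $p$ and $q$ is
$$ D_f(p\|q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right)dx, $$
which reduces to the KL divergence for $f(t)=t\log t$; maximum likelihood training corresponds to minimizing $D_{\mathrm{KL}}(p_{\mathrm{data}}\|p_{\mathrm{model}})$, while the variational approach described above targets other choices of $f$.
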
The variational quantum eigensolver (VQE) typically minimizes energy with hybrid quantum-classical optimization, which aims to find the ground state. Here, we propose a VQE that minimizes the energy variance, which we call the variance-VQE (VVQE). The VVQE can be viewed as a self-verifying eigensolver for an arbitrary eigenstate by design, since an eigenstate of a Hamiltonian must have zero energy variance. We demonstrate the properties and advantages of VVQE for solving a set of excited states in quantum chemistry problems. Remarkably, we show that optimizing a combination of energy and variance may be more efficient for finding low-energy excited states than minimizing energy or variance alone. We further reveal that the optimization can be boosted with stochastic gradient descent by Hamiltonian sampling, which uses only a few terms of the Hamiltonian and thus significantly reduces the quantum resources needed to evaluate the variance and its gradients.
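
The quantity minimized by VVQE is the energy variance of the trial state $|\psi(\theta)\rangle$ with respect to the Hamiltonian $H$ (a standard identity, stated here for clarity):
$$ \mathrm{Var}_{\psi(\theta)}(H) = \langle\psi(\theta)|H^2|\psi(\theta)\rangle - \langle\psi(\theta)|H|\psi(\theta)\rangle^2 \;\ge\; 0, $$
with equality if and only if $|\psi(\theta)\rangle$ is an eigenstate of $H$, which is why a zero-variance solution certifies an eigenstate without requiring its energy to be known in advance.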