
Distributional Robustness and Uncertainty Quantification for Rare Events

Added by Jeremiah Birrell
Publication date: 2019
Language: English

Rare events, and more general risk-sensitive quantities of interest (QoIs), are significantly impacted by uncertainty in the tail behavior of a distribution. Uncertainty in the tail can take many different forms, each of which leads to a particular ambiguity set of alternative models. Distributional robustness bounds over such an ambiguity set constitute a stress test of the model. In this paper we develop a method, utilizing Rényi divergences, for constructing ambiguity sets that capture a user-specified form of tail perturbation. We then obtain distributional robustness bounds (performance guarantees) for risk-sensitive QoIs over these ambiguity sets, using the known connection between Rényi divergences and robustness for risk-sensitive QoIs. We also expand on this connection in several ways, including a generalization of the Donsker-Varadhan variational formula to Rényi divergences, and various tightness results. These ideas are illustrated through applications to uncertainty quantification in a model of lithium-ion battery failure, robustness of large deviations rate functions, and risk-sensitive distributionally robust optimization for option pricing.
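
For orientation, the known connection invoked above is often stated as a Hölder-type bound relating a risk-sensitive expectation under an alternative model P to one under a baseline Q. One standard form (the notation here is illustrative and not necessarily the paper's) reads, for $\alpha > 1$,

$$\log \mathbb{E}_P\big[e^{F}\big] \le \frac{\alpha-1}{\alpha}\, R_\alpha(P\|Q) + \frac{\alpha-1}{\alpha}\, \log \mathbb{E}_Q\big[e^{\frac{\alpha}{\alpha-1} F}\big],$$

where $R_\alpha(P\|Q) = \frac{1}{\alpha-1}\log \mathbb{E}_Q\big[(dP/dQ)^\alpha\big]$ is the Rényi divergence of order $\alpha$. The bound follows from Hölder's inequality applied to $\mathbb{E}_Q[(dP/dQ)\, e^{F}]$, and constraining $R_\alpha(P\|Q) \le \eta$ defines an ambiguity set over which the right-hand side is a performance guarantee.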



Related research

We present a general framework for uncertainty quantification that is a mosaic of interconnected models. We define global first- and second-order structural and correlative sensitivity analyses for random counting measures acting on risk functionals of input-output maps. These are the ANOVA decomposition of the intensity measure and the decomposition of the random measure variance, each into subspaces. Orthogonal random measures furnish sensitivity distributions. We show that the random counting measure may be used to construct positive random fields, which admit decompositions of covariance and sensitivity indices and may be used to represent interacting particle systems. The first- and second-order global sensitivity analyses conveyed through random counting measures elucidate and integrate different notions of uncertainty quantification, and the global sensitivity analysis of random fields conveys the proportionate functional contributions to covariance. This framework complements others, for instance algorithmic-uncertainty and model-selection-uncertainty frameworks, when used in conjunction with them.
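
As a small generic illustration of first-order global sensitivity analysis for an input-output map, the sketch below computes classical Sobol/ANOVA first-order indices by pick-freeze Monte Carlo. It is a toy stand-in, assuming an Ishigami-type test function and arbitrary sample sizes, not the random-counting-measure construction of the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    def model(x):
        # Ishigami-type test function; stand-in for a risk functional of an input-output map
        return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

    n, d = 100_000, 3
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # ABi shares only the "frozen" coordinate i with B
        # Saltelli-type estimator of S_i = Var(E[f | X_i]) / Var(f)
        Si = np.mean(fB * (model(ABi) - fA)) / var
        print(f"first-order index S_{i + 1} ~= {Si:.3f}")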
Information-theory based variational principles have proven effective at providing scalable uncertainty quantification (i.e. robustness) bounds for quantities of interest in the presence of nonparametric model-form uncertainty. In this work, we combine such variational formulas with functional inequalities (Poincaré, log-Sobolev, Lyapunov functions) to derive explicit uncertainty quantification bounds for time-averaged observables, comparing a Markov process to a second (not necessarily Markov) process. These bounds are well-behaved in the infinite-time limit and apply to steady states of both discrete and continuous-time Markov processes.
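
For orientation, the basic variational bound such arguments build on (stated here in static form; the paper's contribution is the time-averaged, path-space version whose constants are controlled by the functional inequalities) is

$$\pm\big(\mathbb{E}_Q[f] - \mathbb{E}_P[f]\big) \le \inf_{c>0} \frac{1}{c}\Big(\log \mathbb{E}_P\big[e^{\pm c\,(f - \mathbb{E}_P[f])}\big] + R(Q\|P)\Big),$$

where $R(Q\|P)$ denotes relative entropy; Poincaré or log-Sobolev inequalities for the baseline process are then used to bound the cumulant-generating-function term uniformly in time.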
This work affords new insights into Bayesian CART in the context of structured wavelet shrinkage. The main thrust is to develop a formal inferential framework for Bayesian tree-based regression. We reframe Bayesian CART as a g-type prior which departs from the typical wavelet product priors by harnessing correlation induced by the tree topology. The practically used Bayesian CART priors are shown to attain adaptive near rate-minimax posterior concentration in the supremum norm in regression models. For the fundamental goal of uncertainty quantification, we construct adaptive confidence bands for the regression function with uniform coverage under self-similarity. In addition, we show that tree-posteriors enable optimal inference in the form of efficient confidence sets for smooth functionals of the regression function.
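
As a generic illustration of simultaneous (sup-norm) uncertainty bands built from posterior samples, the sketch below reads a 95% band off Monte Carlo draws. It assumes simulated placeholder draws rather than output of a Bayesian CART sampler, and it is not the paper's adaptive construction under self-similarity:

    import numpy as np

    rng = np.random.default_rng(1)
    grid = np.linspace(0.0, 1.0, 200)
    # placeholder posterior draws of the regression function on a grid
    # (in practice these would come from a tree-posterior / wavelet-shrinkage sampler)
    f_draws = np.sin(2 * np.pi * grid) + rng.standard_normal((2000, grid.size)).cumsum(axis=1) * 0.1 / np.sqrt(grid.size)

    f_hat = f_draws.mean(axis=0)
    sup_dev = np.max(np.abs(f_draws - f_hat), axis=1)  # sup-norm deviation of each draw
    radius = np.quantile(sup_dev, 0.95)                # radius of a 95% simultaneous band
    lower, upper = f_hat - radius, f_hat + radius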
Rui Tuo, Wenjia Wang (2020)
Bayesian optimization is a class of global optimization techniques. It regards the underlying objective function as a realization of a Gaussian process. Although the outputs of Bayesian optimization are random according to the Gaussian process assumption, quantification of this uncertainty is rarely studied in the literature. In this work, we propose a novel approach to assess the output uncertainty of Bayesian optimization algorithms, in terms of constructing confidence regions of the maximum point or value of the objective function. These regions can be computed efficiently, and their confidence levels are guaranteed by newly developed uniform error bounds for sequential Gaussian process regression. Our theory provides a unified uncertainty quantification framework for all existing sequential sampling policies and stopping criteria.
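
A minimal sketch of this idea, assuming a toy objective, an RBF kernel, and an illustrative confidence multiplier beta in place of the paper's uniform error bounds: a confidence region for the maximum point keeps every candidate whose upper confidence bound is at least the best lower confidence bound.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    f = lambda x: -(x - 0.3) ** 2  # toy objective with maximizer at x = 0.3
    rng = np.random.default_rng(2)
    X = rng.uniform(0.0, 1.0, (8, 1))
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X, f(X).ravel())

    grid = np.linspace(0.0, 1.0, 501).reshape(-1, 1)
    mu, sd = gp.predict(grid, return_std=True)
    beta = 2.0  # illustrative stand-in for a rigorous uniform error-bound multiplier
    ucb, lcb = mu + beta * sd, mu - beta * sd
    # candidates that could still be the maximizer at this confidence level
    region = grid[ucb >= lcb.max()]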
Quantifying the impact of parametric and model-form uncertainty on the predictions of stochastic models is a key challenge in many applications. Previous work has shown that the relative entropy rate is an effective tool for deriving path-space uncertainty quantification (UQ) bounds on ergodic averages. In this work we identify appropriate information-theoretic objects for a wider range of quantities of interest on path-space, such as hitting times and exponentially discounted observables, and develop the corresponding UQ bounds. In addition, our method yields tighter UQ bounds, even in cases where previous relative-entropy-based methods also apply, e.g., for ergodic averages. We illustrate these results with examples from option pricing, non-reversible diffusion processes, stochastic control, semi-Markov queueing models, and expectations and distributions of hitting times.
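
To convey the flavor of such relative-entropy bounds in the simplest static setting, the sketch below evaluates the bound numerically for a Gaussian baseline and a call-option-like payoff and compares it with the true bias; the models and parameters are illustrative, and no path-space structure is involved:

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(3)

    # baseline P = N(0, 1); alternative Q = N(0.3, 1), so R(Q||P) = 0.3**2 / 2 exactly
    kl = 0.3 ** 2 / 2
    payoff = lambda x: np.maximum(x - 1.0, 0.0)  # call-option-like quantity of interest
    fP = payoff(rng.normal(0.0, 1.0, 1_000_000))

    def ub(c):
        # (1/c) * ( log E_P[ exp(c (f - E_P[f])) ] + R(Q||P) ), valid for every c > 0
        return (np.log(np.mean(np.exp(c * (fP - fP.mean())))) + kl) / c

    opt = minimize_scalar(ub, bounds=(1e-3, 5.0), method="bounded")
    bias = payoff(rng.normal(0.3, 1.0, 1_000_000)).mean() - fP.mean()
    print(f"UQ upper bound: {opt.fun:.4f}; actual E_Q[f] - E_P[f]: {bias:.4f}")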