
Efficient risk estimation via nested multilevel quasi-Monte Carlo simulation

Posted by Zhijian He
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We consider the problem of estimating the probability of a large loss from a financial portfolio, where the future loss is expressed as a conditional expectation. Since the conditional expectation is intractable in most cases, one may resort to nested simulation. To reduce the complexity of nested simulation, we present a method that combines multilevel Monte Carlo (MLMC) and quasi-Monte Carlo (QMC). In the outer simulation, we use Monte Carlo to generate financial scenarios. In the inner simulation, we use QMC to estimate the portfolio loss in each scenario. We prove that using QMC can accelerate the convergence rates in both the crude nested simulation and the multilevel nested simulation. Under certain conditions, the complexity of MLMC can be reduced to $O(\epsilon^{-2}(\log \epsilon)^2)$ by incorporating QMC. On the other hand, we find that MLMC encounters a catastrophic coupling problem due to the existence of indicator functions. To remedy this, we propose a smoothed MLMC method which uses logistic sigmoid functions to approximate indicator functions. Numerical results show that the optimal complexity $O(\epsilon^{-2})$ is almost attained when using QMC methods in both MLMC and smoothed MLMC, even in moderately high dimensions.
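To make the construction concrete, the sketch below illustrates the outer Monte Carlo / inner QMC structure and the logistic-sigmoid smoothing of the indicator on a toy model; it is not the authors' code. The model $X = Y + Z$ with standard normal $Y$ and $Z$, the threshold `tau`, the smoothing width `delta`, and the function name are assumptions chosen for illustration, and the same scrambled Sobol' points are reused across scenarios only for brevity.

```python
# Minimal sketch (assumed toy model): nested simulation with an outer MC loop,
# an inner randomized QMC (scrambled Sobol') estimate of E[X | Y], and a
# logistic-sigmoid smoothing of the indicator 1{E[X|Y] >= tau}.
import numpy as np
from scipy.stats import norm, qmc

def nested_qmc_probability(n_outer=1024, n_inner=256, tau=1.0, delta=0.05, seed=0):
    """Estimate P(E[X | Y] >= tau) for the toy model X = Y + Z, Y, Z ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(n_outer)                # outer scenarios (plain Monte Carlo)
    sobol = qmc.Sobol(d=1, scramble=True, seed=seed)
    z = norm.ppf(sobol.random(n_inner).ravel())     # inner samples via scrambled Sobol' points
    p_hard = p_smooth = 0.0
    for yi in y:
        inner_mean = np.mean(yi + z)                # QMC estimate of E[X | Y = yi]
        p_hard += float(inner_mean >= tau)                              # plain indicator
        p_smooth += 1.0 / (1.0 + np.exp(-(inner_mean - tau) / delta))   # logistic smoothing
    return p_hard / n_outer, p_smooth / n_outer

# For this toy model E[X | Y] = Y, so the exact probability is 1 - Phi(tau).
print(nested_qmc_probability(), "exact:", 1.0 - norm.cdf(1.0))
```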




Read also

We investigate the problem of computing a nested expectation of the form $\mathbb{P}[\mathbb{E}[X|Y] \geq 0] = \mathbb{E}[\mathrm{H}(\mathbb{E}[X|Y])]$, where $\mathrm{H}$ is the Heaviside function. This nested expectation appears, for example, when estimating the probability of a large loss from a financial portfolio. We present a method that combines the idea of using Multilevel Monte Carlo (MLMC) for nested expectations with the idea of adaptively selecting the number of samples in the approximation of the inner expectation, as proposed by Broadie et al. (2011). We propose and analyse an algorithm that adaptively selects the number of inner samples on each MLMC level and prove that the resulting MLMC method with adaptive sampling has an $\mathcal{O}\left(\varepsilon^{-2}|\log\varepsilon|^2\right)$ complexity to achieve a root mean-squared error $\varepsilon$. The theoretical analysis is verified by numerical experiments on a simple model problem. We also present a stochastic root-finding algorithm that, combined with our adaptive methods, can be used to compute other risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), with the latter being achieved with $\mathcal{O}\left(\varepsilon^{-2}\right)$ complexity.
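The plain (non-adaptive) multilevel coupling that such a method builds on can be sketched as follows. This is a minimal illustration on an assumed toy model $X = Y + Z$ with standard normal $Y$ and $Z$ (so the exact value is $0.5$); it is not the adaptive algorithm of the paper, and the per-level inner sample counts and function names are assumptions.

```python
# Minimal sketch of the standard MLMC coupling for a nested expectation
# E[H(E[X|Y])]: level l uses m0 * 2^l inner samples, and the coarse estimator
# reuses the two halves of the fine inner sample set (antithetic coupling).
import numpy as np

def mlmc_level_correction(level, n_outer, m0=4, tau=0.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    m_fine = m0 * 2**level
    y = rng.standard_normal(n_outer)               # outer scenarios
    z = rng.standard_normal((n_outer, m_fine))     # inner samples; toy model X = Y + Z
    x = y[:, None] + z
    fine = (x.mean(axis=1) >= tau).astype(float)   # Heaviside of the fine inner mean
    if level == 0:
        return fine.mean()
    half = m_fine // 2
    coarse_a = (x[:, :half].mean(axis=1) >= tau).astype(float)
    coarse_b = (x[:, half:].mean(axis=1) >= tau).astype(float)
    return (fine - 0.5 * (coarse_a + coarse_b)).mean()   # level-l correction

# Summing the level corrections gives the MLMC estimate of P(E[X|Y] >= 0) = 0.5.
rng = np.random.default_rng(0)
estimate = sum(mlmc_level_correction(l, n_outer=20000, rng=rng) for l in range(5))
print(estimate)
```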
Zhijian He, 2019
Conditional value at risk (CVaR) is a popular measure for quantifying portfolio risk. Sensitivity analysis of CVaR is very useful in risk management and gradient-based optimization algorithms. In this paper, we study the infinitesimal perturbation analysis estimator for CVaR sensitivity using randomized quasi-Monte Carlo (RQMC) simulation. We first prove that the RQMC-based estimator is strongly consistent under very mild conditions. Under some technical conditions, RQMC that uses $d$-dimensional points in CVaR sensitivity estimation yields a mean error rate of $O(n^{-1/2-1/(4d-2)+\epsilon})$ for arbitrarily small $\epsilon>0$. The numerical results show that the RQMC method performs better than the Monte Carlo method for all cases. The gain of plain RQMC deteriorates as the dimension $d$ increases, as predicted by the established theoretical error rate.
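A minimal sketch of an IPA-style CVaR sensitivity estimate driven by scrambled Sobol' (RQMC) points is shown below; the loss model $L(\theta, U) = \theta\,\Phi^{-1}(U)$, the sample size, and the function name are assumptions for illustration, not the paper's estimator.

```python
# Minimal sketch (assumed toy loss): CVaR and its pathwise (IPA-style)
# sensitivity in theta, estimated from scrambled Sobol' points.
import numpy as np
from scipy.stats import norm, qmc

def cvar_and_sensitivity(theta=2.0, alpha=0.95, n=2**14, seed=1):
    u = qmc.Sobol(d=1, scramble=True, seed=seed).random(n).ravel()
    dloss_dtheta = norm.ppf(u)                 # pathwise derivative dL/dtheta
    loss = theta * dloss_dtheta                # toy loss L(theta, U) = theta * Phi^{-1}(U)
    var = np.quantile(loss, alpha)             # Value-at-Risk at level alpha
    tail = loss >= var
    cvar = loss[tail].mean()                   # CVaR estimate (average tail loss)
    dcvar_dtheta = dloss_dtheta[tail].mean()   # IPA-style sensitivity estimate
    return cvar, dcvar_dtheta

# Exact dCVaR/dtheta for this toy loss is phi(z_0.95)/0.05, roughly 2.06.
print(cvar_and_sensitivity())
```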
Stochastic PDE eigenvalue problems often arise in the field of uncertainty quantification, whereby one seeks to quantify the uncertainty in an eigenvalue, or its eigenfunction. In this paper we present an efficient multilevel quasi-Monte Carlo (MLQMC) algorithm for computing the expectation of the smallest eigenvalue of an elliptic eigenvalue problem with stochastic coefficients. Each sample evaluation requires the solution of a PDE eigenvalue problem, and so tackling this problem in practice is notoriously computationally difficult. We speed up the approximation of this expectation in four ways: 1) we use a multilevel variance reduction scheme to spread the work over a hierarchy of FE meshes and truncation dimensions; 2) we use QMC methods to efficiently compute the expectations on each level; 3) we exploit the smoothness in parameter space and reuse the eigenvector from a nearby QMC point to reduce the number of iterations of the eigensolver; and 4) we utilise a two-grid discretisation scheme to obtain the eigenvalue on the fine mesh with a single linear solve. The full error analysis of a basic MLQMC algorithm is given in the companion paper [Gilbert and Scheichl, 2021], and so in this paper we focus on how to further improve the efficiency and provide theoretical justification of the enhancement strategies 3) and 4). Numerical results are presented that show the efficiency of our algorithm, and also show that the four strategies we employ are complementary.
We propose a novel $hp$-multilevel Monte Carlo method for the quantification of uncertainties in the compressible Navier-Stokes equations, using the Discontinuous Galerkin method as deterministic solver. The multilevel approach exploits hierarchies of uniformly refined meshes while simultaneously increasing the polynomial degree of the ansatz space. It allows for a very large range of resolutions in the physical space and thus an efficient decrease of the statistical error. We prove that the overall complexity of the $hp$-multilevel Monte Carlo method to compute the mean field with prescribed accuracy is, in the best case, of quadratic order with respect to the accuracy. We also propose a novel and simple approach to estimate a lower confidence bound for the optimal number of samples per level, which helps to prevent overestimating these quantities. The method is in particular designed for application on queue-based computing systems, where it is desirable to compute a large number of samples during one iteration, without overestimating the optimal number of samples. Our theoretical results are verified by numerical experiments for the two-dimensional compressible Navier-Stokes equations. In particular we consider a cavity flow problem from computational acoustics, demonstrating that the method is suitable to handle complex engineering problems.
In this work we develop a new hierarchical multilevel approach to generate Gaussian random field realizations in an algorithmically scalable manner that is well-suited to incorporate into multilevel Markov chain Monte Carlo (MCMC) algorithms. This approach builds off of other partial differential equation (PDE) approaches for generating Gaussian random field realizations; in particular, a single field realization may be formed by solving a reaction-diffusion PDE with a spatial white noise source function as the right-hand side. While these approaches have been explored to accelerate forward uncertainty quantification tasks, e.g. multilevel Monte Carlo, the previous constructions are not directly applicable to multilevel MCMC frameworks which build fine scale random fields in a hierarchical fashion from coarse scale random fields. Our new hierarchical multilevel method relies on a hierarchical decomposition of the white noise source function in $L^2$ which allows us to form Gaussian random field realizations across multiple levels of discretization in a way that fits into multilevel MCMC algorithmic frameworks. After presenting our main theoretical results and numerical scaling results to showcase the utility of this new hierarchical PDE method for generating Gaussian random field realizations, this method is tested on a four-level MCMC algorithm to explore its feasibility.
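The base construction mentioned here, generating a single Gaussian field realization by solving a reaction-diffusion equation with a white-noise right-hand side, can be sketched in one dimension as follows; the grid, boundary conditions, and parameter values are assumptions, and the paper's hierarchical white-noise decomposition is not reproduced.

```python
# Minimal 1-D sketch of the SPDE approach to Gaussian field sampling:
# solve (kappa^2 - Laplacian) u = W with discretized spatial white noise W
# on a finite-difference grid with homogeneous Dirichlet boundary conditions.
import numpy as np

def sample_grf_1d(n=512, kappa=10.0, seed=0):
    rng = np.random.default_rng(seed)
    h = 1.0 / (n + 1)
    # Symmetric tridiagonal operator kappa^2*I - d^2/dx^2 on the interior grid.
    main = np.full(n, kappa**2 + 2.0 / h**2)
    off = np.full(n - 1, -1.0 / h**2)
    K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    w = rng.standard_normal(n) / np.sqrt(h)   # discretized spatial white noise
    return np.linalg.solve(K, w)              # one Gaussian field realization

field = sample_grf_1d()
print(field.shape, field.std())
```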