Quasi-Monte Carlo (QMC) methods are designed for integrands of bounded variation, which excludes singular integrands. Several methods are known for integrands that become singular on the boundary of the unit cube $[0,1]^d$ or at isolated, possibly unknown, points within $[0,1]^d$. Here we consider functions on the square $[0,1]^2$ that may become singular as the point approaches the diagonal line $x_1=x_2$, and we study three quadrature methods. The first method splits the square into two triangles separated by a region around the line of singularity and applies recently developed triangle QMC rules to the two triangular parts. For functions with a singularity no worse than $|x_1-x_2|^{-A}$ for $0<A<1$, that method yields an error of $O((\log(n)/n)^{(1-A)/2})$. We also consider methods that extend the integrand into a region containing the singularity, and we show that this approach does not improve upon using two triangles. Finally, we consider transforming the integrand so that its singularity lies along the boundary of the square, where it is more QMC-friendly. Combined with corner-avoiding Halton points or with randomized QMC, this leads to error rates of $O(n^{-1+\epsilon+A})$, but it requires stronger assumptions on the original singular integrand.
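As a rough illustration of the triangle-splitting idea (not the triangle QMC rule analyzed in this abstract), the following Python sketch maps scrambled Sobol' points onto the two triangles on either side of a thin band around the diagonal; the band width delta, the test integrand $|x_1-x_2|^{-A}$, and the use of scipy's scrambled Sobol' generator are our own choices for illustration.

```python
# Illustrative sketch only: scrambled Sobol' points are mapped onto the two
# triangles {x2 <= x1 - delta} and {x2 >= x1 + delta}; the band of width delta
# around the singular diagonal (contributing O(delta^{1-A})) is simply ignored.
import numpy as np
from scipy.stats import qmc

A, delta, m = 0.5, 1e-6, 14                     # exponent, band width, 2^m points
f = lambda x1, x2: np.abs(x1 - x2) ** (-A)      # model singular integrand

u = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(m)
a, b = np.maximum(u[:, 0], u[:, 1]), np.minimum(u[:, 0], u[:, 1])  # uniform on {b <= a}

# affine map onto the lower triangle with vertices (delta,0), (1,0), (1,1-delta)
x1, x2 = delta + (1 - delta) * a, (1 - delta) * b
area = 0.5 * (1 - delta) ** 2
lower = area * f(x1, x2).mean()
upper = area * f(x2, x1).mean()                 # reflected points cover the upper triangle

print("two-triangle estimate:", lower + upper)
print("exact value          :", 2.0 / ((1 - A) * (2 - A)))
```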
Conditional value at risk (CVaR) is a popular measure for quantifying portfolio risk. Sensitivity analysis of CVaR is very useful in risk management and in gradient-based optimization algorithms. In this paper, we study the infinitesimal perturbation analysis (IPA) estimator for CVaR sensitivity using randomized quasi-Monte Carlo (RQMC) simulation. We first prove that the RQMC-based estimator is strongly consistent under very mild conditions. Under some technical conditions, RQMC using $d$-dimensional points in CVaR sensitivity estimation yields a mean error rate of $O(n^{-1/2-1/(4d-2)+\epsilon})$ for arbitrarily small $\epsilon>0$. Numerical results show that the RQMC method performs better than the Monte Carlo method in all cases. The gain of plain RQMC deteriorates as the dimension $d$ increases, as predicted by the established theoretical error rate.
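For readers unfamiliar with the IPA form of the CVaR sensitivity, the sketch below applies it with scrambled Sobol' input points to a toy loss $L(\theta)=\theta Z$, $Z\sim N(0,1)$, which is our own example (not from the paper) and has the known sensitivity $\phi(z_\alpha)/(1-\alpha)$ for checking.

```python
# Hedged sketch of the IPA estimator for dCVaR/dtheta driven by RQMC points.
# The toy loss L(theta) = theta * Z with Z ~ N(0,1) is an illustrative choice.
import numpy as np
from scipy.stats import norm, qmc

theta, alpha, m = 2.0, 0.95, 14
u = qmc.Sobol(d=1, scramble=True, seed=1).random_base2(m).ravel()
z = norm.ppf(u)                       # map RQMC points to normal variates
loss = theta * z                      # simulated losses L(theta, Z)
dloss = z                             # pathwise derivative dL/dtheta

var_hat = np.quantile(loss, alpha)    # empirical VaR at level alpha
ipa = dloss[loss >= var_hat].mean()   # IPA estimate of dCVaR/dtheta
print("RQMC-IPA estimate:", ipa)
print("true sensitivity :", norm.pdf(norm.ppf(alpha)) / (1 - alpha))
```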
This paper studies randomized quasi-Monte Carlo (QMC) sampling for discontinuous integrands having singularities along the boundary of the unit cube $[0,1]^d$. Both discontinuities and singularities are extremely common in the pricing and hedging of financial derivatives and have a tremendous impact on the accuracy of QMC. It was previously known that the root mean square error of randomized QMC is only $o(n^{-1/2})$ for discontinuous functions with singularities. We find that, under some mild conditions, randomized QMC yields an expected error of $O(n^{-1/2-1/(4d-2)+\epsilon})$ for arbitrarily small $\epsilon>0$. Moreover, a better rate can be obtained if the boundary of discontinuity is parallel to some of the coordinate axes. As a by-product, we find that the expected error rate attains $O(n^{-1+\epsilon})$ if the discontinuities are QMC-friendly, in the sense that all discontinuity boundaries are parallel to coordinate axes. The results can be used to assess the QMC accuracy for some typical problems from financial engineering.
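The notion of a QMC-friendly discontinuity can be made concrete with a small experiment of our own (not from the paper): compare the RQMC error for an indicator whose boundary is parallel to an axis with one whose boundary is diagonal, both with integral $1/2$.

```python
# Toy comparison: RQMC error for an axis-parallel ("QMC-friendly") indicator
# versus a diagonal one, estimated over independent scramblings of a Sobol' net.
import numpy as np
from scipy.stats import qmc

m, reps = 13, 20
friendly   = lambda x: (x[:, 0] <= 0.5).astype(float)             # boundary parallel to an axis
unfriendly = lambda x: (x[:, 0] + x[:, 1] <= 1.0).astype(float)   # diagonal boundary
exact = 0.5                                                       # both integrals equal 1/2

errs = {"friendly": [], "unfriendly": []}
for r in range(reps):
    x = qmc.Sobol(d=2, scramble=True, seed=r).random_base2(m)
    errs["friendly"].append(friendly(x).mean() - exact)
    errs["unfriendly"].append(unfriendly(x).mean() - exact)

for name, e in errs.items():
    print(name, "RMSE:", np.sqrt(np.mean(np.square(e))))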
In this work we develop a new hierarchical multilevel approach for generating Gaussian random field realizations in an algorithmically scalable manner that is well suited for incorporation into multilevel Markov chain Monte Carlo (MCMC) algorithms. This approach builds on other partial differential equation (PDE) based approaches for generating Gaussian random field realizations; in particular, a single field realization may be formed by solving a reaction-diffusion PDE with a spatial white noise source function as the right-hand side. While these approaches have been explored to accelerate forward uncertainty quantification tasks, e.g., multilevel Monte Carlo, the previous constructions are not directly applicable to multilevel MCMC frameworks, which build fine-scale random fields in a hierarchical fashion from coarse-scale random fields. Our new hierarchical multilevel method relies on a hierarchical decomposition of the white noise source function in $L^2$, which allows us to form Gaussian random field realizations across multiple levels of discretization in a way that fits into multilevel MCMC algorithmic frameworks. After presenting our main theoretical results and numerical scaling results to showcase the utility of this new hierarchical PDE method for generating Gaussian random field realizations, the method is tested on a four-level MCMC algorithm to explore its feasibility.
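To make the single-level PDE idea concrete, here is a minimal 1D finite-difference sketch of our own: one field realization is obtained by solving $(\kappa^2 - \mathrm{d}^2/\mathrm{d}x^2)\,u = W$ with spatial white noise $W$. It is not the hierarchical multilevel sampler of the paper, and the discretization choices (grid size, boundary conditions, $\kappa$) are assumptions made purely for illustration.

```python
# Minimal 1D sketch of the SPDE route to a Gaussian random field realization:
# solve (kappa^2 - Laplacian) u = W with white noise W on a uniform grid,
# homogeneous Dirichlet boundary conditions. Illustration only.
import numpy as np

n, kappa = 1000, 10.0
h = 1.0 / (n + 1)
rng = np.random.default_rng(0)

# tridiagonal operator kappa^2*I - Laplacian (second-order finite differences)
main = (kappa**2 + 2.0 / h**2) * np.ones(n)
off  = (-1.0 / h**2) * np.ones(n - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# discretized white noise: cell averages have variance 1/h
w = rng.standard_normal(n) / np.sqrt(h)

u = np.linalg.solve(A, w)   # one field realization at the interior grid points
print(u[:5])
```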
We consider the problem of estimating the probability of a large loss from a financial portfolio, where the future loss is expressed as a conditional expectation. Since the conditional expectation is intractable in most cases, one may resort to nested simulation. To reduce the complexity of nested simulation, we present a method that combines multilevel Monte Carlo (MLMC) and quasi-Monte Carlo (QMC). In the outer simulation, we use Monte Carlo to generate financial scenarios. In the inner simulation, we use QMC to estimate the portfolio loss in each scenario. We prove that using QMC can accelerate the convergence rates in both the crude nested simulation and the multilevel nested simulation. Under certain conditions, the complexity of MLMC can be reduced to $O(\epsilon^{-2}(\log \epsilon)^2)$ by incorporating QMC. On the other hand, we find that MLMC encounters a catastrophic coupling problem due to the presence of indicator functions. To remedy this, we propose a smoothed MLMC method which uses logistic sigmoid functions to approximate the indicator functions. Numerical results show that the optimal complexity $O(\epsilon^{-2})$ is almost attained when QMC methods are used in both MLMC and smoothed MLMC, even in moderately high dimensions.
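The sketch below illustrates the two ingredients in their simplest form: outer Monte Carlo scenarios with inner RQMC estimates of the conditional expectation, and the logistic-sigmoid smoothing of the indicator. The model $X = S + Z$ with $S,Z\sim N(0,1)$ is our own toy example (so that $P(E[X\mid S]>c)=1-\Phi(c)$ is known), not a portfolio model from the paper, and it omits the multilevel coupling entirely.

```python
# Toy nested-simulation sketch: outer MC scenarios, inner RQMC conditional
# means, and a logistic-sigmoid approximation of the indicator 1{mean > c}.
import numpy as np
from scipy.stats import norm, qmc

c, delta = 1.0, 0.05            # loss threshold and smoothing width (arbitrary)
n_outer, m_inner = 1000, 7      # outer scenarios, 2^m_inner inner QMC points
rng = np.random.default_rng(0)

s = rng.standard_normal(n_outer)                    # outer scenarios
probs_ind, probs_smooth = [], []
for i, si in enumerate(s):
    v = qmc.Sobol(d=1, scramble=True, seed=i).random_base2(m_inner).ravel()
    inner_mean = np.mean(si + norm.ppf(v))          # RQMC estimate of E[X | S = si]
    probs_ind.append(float(inner_mean > c))         # plain indicator
    probs_smooth.append(1.0 / (1.0 + np.exp(-(inner_mean - c) / delta)))  # smoothed

print("indicator estimate:", np.mean(probs_ind))
print("smoothed estimate :", np.mean(probs_smooth))
print("exact probability :", 1 - norm.cdf(c))
```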
Practitioners wishing to experience the efficiency gains from using low discrepancy sequences need correct, well-written software. This article, based on our MCQMC 2020 tutorial, describes some of the better quasi-Monte Carlo (QMC) software available. We highlight the key software components required to approximate multivariate integrals or expectations of functions of vector random variables by QMC. We have combined these components in QMCPy, an open-source Python library, which we hope will draw the support of the QMC community. Here we introduce QMCPy.
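The components referred to above (a low-discrepancy generator, a transform to the distribution of the random vector, the integrand, and an error estimate from independent randomizations) can be sketched generically as follows. This is not QMCPy code or its API; it is a plain NumPy/SciPy sketch using a standard Keister-type test integrand of our own choosing.

```python
# Generic QMC workflow sketch (not the QMCPy API): Sobol' points, a transform
# to N(0, I/2), the Keister-type integrand, and an error bar from reps
# independent scramblings.
import numpy as np
from scipy.stats import norm, qmc

d, m, reps = 3, 12, 16
f = lambda z: np.pi ** (d / 2) * np.cos(np.linalg.norm(z, axis=1))  # integrand of a N(0, I/2) vector

estimates = []
for r in range(reps):                                       # independent randomizations
    u = qmc.Sobol(d=d, scramble=True, seed=r).random_base2(m)   # low-discrepancy points in [0,1)^d
    z = norm.ppf(u) / np.sqrt(2)                            # transform to N(0, I/2)
    estimates.append(f(z).mean())

print("estimate:", np.mean(estimates), "+/-", 3 * np.std(estimates) / np.sqrt(reps))
```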