The quasi-Monte Carlo (QMC) method is a useful numerical tool for pricing and hedging complex financial derivatives. These problems typically combine high dimensionality with discontinuities, and the two factors may significantly deteriorate the performance of the QMC method. This paper develops an integrated method that overcomes the challenges of high dimensionality and discontinuities concurrently. For this purpose, a smoothing method is proposed to remove the discontinuities for some typical functions arising in financial engineering. To make the smoothing method applicable to more general functions, a new path generation method is designed for simulating the paths of the underlying assets such that the resulting function has the required form. The new path generation method has the additional power of reducing the effective dimension of the target function. Our proposed method caters for a large variety of model specifications, including the Black-Scholes, exponential normal inverse Gaussian Lévy, and Heston models. Numerical experiments dealing with these models show that, in the QMC setting, the proposed smoothing method in combination with the new path generation method can lead to a dramatic variance reduction for pricing exotic options with discontinuous payoffs and for calculating option Greeks. The investigation of the effective dimension and related characteristics explains the significant enhancement achieved by the combined procedure.
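As a hedged illustration of the discontinuity issue described in this abstract (and not of the paper's smoothing or path generation method), the following Python sketch prices a digital call under Black-Scholes with plain Monte Carlo and with scrambled Sobol' points; all parameter values are hypothetical.

    # Minimal sketch: digital-call pricing under Black-Scholes, plain MC vs. scrambled
    # Sobol' QMC. The indicator payoff 1{S_T > K} is the type of discontinuity at issue.
    import numpy as np
    from scipy.stats import norm, qmc

    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0    # hypothetical parameters
    n = 2**14

    def price(u):
        z = norm.ppf(u)                                   # uniforms -> standard normals
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
        return np.exp(-r * T) * np.mean(ST > K)           # discounted digital payoff

    u_mc = np.random.rand(n)
    u_qmc = qmc.Sobol(d=1, scramble=True).random(n).ravel()
    print("MC: %.4f   QMC: %.4f" % (price(u_mc), price(u_qmc)))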
GPU computing has become popular in computational finance, and many financial institutions are moving their CPU-based applications to the GPU platform. Since most Monte Carlo algorithms are embarrassingly parallel, they benefit greatly from parallel implementations, and consequently Monte Carlo has become a focal point in GPU computing. GPU speed-up examples reported in the literature often involve Monte Carlo algorithms, and there are commercially available software tools that help migrate Monte Carlo financial pricing models to the GPU. We present a survey of Monte Carlo and randomized quasi-Monte Carlo methods, and discuss existing (quasi) Monte Carlo sequences in GPU libraries. We discuss specific features of the GPU architecture relevant for developing efficient (quasi) Monte Carlo methods. We introduce a recent randomized quasi-Monte Carlo method and compare it with some of the existing implementations on the GPU, when they are used in pricing caplets in the LIBOR market model and mortgage-backed securities.
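To fix ideas, a generic randomization ingredient used by many randomized QMC methods is the random shift; the NumPy sketch below (not tied to any particular GPU library or to the specific method surveyed above, and using a hypothetical generating vector) shifts a fixed rank-1 lattice point set and reports the mean and a standard-error estimate across shifts.

    # Random shifting of a fixed QMC point set: each shift gives an unbiased
    # estimate, and the spread across shifts gives a practical error indicator.
    import numpy as np

    def random_shift_estimates(points, f, n_shifts=16, seed=0):
        rng = np.random.default_rng(seed)
        d = points.shape[1]
        shifts = rng.random((n_shifts, d))
        return np.array([np.mean(f((points + s) % 1.0)) for s in shifts])

    n, d = 1024, 4
    z = np.array([1, 433, 229, 597])                     # hypothetical generating vector
    pts = (np.outer(np.arange(n), z) / n) % 1.0          # rank-1 lattice in [0,1)^4
    est = random_shift_estimates(pts, lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1))
    print(est.mean(), est.std(ddof=1) / np.sqrt(len(est)))   # true integral is 1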
Quasi-Monte Carlo methods are designed for integrands of bounded variation, and this excludes singular integrands. Several methods are known for integrands that become singular on the boundary of the unit cube $[0,1]^d$ or at isolated, possibly unknown, points within $[0,1]^d$. Here we consider functions on the square $[0,1]^2$ that may become singular as the point approaches the diagonal line $x_1=x_2$, and we study three quadrature methods. The first method splits the square into two triangles separated by a region around the line of singularity, and applies recently developed triangle QMC rules to the two triangular parts. For functions with a singularity no worse than $|x_1-x_2|^{-A}$ for $0<A<1$, that method yields an error of $O((\log(n)/n)^{(1-A)/2})$. We also consider methods that extend the integrand into a region containing the singularity and show that such a method will not improve upon using two triangles. Finally, we consider transforming the integrand to have a more QMC-friendly singularity along the boundary of the square. This leads to error rates of $O(n^{-1+\epsilon+A})$ when combined with corner-avoiding Halton points or with randomized QMC, but it requires some stronger assumptions on the original singular integrand.
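To make the setting concrete, the sketch below integrates a function with a diagonal singularity $|x_1-x_2|^{-A}$ after one simple measure-preserving change of variables that moves the singular set to the boundary of the square; this particular map is our illustration and need not coincide with the transformation analysed in the paper.

    # Diagonal singularity moved to the boundary by (y1, y2) -> (y1, (y1 + y2) mod 1):
    # the set x1 = x2 becomes y2 in {0, 1}.
    import numpy as np
    from scipy.stats import qmc

    A = 0.5
    f = lambda x1, x2: np.abs(x1 - x2) ** (-A)           # singular on the diagonal

    def f_transformed(y1, y2):
        return f(y1, (y1 + y2) % 1.0)                    # same integral, boundary singularity

    n = 2**15
    y = qmc.Halton(d=2, scramble=True).random(n)         # scrambled points avoid y2 = 0
    print(np.mean(f_transformed(y[:, 0], y[:, 1])))
    # Exact value: 2 / ((1 - A) * (2 - A)) = 8/3 for A = 1/2.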
Stochastic PDE eigenvalue problems often arise in the field of uncertainty quantification, where one seeks to quantify the uncertainty in an eigenvalue or its eigenfunction. In this paper we present an efficient multilevel quasi-Monte Carlo (MLQMC) algorithm for computing the expectation of the smallest eigenvalue of an elliptic eigenvalue problem with stochastic coefficients. Each sample evaluation requires the solution of a PDE eigenvalue problem, so tackling this problem in practice is notoriously computationally difficult. We speed up the approximation of this expectation in four ways: 1) we use a multilevel variance reduction scheme to spread the work over a hierarchy of FE meshes and truncation dimensions; 2) we use QMC methods to efficiently compute the expectations on each level; 3) we exploit the smoothness in parameter space and reuse the eigenvector from a nearby QMC point to reduce the number of iterations of the eigensolver; and 4) we utilise a two-grid discretisation scheme to obtain the eigenvalue on the fine mesh with a single linear solve. The full error analysis of a basic MLQMC algorithm is given in the companion paper [Gilbert and Scheichl, 2021], so in this paper we focus on how to further improve the efficiency and provide theoretical justification for enhancement strategies 3) and 4). Numerical results are presented that show the efficiency of our algorithm, and also show that the four strategies we employ are complementary.
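The multilevel structure referred to in 1) and 2) can be summarised by the telescoping estimator $E[\lambda] \approx E[\lambda_0] + \sum_{\ell=1}^{L} E[\lambda_\ell - \lambda_{\ell-1}]$; the Python skeleton below sketches this with randomly shifted QMC points per level, hiding the eigensolver, the seeding of strategy 3) and the two-grid solve of strategy 4) behind a user-supplied function eig(level, y).

    # Generic MLQMC telescoping skeleton (notation only; not the paper's implementation).
    import numpy as np
    from scipy.stats import qmc

    def mlqmc_mean(eig, n_points, dim, n_shifts=8, seed=0):
        """eig(level, y): eigenvalue approximation at a given level for parameter y."""
        rng = np.random.default_rng(seed)
        total = 0.0
        for ell, n in enumerate(n_points):                # n_points decreases with level
            pts = qmc.Sobol(d=dim, scramble=False).random(n)
            means = []
            for _ in range(n_shifts):
                y = (pts + rng.random(dim)) % 1.0         # randomly shifted QMC points
                diff = [eig(ell, yi) - (eig(ell - 1, yi) if ell > 0 else 0.0) for yi in y]
                means.append(np.mean(diff))
            total += np.mean(means)                       # estimates E[lambda_l - lambda_{l-1}]
        return total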
Multifidelity Monte Carlo methods rely on a hierarchy of possibly less accurate but statistically correlated simplified or reduced models in order to accelerate the estimation of statistics of high-fidelity models without compromising the accuracy of the estimates. This approach has recently gained widespread attention in uncertainty quantification, partly due to the availability of optimal strategies for estimating the expectation of scalar quantities of interest. In practice, the optimal strategy for the expectation is also used for the estimation of variance and sensitivity indices. However, a general strategy is still lacking for vector-valued problems, nonlinearly statistically dependent models, and estimators for which a closed-form expression of the error is unavailable. The focus of the present work is to generalize the standard multifidelity estimators to the above cases. The proposed generalized estimators lead to an optimization problem that can be solved analytically and whose coefficients can be estimated numerically with few runs of the high- and low-fidelity models. We analyze the performance of the proposed approach on a selected number of experiments, with a particular focus on cardiac electrophysiology, where a hierarchy of physics-based low-fidelity models is readily available.
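For reference, the standard two-fidelity control-variate estimator for a scalar expectation, which serves as the baseline being generalized here, can be sketched as follows; the two model functions are hypothetical stand-ins.

    # Standard two-fidelity control-variate estimator for E[f_hi(X)].
    import numpy as np

    def mf_mean(f_hi, f_lo, n_hi, n_lo, seed=0):
        rng = np.random.default_rng(seed)
        x_hi = rng.standard_normal(n_hi)                  # inputs where both models run
        x_lo = rng.standard_normal(n_lo)                  # cheap low-fidelity-only inputs
        yh, yl = f_hi(x_hi), f_lo(x_hi)
        alpha = np.cov(yh, yl)[0, 1] / np.var(yl, ddof=1)    # control-variate coefficient
        return yh.mean() + alpha * (f_lo(x_lo).mean() - yl.mean())

    # Hypothetical correlated high- and low-fidelity models:
    print(mf_mean(lambda x: np.sin(x) + 0.1 * x**2, np.sin, n_hi=100, n_lo=10_000))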
We introduce a new method for the numerical approximation of time-harmonic acoustic scattering problems stemming from material inhomogeneities. The method works for any frequency $\omega$, but is especially efficient for high-frequency problems. It is based on a time-domain approach and consists of three steps: \emph{i)} computation of a suitable incoming plane wavelet with compact support in the propagation direction; \emph{ii)} solving a scattering problem in the time domain for the incoming plane wavelet; \emph{iii)} reconstruction of the time-harmonic solution from the time-domain solution via a Fourier transform in time. An essential ingredient of the new method is a front-tracking mesh adaptation algorithm for solving the problem in \emph{ii)}. By exploiting the limited support of the wave front, this allows us to make the number of degrees of freedom required to reach a given accuracy significantly less dependent on the frequency $\omega$, as shown in the numerical experiments. We also present a new algorithm for computing the Fourier transform in \emph{iii)} that exploits the reduced number of degrees of freedom corresponding to the adapted meshes.
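The reconstruction step \emph{iii)} amounts to evaluating $\hat{u}(x) = \int u(x,t)\,e^{i\omega t}\,dt$ from stored time-domain snapshots; the sketch below approximates this with a composite trapezoidal rule (the sign convention, uniform snapshots and absence of windowing are assumptions of this sketch, not details taken from the paper, whose algorithm additionally exploits the adapted meshes).

    # Quadrature-in-time Fourier transform of stored time-domain snapshots.
    import numpy as np

    def time_harmonic_from_snapshots(snapshots, times, omega):
        """snapshots: (n_times, n_dofs) array of u(., t_k); returns the approximation
        of int u(., t) exp(i*omega*t) dt by the composite trapezoidal rule."""
        dt = np.diff(times)
        w = np.zeros_like(times)
        w[:-1] += 0.5 * dt
        w[1:] += 0.5 * dt                                 # trapezoidal weights
        phase = np.exp(1j * omega * times)                # assumed e^{+i omega t} convention
        return (w[:, None] * phase[:, None] * snapshots).sum(axis=0)

    # Synthetic single-dof check: u(t) = cos(omega*t) over 50 full periods gives ~ T/2.
    omega = 2 * np.pi
    t = np.linspace(0.0, 50.0, 20001)
    u = np.cos(omega * t)[:, None]
    print(time_harmonic_from_snapshots(u, t, omega) / (t[-1] - t[0]))   # ~ 0.5 + 0j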