
Quasi-Monte Carlo for discontinuous integrands with singularities along the boundary of the unit cube

Published by: Zhijian He
Publication date: 2017
Research field: (not specified)
Language: English
Author: Zhijian He





This paper studies randomized quasi-Monte Carlo (QMC) sampling for discontinuous integrands having singularities along the boundary of the unit cube $[0,1]^d$. Both discontinuities and singularities are extremely common in the pricing and hedging of financial derivatives and have a tremendous impact on the accuracy of QMC. It was previously known that the root mean square error of randomized QMC is only $o(n^{-1/2})$ for discontinuous functions with singularities. We find that under some mild conditions, randomized QMC yields an expected error of $O(n^{-1/2-1/(4d-2)+\epsilon})$ for arbitrarily small $\epsilon>0$. Moreover, one can get a better rate if the boundary of discontinuities is parallel to some coordinate axes. As a by-product, we find that the expected error rate attains $O(n^{-1+\epsilon})$ if the discontinuities are QMC-friendly, in the sense that all the discontinuity boundaries are parallel to coordinate axes. The results can be used to assess the QMC accuracy for some typical problems from financial engineering.
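The setting can be reproduced in miniature with a scrambled Sobol' point set. The sketch below is a toy experiment of our own, not the paper's: the test integrand $x_1^{-1/2}\,\mathbf{1}\{x_1+x_2>1\}$ (exact value $2/3$) has a jump along a non-axis-parallel line and a singularity on the boundary $x_1=0$, and the root mean square error of randomized QMC is compared with plain Monte Carlo at the same sample size.

```python
import numpy as np
from scipy.stats import qmc

# Toy integrand on [0,1]^2 (our own choice, not from the paper): a jump along
# the line x1 + x2 = 1 combined with the boundary singularity x1^(-1/2).
# Exact value: int_0^1 x1^(-1/2) * x1 dx1 = 2/3.
def f(x):
    active = x[:, 0] + x[:, 1] > 1.0
    out = np.zeros(len(x))
    out[active] = x[active, 0] ** (-0.5)
    return out

exact = 2.0 / 3.0
m, reps = 14, 20          # n = 2^14 points, 20 independent replications

# Randomized (scrambled Sobol') QMC: average the error over scramblings.
rqmc_err = [f(qmc.Sobol(d=2, scramble=True, seed=r).random_base2(m)).mean() - exact
            for r in range(reps)]

# Plain Monte Carlo baseline with the same budget.
rng = np.random.default_rng(0)
mc_err = [f(rng.random((2 ** m, 2))).mean() - exact for _ in range(reps)]

print(f"RMSE scrambled Sobol':  {np.sqrt(np.mean(np.square(rqmc_err))):.2e}")
print(f"RMSE plain Monte Carlo: {np.sqrt(np.mean(np.square(mc_err))):.2e}")
```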




Read also

Zhijian He (2017)
This paper studies the rate of convergence for conditional quasi-Monte Carlo (QMC), which is a counterpart of conditional Monte Carlo. We focus on discontinuous integrands defined on the whole of $\mathbb{R}^d$, which can be unbounded. Under suitable conditions, we show that conditional QMC not only has a smoothing effect (up to infinite differentiability), but also can bring orders of magnitude reduction in integration error compared to plain QMC. Particularly, for some typical problems in options pricing and Greeks estimation, conditional randomized QMC that uses $n$ samples yields a mean error of $O(n^{-1+\epsilon})$ for arbitrarily small $\epsilon>0$. As a by-product, we find that this rate also applies to randomized QMC integration with all terms of the ANOVA decomposition of the discontinuous integrand, except the one of highest order.
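A minimal illustration of the conditioning (pre-integration) idea, on a toy setup of our own rather than the option-pricing problems of the paper: for independent standard normals, the probability $P(X_1+X_2>K)$ can be estimated either by applying RQMC directly to the indicator, or by integrating out $X_1$ analytically, which replaces the jump with the smooth one-dimensional integrand $\Phi(X_2-K)$.

```python
import numpy as np
from scipy.stats import norm, qmc

# Toy conditioning example (our own, not the paper's setup):
# estimate P(X1 + X2 > K) for independent standard normals.
K = 1.0
exact = norm.sf(K / np.sqrt(2.0))
m, reps = 13, 20

def rmse(errs):
    return np.sqrt(np.mean(np.square(errs)))

plain_err, cond_err = [], []
for r in range(reps):
    # Plain RQMC on the discontinuous 2-d integrand 1{X1 + X2 > K}.
    u = qmc.Sobol(d=2, scramble=True, seed=r).random_base2(m)
    x = norm.ppf(u)
    plain_err.append(np.mean(x.sum(axis=1) > K) - exact)

    # Conditional RQMC: integrate out X1 analytically, leaving the smooth
    # 1-d integrand Phi(X2 - K).
    u1 = qmc.Sobol(d=1, scramble=True, seed=r).random_base2(m)
    x2 = norm.ppf(u1[:, 0])
    cond_err.append(np.mean(norm.cdf(x2 - K)) - exact)

print(f"RMSE plain RQMC:       {rmse(plain_err):.2e}")
print(f"RMSE conditional RQMC: {rmse(cond_err):.2e}")
```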
Kinjal Basu, Art B. Owen (2016)
Quasi-Monte Carlo methods are designed for integrands of bounded variation, and this excludes singular integrands. Several methods are known for integrands that become singular on the boundary of the unit cube $[0,1]^d$ or at isolated, possibly unknown, points within $[0,1]^d$. Here we consider functions on the square $[0,1]^2$ that may become singular as the point approaches the diagonal line $x_1=x_2$, and we study three quadrature methods. The first method splits the square into two triangles separated by a region around the line of singularity, and applies recently developed triangle QMC rules to the two triangular parts. For functions with a singularity no worse than $|x_1-x_2|^{-A}$ for $0<A<1$, that method yields an error of $O((\log(n)/n)^{(1-A)/2})$. We also consider methods extending the integrand into a region containing the singularity and show that this method will not improve upon using the two triangles. Finally, we consider transforming the integrand to have a more QMC-friendly singularity along the boundary of the square. This leads to error rates of $O(n^{-1+\epsilon+A})$ when combined with some corner-avoiding Halton points or with randomized QMC, but it requires some stronger assumptions on the original singular integrand.
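The "move the singularity to the boundary" idea can be sketched with an elementary change of variables (our own simple mapping, not the one analyzed in the paper): by symmetry the integral of $|x_1-x_2|^{-A}$ over the square is twice its integral over the triangle $\{x_2<x_1\}$, and the substitution $(x_1,x_2)=(s,st)$ with Jacobian $s$ moves the singularity to the boundary $t=1$.

```python
import numpy as np
from scipy.stats import qmc

# Integrate |x1 - x2|^(-A) over [0,1]^2; exact value is 2/((1-A)(2-A)).
A = 0.5
exact = 2.0 / ((1.0 - A) * (2.0 - A))
m, reps = 13, 20

def rmse(errs):
    return np.sqrt(np.mean(np.square(errs)))

direct_err, transf_err = [], []
for r in range(reps):
    u = qmc.Sobol(d=2, scramble=True, seed=r).random_base2(m)

    # Direct RQMC: points can fall arbitrarily close to the diagonal x1 = x2.
    direct = np.abs(u[:, 0] - u[:, 1]) ** (-A)
    direct_err.append(direct.mean() - exact)

    # Map the triangle {x2 < x1} to the unit square via (x1, x2) = (s, s*t),
    # Jacobian s; the singularity moves to the boundary t = 1. By symmetry,
    # the full integral is twice the triangle integral.
    s, t = u[:, 0], u[:, 1]
    transf = 2.0 * s ** (1.0 - A) * (1.0 - t) ** (-A)
    transf_err.append(transf.mean() - exact)

print(f"RMSE direct RQMC:      {rmse(direct_err):.2e}")
print(f"RMSE transformed RQMC: {rmse(transf_err):.2e}")
```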
Zhijian He, Xiaoqun Wang (2017)
Quantiles and expected shortfalls are usually used to measure risks of stochastic systems, which are often estimated by Monte Carlo methods. This paper focuses on the use of the quasi-Monte Carlo (QMC) method, whose convergence rate is asymptotically better than Monte Carlo for numerical integration. We first prove the convergence of QMC-based quantile estimates under very mild conditions, and then establish a deterministic error bound of $O(N^{-1/d})$ for the quantile estimates, where $d$ is the dimension of the QMC point sets used in the simulation and $N$ is the sample size. Under certain conditions, we show that the mean squared error (MSE) of the randomized QMC estimate for expected shortfall is $o(N^{-1})$. Moreover, under stronger conditions the MSE can be improved to $O(N^{-1-1/(2d-1)+\epsilon})$ for arbitrarily small $\epsilon>0$.
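A small sketch of RQMC-based quantile and expected shortfall estimation, for a toy loss with closed-form answers (our own example, not one of the paper's test cases): $L=Z_1+Z_2\sim N(0,2)$, so both the $\alpha$-quantile and the expected shortfall can be compared with their exact values.

```python
import numpy as np
from scipy.stats import norm, qmc

# Toy loss L = Z1 + Z2 ~ N(0, 2): the alpha-quantile and expected shortfall
# are known in closed form for a centered normal with std sigma.
alpha = 0.95
sigma = np.sqrt(2.0)
z = norm.ppf(alpha)
exact_q = sigma * z
exact_es = sigma * norm.pdf(z) / (1.0 - alpha)

# RQMC sample of the loss from a scrambled Sobol' point set.
u = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(16)
loss = norm.ppf(u).sum(axis=1)

# Empirical quantile (VaR) and expected shortfall estimates.
q_hat = np.quantile(loss, alpha)
es_hat = loss[loss > q_hat].mean()

print(f"quantile : estimate {q_hat:.4f}, exact {exact_q:.4f}")
print(f"ES       : estimate {es_hat:.4f}, exact {exact_es:.4f}")
```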
We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design. Our analysis reveals higher-order consistent, deterministic choices of training points in the input data space for deep and shallow Neural Networks with holomorphic activation functions such as tanh. These novel training points are proved to facilitate higher-order decay (in terms of the number of training samples) of the underlying generalization error, with consistency error bounds that are free from the curse of dimensionality in the input data space, provided that DNN weights in hidden layers satisfy certain summability conditions. We present numerical experiments for DtO maps from elliptic and parabolic PDEs with uncertain inputs that confirm the theoretical analysis.
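As a rough, hedged illustration of deterministic QMC training points (using ordinary scrambled Sobol' points and a generic scikit-learn tanh network as stand-ins for the higher-order constructions and the PDE-based DtO maps of the paper), one can compare the test error of a surrogate trained on QMC points against one trained on i.i.d. uniform points.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor

# Smooth toy "data-to-observable" map on [0,1]^4 (our own stand-in).
def dto_map(x):
    return np.exp(-np.sum(x, axis=1)) + np.sin(np.pi * x[:, 0])

d, n_train = 4, 2 ** 9
x_test = np.random.default_rng(1).random((4096, d))
y_test = dto_map(x_test)

for name, x_train in [
    ("Sobol'", qmc.Sobol(d=d, scramble=True, seed=0).random_base2(9)),
    ("uniform", np.random.default_rng(0).random((n_train, d))),
]:
    # Small tanh network trained on each set of training points.
    net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(x_train, dto_map(x_train))
    err = np.sqrt(np.mean((net.predict(x_test) - y_test) ** 2))
    print(f"{name:8s} training points: test RMSE {err:.3e}")
```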
We consider the problem of estimating the probability of a large loss from a financial portfolio, where the future loss is expressed as a conditional expectation. Since the conditional expectation is intractable in most cases, one may resort to nested simulation. To reduce the complexity of nested simulation, we present a method that combines multilevel Monte Carlo (MLMC) and quasi-Monte Carlo (QMC). In the outer simulation, we use Monte Carlo to generate financial scenarios. In the inner simulation, we use QMC to estimate the portfolio loss in each scenario. We prove that using QMC can accelerate the convergence rates in both the crude nested simulation and the multilevel nested simulation. Under certain conditions, the complexity of MLMC can be reduced to $O(\epsilon^{-2}(\log \epsilon)^2)$ by incorporating QMC. On the other hand, we find that MLMC encounters a catastrophic coupling problem due to the existence of indicator functions. To remedy this, we propose a smoothed MLMC method which uses logistic sigmoid functions to approximate indicator functions. Numerical results show that the optimal complexity $O(\epsilon^{-2})$ is almost attained when using QMC methods in both MLMC and smoothed MLMC, even in moderately high dimensions.
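Two of the ingredients named in the abstract, inner QMC sampling and logistic-sigmoid smoothing of the indicator, can be illustrated in a crude (single-level) nested simulation; the model below is our own toy example and the MLMC levels are omitted.

```python
import numpy as np
from scipy.stats import norm, qmc

# Toy nested simulation (our own model, not the paper's portfolio): the
# scenario is Y ~ N(0,1), the inner loss is X | Y ~ N(Y,1), and we want
# P(E[X | Y] > c) = P(Y > c), which is known exactly for comparison.
c = 1.0
exact = norm.sf(c)
n_outer, m_inner = 2000, 7           # 2^7 inner QMC points per scenario
delta = 0.05                         # width of the logistic smoothing

rng = np.random.default_rng(0)
y = rng.standard_normal(n_outer)     # outer Monte Carlo scenarios

est_ind, est_sig = 0.0, 0.0
for i, yi in enumerate(y):
    # Inner simulation with scrambled Sobol' points: estimate E[X | Y = yi].
    u = qmc.Sobol(d=1, scramble=True, seed=i).random_base2(m_inner)
    inner_mean = (yi + norm.ppf(u[:, 0])).mean()
    est_ind += (inner_mean > c)                                  # indicator
    est_sig += 1.0 / (1.0 + np.exp(-(inner_mean - c) / delta))   # sigmoid

print(f"indicator estimate: {est_ind / n_outer:.4f}")
print(f"smoothed estimate : {est_sig / n_outer:.4f}")
print(f"exact probability : {exact:.4f}")
```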