
Equivalence between Sobolev spaces of first-order dominating mixed smoothness and unanchored ANOVA spaces on $\mathbb{R}^d$

Published by Alexander Gilbert
Publication date: 2021
Research field: Informatics Engineering
Research language: English





We prove that a variant of the classical Sobolev space of first-order dominating mixed smoothness is equivalent (under a certain condition) to the unanchored ANOVA space on $\mathbb{R}^d$, for $d \geq 1$. Both spaces are Hilbert spaces involving weight functions, which determine the behaviour as different variables tend to $\pm\infty$, and weight parameters, which represent the influence of different subsets of variables. The unanchored ANOVA space on $\mathbb{R}^d$ was initially introduced by Nichols & Kuo in 2014 to analyse the error of quasi-Monte Carlo (QMC) approximations for integrals on unbounded domains; whereas the classical Sobolev space of dominating mixed smoothness was used as the setting in a series of papers by Griebel, Kuo & Sloan on the smoothing effect of integration, in an effort to develop a rigorous theory on why QMC methods work so well for certain non-smooth integrands with kinks or jumps coming from option pricing problems. In this same setting, Griewank, Kuo, Leovey & Sloan in 2018 subsequently extended these ideas by developing a practical smoothing by preintegration technique to approximate integrals of such functions with kinks or jumps. We first prove the equivalence in one dimension (itself a non-trivial task), before following a similar, but more complicated, strategy to prove the equivalence for general dimensions. As a consequence of this equivalence, we analyse applying QMC combined with a preintegration step to approximate the fair price of an Asian option, and prove that the error of such an approximation using $N$ points converges at a rate close to $1/N$.
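To make the final application more concrete, the following is a minimal, self-contained Python sketch (not taken from the paper, and not its exact construction) of what "QMC combined with a preintegration step" can look like for an arithmetic-average Asian call under the Black-Scholes model: the leading principal component of the Brownian path is integrated out by one-dimensional Gauss-Hermite quadrature, which removes the kink from the payoff, and a scrambled Sobol' rule handles the remaining variables. All parameter values, the PCA path construction and the helper names are illustrative assumptions.

```python
# Illustrative sketch only: QMC with a preintegration step for an Asian call.
# Parameter values, the PCA construction and all names are assumptions, not the paper's.
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T, d = 100.0, 100.0, 0.05, 0.2, 1.0, 16   # model/payoff parameters
times = T * np.arange(1, d + 1) / d                         # monitoring times

# PCA factorisation of the Brownian-motion covariance matrix cov[i, j] = min(t_i, t_j).
cov = np.minimum.outer(times, times)
eigval, eigvec = np.linalg.eigh(cov)
A = eigvec[:, ::-1] * np.sqrt(eigval[::-1])   # columns ordered by decreasing variance

def payoff(y1, y_rest):
    """Discounted Asian-call payoff as a function of the leading PCA variable y1,
    with the remaining PCA variables y_rest (shape (n, d-1)) held fixed."""
    y = np.hstack([np.full((y_rest.shape[0], 1), y1), y_rest])
    W = y @ A.T                                              # Brownian path values
    S = S0 * np.exp((r - 0.5 * sigma**2) * times + sigma * W)
    return np.exp(-r * T) * np.maximum(S.mean(axis=1) - K, 0.0)

# Preintegration: integrate out y1 ~ N(0, 1) by Gauss-Hermite quadrature,
# which smooths the kink coming from max(., 0).
gh_x, gh_w = np.polynomial.hermite.hermgauss(64)
def preintegrated(y_rest):
    return sum(w * payoff(np.sqrt(2.0) * x, y_rest) for x, w in zip(gh_x, gh_w)) / np.sqrt(np.pi)

# QMC over the remaining d-1 variables: scrambled Sobol' points mapped to normals.
N = 2**12
u = qmc.Sobol(d=d - 1, scramble=True, seed=0).random(N)
price = preintegrated(norm.ppf(u)).mean()
print(f"QMC + preintegration estimate of the option price: {price:.4f}")
```

The rate close to $1/N$ stated in the abstract is proved for suitably constructed QMC rules in the paper's setting; the scrambled Sobol' rule above is used here only as a convenient stand-in.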


Read also

We study minimax density estimation on the product space $\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$. We consider the $L^p$-risk for probability density functions defined over regularity spaces that allow for different levels of smoothness in each of the variables. Precisely, we study probabilities on Sobolev spaces with dominating mixed smoothness. We provide the rate of convergence that is optimal even for the classical Sobolev spaces.
We continue our investigations on pointwise multipliers for Besov spaces of dominating mixed smoothness. This time we study the algebra property of the classes $S^r_{p,q}B(\mathbb{R}^d)$ with respect to pointwise multiplication. In addition, if $p \leq q$, we are able to describe the space of all pointwise multipliers for $S^r_{p,q}B(\mathbb{R}^d)$.
Let $S \subset \mathbb{R}^{n}$ be a closed set such that for some $d \in [0,n]$ and $\varepsilon > 0$ the $d$-Hausdorff content satisfies $\mathcal{H}^{d}_{\infty}(S \cap Q(x,r)) \geq \varepsilon r^{d}$ for all cubes $Q(x,r)$ centered at $x \in S$ with side length $2r \in (0,2]$. For every $p \in (1,\infty)$, denote by $W_{p}^{1}(\mathbb{R}^{n})$ the classical Sobolev space on $\mathbb{R}^{n}$. We give an intrinsic characterization of the restriction $W_{p}^{1}(\mathbb{R}^{n})|_{S}$ of the space $W_{p}^{1}(\mathbb{R}^{n})$ to the set $S$, provided that $p > \max\{1, n-d\}$. Furthermore, we prove the existence of a bounded linear operator $\operatorname{Ext}: W_{p}^{1}(\mathbb{R}^{n})|_{S} \to W_{p}^{1}(\mathbb{R}^{n})$ such that $\operatorname{Ext}$ is a right inverse of the usual trace operator. In particular, for $p > n-1$ we characterize the trace space of the Sobolev space $W_{p}^{1}(\mathbb{R}^{n})$ to the closure $\overline{\Omega}$ of an arbitrary open path-connected set $\Omega$. Our results extend those available for $p \in (1,n]$, which required much more stringent restrictions on $S$.
Dirk Nuyens, Yuya Suzuki (2021)
We show that to approximate the integral $\int_{\mathbb{R}^d} f(\boldsymbol{x}) \,\mathrm{d}\boldsymbol{x}$ one can simply use scaled lattice rules from the unit cube $[0,1]^d$ to properly sized boxes on $\mathbb{R}^d$, achieving higher-order convergence that matches the smoothness of the integrand function $f$ in a certain Sobolev space of dominating mixed smoothness. Our method only assumes that we can evaluate the integrand function $f$ and does not assume a particular density nor the ability to sample from it. In particular, for the analysis we show that the method of adding Bernoulli polynomials to a function to make it ``periodic'' on a box without changing its integral value over the box is equivalent to an orthogonal projection from a well-chosen Sobolev space of dominating mixed smoothness to an associated periodic Sobolev space of the same dominating mixed smoothness, which we call a Korobov space. We note that the Bernoulli polynomial method is often not used because of its computational complexity, and here we completely avoid applying it. Instead, we use it as a theoretical tool in the error analysis of applying a scaled lattice rule to increasing boxes in order to approximate integrals over the $d$-dimensional Euclidean space. Such a method would not work on the unit cube, since there the error committed due to the non-periodicity of the integrand would be constant, but for integration on the Euclidean space we can exploit the decay of the integrand towards zero as the boxes grow. Hence we can bound the truncation error as well as the projection error and show higher-order convergence in applying scaled lattice rules for integration on Euclidean space. We illustrate our theoretical analysis by numerical experiments which confirm our findings.
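As a rough illustration of the scaled-lattice-rule idea described above (a sketch under assumptions, not the authors' construction or recommended parameters), the following Python snippet maps a two-dimensional Fibonacci lattice rule from $[0,1]^2$ onto the box $[-a,a]^2$ and multiplies by the box volume; the Gaussian test integrand, box size and generating vector are all illustrative choices.

```python
# Illustrative sketch only: a rank-1 lattice rule scaled from the unit cube to a box.
# Generating vector, box size and test integrand are assumptions, not from the paper.
import numpy as np

def scaled_lattice_rule(f, z, n, a, shift=None):
    """Approximate int_{R^d} f(x) dx by an n-point rank-1 lattice rule with
    generating vector z, shifted by `shift` and scaled to the box [-a, a]^d."""
    d = len(z)
    k = np.arange(n).reshape(-1, 1)
    shift = np.zeros(d) if shift is None else np.asarray(shift)
    points = np.mod(k * np.asarray(z) / n + shift, 1.0)   # lattice points in [0,1]^d
    x = -a + 2.0 * a * points                             # affine map onto [-a, a]^d
    return (2.0 * a) ** d * np.mean(f(x))                 # box volume times average of f

# Example: Gaussian integrand with known integral (2*pi)^(d/2) over R^d.
rng = np.random.default_rng(0)
d, n, a = 2, 987, 6.0
z = (1, 610)                                # Fibonacci generating vector for n = 987
f = lambda x: np.exp(-0.5 * np.sum(x**2, axis=1))
estimate = scaled_lattice_rule(f, z, n, a, shift=rng.random(d))
print(estimate, (2.0 * np.pi) ** (d / 2))   # the two values should be close
```

Because this test integrand has decayed to (numerically) zero well inside the box, the non-periodicity error mentioned in the abstract is negligible here, which is the mechanism that lets the scaled rule work on $\mathbb{R}^d$ even though it would not on a fixed unit cube.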
Sean Hon, Haizhao Yang (2021)
We establish in this work approximation results of deep neural networks for smooth functions measured in Sobolev norms, motivated by recent development of numerical solvers for partial differential equations using deep neural networks. The error bounds are explicitly characterized in terms of both the width and depth of the networks simultaneously. Namely, for $f \in C^s([0,1]^d)$, we show that deep ReLU networks of width $\mathcal{O}(N\log N)$ and of depth $\mathcal{O}(L\log L)$ can achieve a non-asymptotic approximation rate of $\mathcal{O}(N^{-2(s-1)/d}L^{-2(s-1)/d})$ with respect to the $\mathcal{W}^{1,p}([0,1]^d)$ norm for $p \in [1,\infty)$. If either the ReLU function or its square is applied as the activation function to construct deep neural networks of width $\mathcal{O}(N\log N)$ and of depth $\mathcal{O}(L\log L)$ to approximate $f \in C^s([0,1]^d)$, the non-asymptotic approximation rate is $\mathcal{O}(N^{-2(s-n)/d}L^{-2(s-n)/d})$ with respect to the $\mathcal{W}^{n,p}([0,1]^d)$ norm for $p \in [1,\infty)$.