
On Expansions and Nodes for Sparse Grid Collocation of Lognormal Elliptic PDEs

Published by Björn Sprungk
Publication date: 2019
Research field: Informatics engineering
Paper language: English





This work is a follow-up to our previous contribution (Convergence of sparse collocation for functions of countably many Gaussian random variables (with application to elliptic PDEs), SIAM J. Numer. Anal., 2018), and contains further insights on some aspects of the solution of elliptic PDEs with lognormal diffusion coefficients using sparse grids. Specifically, we first focus on the choice of univariate interpolation rules, advocating the use of Gaussian Leja points as introduced by Narayan and Jakeman (Adaptive Leja sparse grid constructions for stochastic collocation and high-dimensional approximation, SIAM J. Sci. Comput., 2014), and then discuss the possible computational advantages of replacing the standard Karhunen-Loève expansion of the diffusion coefficient with the Lévy-Ciesielski expansion, motivated by theoretical work of Bachmayr, Cohen, DeVore, and Migliorati (Sparse polynomial approximation of parametric elliptic PDEs. Part II: lognormal coefficients, ESAIM: M2AN, 2016). Our numerical results indicate that, for the problem under consideration, Gaussian Leja collocation points outperform Gauss-Hermite and Genz-Keister nodes for the sparse grid approximation, and that the Karhunen-Loève expansion of the log diffusion coefficient is more appropriate than its Lévy-Ciesielski expansion for the purpose of sparse grid collocation.
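As a point of orientation for the node family advocated above, the following is a minimal sketch (our illustration, not the authors' code) of the greedy construction of weighted Leja points in the Gaussian case. It assumes the common convention of Narayan and Jakeman in which the square root of the Gaussian weight, exp(-x^2/4), multiplies the product of distances to previous points; the candidate-grid radius, resolution, and starting point 0 are arbitrary choices.

```python
# Minimal sketch of weighted (Gaussian) Leja points via greedy grid search.
# Assumed weight: sqrt(exp(-x^2/2)) = exp(-x^2/4); candidate grid on
# [-radius, radius]; starting point 0. Not the authors' implementation.
import numpy as np

def gaussian_leja_points(n_points, radius=10.0, grid_size=100_001):
    """Greedily maximize exp(-x^2/4) * prod_k |x - x_k| over a fine grid."""
    grid = np.linspace(-radius, radius, grid_size)
    log_weight = -grid**2 / 4.0
    points = [0.0]
    for _ in range(n_points - 1):
        # Work with logarithms to avoid under-/overflow in the product.
        objective = log_weight + sum(
            np.log(np.abs(grid - xk) + 1e-300) for xk in points)
        points.append(grid[np.argmax(objective)])
    return np.array(points)

print(gaussian_leja_points(5))
```

Because the construction is sequential, the resulting point sets are nested by design; this nestedness is what makes Leja-type rules attractive building blocks for sparse grids compared with non-nested Gauss-Hermite nodes.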




Read also

This paper studies numerical methods for the approximation of elliptic PDEs with lognormal coefficients of the form $-\mathrm{div}(a\nabla u)=f$, where $a=\exp(b)$ and $b$ is a Gaussian random field. The approximant of the solution $u$ is an $n$-term polynomial expansion in the scalar Gaussian random variables that parametrize $b$. We present a general convergence analysis of weighted least-squares approximants for smooth and arbitrarily rough random fields, using a suitable random design, for which we prove optimality in the following sense: their convergence rate matches exactly or closely the rate that has been established in [BCDM] for best $n$-term approximation by Hermite polynomials, under the same minimal assumptions on the Gaussian random field. This is in contrast with the current state-of-the-art results for the stochastic Galerkin method, which suffers from a lack of coercivity due to the lognormal nature of the diffusion field. Numerical tests with $b$ as the Brownian bridge confirm our theoretical findings.
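To fix ideas, here is a minimal sketch of the basic building block, a least-squares fit in a Hermite polynomial basis for a function of one Gaussian variable. The paper's optimal weighted random design is not reproduced; this toy simply oversamples from the standard Gaussian, and the map u and all parameter values are placeholders.

```python
# Toy least-squares fit in a (probabilists') Hermite basis; the weighted
# random design analyzed in the paper is NOT reproduced here.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
degree, n_samples = 6, 500                 # heavy oversampling for stability
y = rng.standard_normal(n_samples)         # Gaussian parameter samples
u = np.exp(0.3 * y)                        # placeholder "solution" map y -> u(y)
V = hermevander(y, degree)                 # design matrix of He_0, ..., He_6
coef, *_ = np.linalg.lstsq(V, u, rcond=None)
print(coef[:4])                            # leading Hermite coefficients
```

For this particular u, the generating function of the Hermite polynomials gives the exact coefficients $e^{0.045}\, 0.3^n / n!$, which the fitted values should approximate.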
Relying on the classical connection between Backward Stochastic Differential Equations (BSDEs) and non-linear parabolic partial differential equations (PDEs), we propose a new probabilistic learning scheme for solving high-dimensional semi-linear parabolic PDEs. This scheme is inspired by the machine learning approach developed using deep neural networks in Han et al. [32]. Our algorithm is based on a Picard iteration scheme in which a sequence of linear-quadratic optimisation problems is solved by means of a stochastic gradient descent (SGD) algorithm. In the framework of a linear specification of the approximation space, we manage to prove a convergence result for our scheme, under a smallness condition. In practice, in order to be able to treat high-dimensional examples, we employ sparse grid approximation spaces. In the case of periodic coefficients and using pre-wavelet basis functions, we obtain an upper bound on the global complexity of our method. It shows in particular that the curse of dimensionality is tamed, in the sense that in order to achieve a root mean squared error of order $\epsilon$, for a prescribed precision $\epsilon$, the complexity of the Picard algorithm grows polynomially in $\epsilon^{-1}$ up to some logarithmic factor $|\log(\epsilon)|$ which grows linearly with respect to the PDE dimension. Various numerical results are presented to validate the performance of our method and to compare it with some recent machine learning schemes proposed in Han et al. [20] and Huré et al. [37].
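The Picard structure can be illustrated on a toy scalar BSDE. The sketch below is our illustration under strong simplifications: the driver is independent of Z, and the conditional expectation in each Picard step is approximated by plain polynomial regression on the Brownian state, rather than by SGD over sparse grid approximation spaces as in the paper.

```python
# Toy Picard iteration for a scalar BSDE
#   Y_t = g(W_T) + int_t^T f(Y_s) ds - int_t^T Z_s dW_s,
# with a Z-independent driver. Conditional expectations are approximated by
# polynomial regression on W_t; g, f, and all sizes are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths, deg = 1.0, 20, 10_000, 4
dt = T / n_steps
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

g = lambda w: np.cos(w)        # terminal condition
f = lambda y: -0.5 * y         # driver, independent of Z for simplicity

Y = np.tile(g(W[:, -1])[:, None], (1, n_steps + 1))   # Picard initial guess
for _ in range(5):                                    # Picard iterations
    Y_new = np.empty_like(Y)
    Y_new[:, -1] = g(W[:, -1])
    running = np.zeros(n_paths)       # accumulates int_t^T f(Y_s) ds per path
    for k in range(n_steps - 1, -1, -1):
        running += f(Y[:, k + 1]) * dt
        target = g(W[:, -1]) + running
        if k == 0:                    # W_0 = 0, so E[. | W_0] is a plain mean
            Y_new[:, 0] = target.mean()
        else:                         # regression approximates E[. | W_tk]
            coef = np.polyfit(W[:, k], target, deg)
            Y_new[:, k] = np.polyval(coef, W[:, k])
    Y = Y_new
print("Y_0 estimate:", Y[0, 0])
```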
We give a convergence proof for the approximation by sparse collocation of Hilbert-space-valued functions depending on countably many Gaussian random variables. Such functions appear as solutions of elliptic PDEs with lognormal diffusion coefficients. We outline a general $L^2$-convergence theory based on previous work by Bachmayr et al. (2016) and Chen (2016) and establish an algebraic convergence rate for sufficiently smooth functions assuming a mild growth bound for the univariate hierarchical surpluses of the interpolation scheme applied to Hermite polynomials. We verify specifically for Gauss-Hermite nodes that this assumption holds and also show algebraic convergence w.r.t. the resulting number of sparse grid points for this case. Numerical experiments illustrate the dimension-independent convergence rate.
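For concreteness, a minimal combination-technique implementation of Smolyak quadrature with Gauss-Hermite nodes is sketched below (our illustration; the paper studies sparse interpolation, of which this quadrature is the simplest by-product). The assumed level-to-nodes map, level i using i nodes, is an arbitrary choice.

```python
# Minimal Smolyak (combination technique) quadrature with Gauss-Hermite nodes,
# approximating E[f(Y)] for Y ~ N(0, I_d). Assumed rule: level i has i nodes.
import itertools
import numpy as np
from math import comb
from numpy.polynomial.hermite_e import hermegauss

def gh_rule(level):
    """Gauss-Hermite rule with `level` nodes, normalized to the N(0,1) density."""
    x, w = hermegauss(level)
    return x, w / w.sum()               # raw weights sum to sqrt(2*pi)

def smolyak_expectation(f, d, level):
    total = 0.0
    for idx in itertools.product(range(1, level + d + 1), repeat=d):
        k = level + d - sum(idx)
        if 0 <= k <= d - 1:             # admissible indices of the combination
            rules = [gh_rule(i) for i in idx]
            nodes = itertools.product(*(r[0] for r in rules))
            weights = itertools.product(*(r[1] for r in rules))
            q = sum(np.prod(w) * f(np.array(x)) for x, w in zip(nodes, weights))
            total += (-1) ** k * comb(d - 1, k) * q
    return total

f = lambda y: np.exp(0.2 * y.sum())                 # toy integrand
print(smolyak_expectation(f, d=2, level=3))         # exact value: exp(0.04)
```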
We examine sparse grid quadrature on weighted tensor products (WTP) of reproducing kernel Hilbert spaces on products of the unit sphere, in the case of worst-case quadrature error for rules with arbitrary quadrature weights. We describe a dimension-adaptive quadrature algorithm based on an algorithm of Hegland (2003), and also formulate a version of Wasilkowski and Woźniakowski's WTP algorithm (1999), here called the WW algorithm. We prove that the dimension-adaptive algorithm is optimal in the sense of Dantzig (1957) and therefore no greater in cost than the WW algorithm. Both algorithms therefore have the optimal asymptotic rate of convergence given by Theorem 3 of Wasilkowski and Woźniakowski (1999). A numerical example shows that, even though the asymptotic convergence rate is optimal, if the dimension weights decay slowly enough and the dimensionality of the problem is large enough, the initial convergence of the dimension-adaptive algorithm can be slow.
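The greedy loop common to such dimension-adaptive algorithms can be sketched as follows. This is our illustration in the spirit of Gerstner and Griebel (2003) and Hegland (2003); the 1-D building block is a placeholder nested trapezoidal rule on [0,1], not the spherical WTP setting of the paper.

```python
# Sketch of a greedy dimension-adaptive sparse quadrature. Placeholder 1-D
# rule and integrand; not the spherical WTP setting of the paper.
import itertools
import numpy as np

def rule_1d(level):
    """Nested trapezoidal rule on [0, 1] with 2**level + 1 points."""
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[[0, -1]] *= 0.5
    return x, w

def surplus(f, index):
    """Hierarchical surplus: tensor difference rule at a multi-index."""
    total = 0.0
    for drop in itertools.product([0, 1], repeat=len(index)):
        j = tuple(i - s for i, s in zip(index, drop))
        if min(j) < 0:                        # Q_{-1} := 0
            continue
        rules = [rule_1d(i) for i in j]
        q = sum(np.prod(w) * f(np.array(x))
                for x, w in zip(itertools.product(*(r[0] for r in rules)),
                                itertools.product(*(r[1] for r in rules))))
        total += (-1) ** sum(drop) * q
    return total

def adaptive_quadrature(f, d, n_indices=25):
    value, accepted = 0.0, set()
    active = {(0,) * d: None}                 # index -> cached surplus
    for _ in range(n_indices):
        for idx in active:                    # fill in missing surpluses
            if active[idx] is None:
                active[idx] = surplus(f, idx)
        best = max(active, key=lambda i: abs(active[i]))   # greedy choice
        value += active.pop(best)
        accepted.add(best)
        for k in range(d):                    # admissible forward neighbours
            nb = tuple(b + (j == k) for j, b in enumerate(best))
            admissible = all(
                tuple(n - (j == m) for j, n in enumerate(nb)) in accepted
                for m in range(d) if nb[m] > 0)
            if admissible and nb not in active:
                active[nb] = None
        if not active:
            break
    return value

f = lambda x: np.exp(-np.sum(x ** 2))         # toy integrand on [0, 1]^3
print(adaptive_quadrature(f, d=3))
```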
Convergence of an adaptive collocation method for the stationary parametric diffusion equation with a finite-dimensional affine coefficient is shown. The adaptive algorithm relies on a recently introduced residual-based reliable a posteriori error estimator. For the convergence proof, a strategy recently used for a stochastic Galerkin method with a hierarchical error estimator is transferred to the collocation setting. Extensions to other variants of adaptive collocation methods (including the classical one proposed in the paper Dimension-adaptive tensor-product quadrature, Computing (2003), by T. Gerstner and M. Griebel) are explored.