
A learning scheme by sparse grids and Picard approximations for semilinear parabolic PDEs

Submitted by Junchao Chen
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Relying on the classical connection between Backward Stochastic Differential Equations (BSDEs) and non-linear parabolic partial differential equations (PDEs), we propose a new probabilistic learning scheme for solving high-dimensional semi-linear parabolic PDEs. This scheme is inspired by the approach coming from machine learning and developed using deep neural networks in Han et al. [32]. Our algorithm is based on a Picard iteration scheme in which a sequence of linear-quadratic optimisation problems is solved by means of a stochastic gradient descent (SGD) algorithm. In the framework of a linear specification of the approximation space, we manage to prove a convergence result for our scheme, under some smallness condition. In practice, in order to be able to treat high-dimensional examples, we employ sparse grid approximation spaces. In the case of periodic coefficients and using pre-wavelet basis functions, we obtain an upper bound on the global complexity of our method. It shows in particular that the curse of dimensionality is tamed in the sense that, in order to achieve a root mean squared error of order $\epsilon$ for a prescribed precision $\epsilon$, the complexity of the Picard algorithm grows polynomially in $\epsilon^{-1}$ up to some logarithmic factor $|\log(\epsilon)|$ which grows linearly with respect to the PDE dimension. Various numerical results are presented to validate the performance of our method and to compare it with some recent machine learning schemes proposed in Han et al. [20] and Hure et al. [37].
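To make the algorithmic structure concrete, the following is a minimal, self-contained sketch (not the authors' code) of the Picard/SGD loop for a one-dimensional toy BSDE associated with the semilinear heat equation $\partial_t u + \tfrac{1}{2}\partial_{xx} u + f(u) = 0$, $u(T,\cdot)=g$. A low-degree monomial basis stands in for the sparse-grid pre-wavelet space, the parameterisation (a scalar for $Y_0$ plus a regression of $Z_{t_k}$ on the basis) is an assumption of this sketch, and the driver, batch size, step counts and learning rate are illustrative choices only. At each Picard iteration the driver is frozen at the previous iterate, so the inner problem is linear-quadratic in the parameters and is minimised by plain SGD.

import numpy as np

rng = np.random.default_rng(0)
T, N, M = 1.0, 20, 4096                      # horizon, time steps, simulated paths per SGD batch
dt = T / N
g = lambda x: np.cos(x)                      # terminal condition u(T, x) = g(x)
f = lambda y, z: -0.5 * y                    # illustrative driver; exact Y_0 = u(0, 0) is exp(-1) ~ 0.368

def basis(x):                                # low-degree monomials standing in for pre-wavelets
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=-1)

y0 = np.zeros(1)                             # parameter for Y_0 (the forward process starts at X_0 = 0)
zc = np.zeros((N, 4))                        # Z_{t_k}(x) ~ basis(x) @ zc[k]

for picard in range(5):                      # outer Picard iterations: freeze the driver at the previous iterate
    y0_prev, zc_prev = y0.copy(), zc.copy()
    for sgd_step in range(300):              # inner SGD on the resulting linear-quadratic problem
        dW = rng.normal(0.0, np.sqrt(dt), size=(N, M))
        X = np.vstack([np.zeros((1, M)), np.cumsum(dW, axis=0)])    # Brownian paths
        y_prev = np.full(M, y0_prev[0])      # Y path of the previous Picard iterate
        YT = np.full(M, y0[0])               # Y_T under the current parameters
        grad_zc = np.zeros((N, 4, M))
        for k in range(N):
            phi = basis(X[k])                                        # shape (M, 4)
            z_prev = phi @ zc_prev[k]
            YT += phi @ zc[k] * dW[k] - f(y_prev, z_prev) * dt       # Euler step of the linearised BSDE
            grad_zc[k] = (phi * dW[k][:, None]).T                    # dY_T / d zc[k]
            y_prev += z_prev * dW[k] - f(y_prev, z_prev) * dt
        r = YT - g(X[N])                     # residual of the terminal condition
        lr = 0.05
        y0 -= lr * 2.0 * np.mean(r)          # SGD step on the quadratic loss E[|Y_T - g(X_T)|^2]
        zc -= lr * 2.0 * np.mean(r[None, None, :] * grad_zc, axis=2)
    print(f"Picard iteration {picard}: estimated Y_0 = u(0, 0) ~ {y0[0]:.4f}")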


Read also

In this paper we introduce a new approach to compute rigorously solutions of Cauchy problems for a class of semi-linear parabolic partial differential equations. Expanding solutions with Chebyshev series in time and Fourier series in space, we introduce a zero-finding problem $F(a)=0$ on a Banach algebra $X$ of Fourier-Chebyshev sequences, whose solution solves the Cauchy problem. The challenge lies in the fact that the linear part $\mathcal{L} := DF(0)$ has an infinite block diagonal structure with blocks becoming less and less diagonally dominant at infinity. We introduce analytic estimates to show that $\mathcal{L}$ is a boundedly invertible linear operator on $X$, and we obtain explicit, rigorous and computable bounds for the operator norm $\|\mathcal{L}^{-1}\|_{B(X)}$. These bounds are then used to verify the hypotheses of a Newton-Kantorovich type argument, which shows that the (Newton-like) operator $\mathcal{T}(a) := a - \mathcal{L}^{-1} F(a)$ is a contraction on a small ball centered at a numerical approximation of the Cauchy problem. The contraction mapping theorem yields a fixed point which corresponds to a classical (strong) solution of the Cauchy problem. The approach is simple to implement, numerically stable and applicable to a class of PDE models, which include for instance Fisher's equation, the Kuramoto-Sivashinsky equation, the Swift-Hohenberg equation and the phase-field crystal (PFC) equation. We apply our approach to each of these models and report experimental results which motivate further research on the method.
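For orientation, the contraction step takes the following standard Newton-Kantorovich form (the notation follows the abstract; the specific bounds used in the cited work may differ): given a numerical approximation $\bar{a}$ and computable constants $Y$ and $Z(r)$ satisfying $\|\mathcal{T}(\bar{a}) - \bar{a}\|_X \le Y$ and $\sup_{b \in \overline{B_r(\bar{a})}} \|D\mathcal{T}(b)\|_{B(X)} \le Z(r)$, the conditions $Y + r\,Z(r) \le r$ and $Z(r) < 1$ imply that $\mathcal{T}$ maps the closed ball $\overline{B_r(\bar{a})}$ into itself and is a contraction on it, so the Banach fixed point theorem yields a unique $a^{*} \in \overline{B_r(\bar{a})}$ with $F(a^{*}) = 0$.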
This work is a follow-up to our previous contribution (Convergence of sparse collocation for functions of countably many Gaussian random variables (with application to elliptic PDEs), SIAM J. Numer. Anal., 2018), and contains further insights on some aspects of the solution of elliptic PDEs with lognormal diffusion coefficients using sparse grids. Specifically, we first focus on the choice of univariate interpolation rules, advocating the use of Gaussian Leja points as introduced by Narayan and Jakeman (Adaptive Leja sparse grid constructions for stochastic collocation and high-dimensional approximation, SIAM J. Sci. Comput., 2014), and then discuss the possible computational advantages of replacing the standard Karhunen-Loève expansion of the diffusion coefficient with the Lévy-Ciesielski expansion, motivated by theoretical work of Bachmayr, Cohen, DeVore, and Migliorati (Sparse polynomial approximation of parametric elliptic PDEs. Part II: lognormal coefficients, ESAIM: M2AN, 2016). Our numerical results indicate that, for the problem under consideration, Gaussian Leja collocation points outperform Gauss-Hermite and Genz-Keister nodes for the sparse grid approximation, and that the Karhunen-Loève expansion of the log diffusion coefficient is more appropriate than its Lévy-Ciesielski expansion for the purpose of sparse grid collocation.
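For the univariate ingredient mentioned above, weighted Leja sequences can be generated greedily. The sketch below (not taken from the cited works) maximises $w(x)\prod_j |x-x_j|$ over a fine candidate grid, using the square root of the standard Gaussian density as the weight $w$, which is one common convention for "Gaussian" Leja points; the candidate grid, its range and the starting point are arbitrary choices of this sketch.

import numpy as np

def gaussian_leja(n_points, x0=0.0, grid=np.linspace(-8.0, 8.0, 40001)):
    """Greedy weighted Leja points: x_{k+1} = argmax_x w(x) * prod_j |x - x_j|."""
    w = np.exp(-grid**2 / 4.0)               # square root of the (unnormalised) Gaussian density
    pts = [x0]
    for _ in range(n_points - 1):
        obj = w * np.prod(np.abs(grid[:, None] - np.array(pts)[None, :]), axis=1)
        pts.append(grid[np.argmax(obj)])
    return np.array(pts)

print(gaussian_leja(7))                       # a nested sequence usable as sparse-grid abscissas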
We consider adaptive approximations of the parameter-to-solution map for elliptic operator equations depending on a large or infinite number of parameters, comparing approximation strategies of different degrees of nonlinearity: sparse polynomial expansions, general low-rank approximations separating spatial and parametric variables, and hierarchical tensor decompositions separating all variables. We describe corresponding adaptive algorithms based on a common generic template and show their near-optimality with respect to natural approximability assumptions for each type of approximation. Central ingredients in the resulting bounds for the total computational complexity are new operator compression results for the case of infinitely many parameters. We conclude with a comparison of the complexity estimates based on the actual approximability properties of classes of parametric model problems, which shows that the computational costs of optimized low-rank expansions can be significantly lower or higher than those of sparse polynomial expansions, depending on the particular type of parametric problem.
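As a small illustration of why the low-rank formats above can pay off (a generic toy, not the algorithms of the paper), one can sample a smooth parameter-to-solution map on tensor grids and inspect the singular value decay of the snapshot matrix; rapid decay is precisely the separability between spatial and parametric variables that low-rank expansions exploit. The particular map $u(x;y) = 1/(1+xy)$ below is an arbitrary smooth stand-in.

import numpy as np

x = np.linspace(0.0, 1.0, 200)               # "spatial" grid
y = np.linspace(0.0, 1.0, 100)               # parameter grid
U = 1.0 / (1.0 + np.outer(x, y))             # snapshot matrix u(x_i; y_j) of a smooth toy map
s = np.linalg.svd(U, compute_uv=False)
print("normalised singular values:", (s / s[0])[:8])
print("separation rank for 1e-8 relative accuracy:", int(np.sum(s / s[0] > 1e-8)))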
Hessian operators arising in inverse problems governed by partial differential equations (PDEs) play a critical role in delivering efficient, dimension-independent convergence both for Newton solution of deterministic inverse problems and for Markov chain Monte Carlo sampling of posteriors in the Bayesian setting. These methods require the ability to repeatedly perform operations on the Hessian such as multiplication with arbitrary vectors, solving linear systems, inversion, and (inverse) square root. Unfortunately, the Hessian is a (formally) dense, implicitly defined operator that is intractable to form explicitly for practical inverse problems, requiring as many PDE solves as there are inversion parameters. Low-rank approximations are effective when the data contain limited information about the parameters, but become prohibitive as the data become more informative. However, the Hessians for many inverse problems arising in practical applications can be well approximated by matrices that have hierarchically low-rank structure. Hierarchical matrix representations promise to overcome the high complexity of dense representations and provide effective data structures and matrix operations that have only log-linear complexity. In this work, we describe algorithms for constructing and updating hierarchical matrix approximations of Hessians, and illustrate them on a number of representative inverse problems involving time-dependent diffusion, advection-dominated transport, frequency domain acoustic wave propagation, and low frequency Maxwell equations, demonstrating up to an order of magnitude speedup compared to globally low-rank approximations.
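As a rough illustration of the hierarchically low-rank structure mentioned above (a generic HODLR-style toy on an explicitly formed kernel matrix, not the paper's algorithms, which work matrix-free on PDE Hessians), the following sketch recursively replaces off-diagonal blocks by truncated SVD factors and applies the compressed operator to a vector; the kernel, block size and rank are arbitrary choices of this sketch.

import numpy as np

def hodlr_compress(A, min_block=32, rank=8):
    """Nested representation of A: dense diagonal leaves, truncated-SVD off-diagonal blocks."""
    n = A.shape[0]
    if n <= min_block:
        return ("dense", A)
    m = n // 2
    def lowrank(B):
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        k = min(rank, len(s))
        return U[:, :k] * s[:k], Vt[:k]                    # factors U*diag(s) and Vt
    return ("hodlr",
            hodlr_compress(A[:m, :m], min_block, rank),
            hodlr_compress(A[m:, m:], min_block, rank),
            lowrank(A[:m, m:]), lowrank(A[m:, :m]), m)

def hodlr_matvec(H, x):
    """Apply the compressed operator to a vector (log-linear cost for fixed rank)."""
    if H[0] == "dense":
        return H[1] @ x
    _, A11, A22, (U12, V12), (U21, V21), m = H
    y = np.empty_like(x)
    y[:m] = hodlr_matvec(A11, x[:m]) + U12 @ (V12 @ x[m:])
    y[m:] = hodlr_matvec(A22, x[m:]) + U21 @ (V21 @ x[:m])
    return y

# Toy test: a kernel whose off-diagonal blocks are approximately low rank.
t = np.linspace(0.0, 1.0, 256)
A = 1.0 / (1.0 + 50.0 * np.abs(t[:, None] - t[None, :]))**2
v = np.random.default_rng(1).normal(size=256)
H = hodlr_compress(A)
print("relative matvec error:", np.linalg.norm(hodlr_matvec(H, v) - A @ v) / np.linalg.norm(A @ v))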
Qiang Du, Lili Ju, Xiao Li (2020)
The ubiquity of semilinear parabolic equations is illustrated by their numerous applications ranging from physics and biology to materials and social sciences. In this paper, we consider a practically desirable property for a class of semilinear parabolic equations of the abstract form $u_t=\mathcal{L}u+f[u]$, with $\mathcal{L}$ a linear dissipative operator and $f$ a nonlinear operator in space, namely a time-invariant maximum bound principle, in the sense that the time-dependent solution $u$ preserves for all time a uniform pointwise bound in absolute value imposed by its initial and boundary conditions. We first study an analytical framework for sufficient conditions on $\mathcal{L}$ and $f$ that lead to such a maximum bound principle for the time-continuous dynamical system of infinite or finite dimensions. Then, we utilize a suitable exponential time differencing approach with a properly chosen generator of a contraction semigroup to develop first- and second-order accurate temporal discretization schemes that satisfy the maximum bound principle unconditionally in the time-discrete setting. Error estimates of the proposed schemes are derived along with their energy stability. Extensions to vector- and matrix-valued systems are also discussed. We demonstrate that the abstract framework and analysis techniques developed here offer an effective and unified approach to study the maximum bound principle of abstract evolution equations covering a wide variety of well-known models and their numerical discretization schemes. Some numerical experiments are also carried out to verify the theoretical results.
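As a rough illustration of such a scheme (a toy, not the paper's code), the sketch below runs a stabilised first-order exponential time differencing (ETD1) step for the one-dimensional Allen-Cahn equation $u_t = \varepsilon^2 u_{xx} + u - u^3$ with periodic boundary conditions, using the standard centred-difference Laplacian, which is circulant and hence diagonalised exactly by the FFT; the stabilisation constant $\kappa = 2$ dominates $\max_{|\xi|\le 1}|f'(\xi)|$, which is what lets the bound $|u|\le 1$ survive each step. All numerical parameter values are illustrative.

import numpy as np

n, eps, dt, kappa = 256, 0.05, 0.1, 2.0          # grid size, interface width, time step, stabilisation
h = 2.0 * np.pi / n                              # periodic domain [0, 2*pi)
x = h * np.arange(n)
m = np.fft.fftfreq(n, d=1.0 / n)                 # integer Fourier mode numbers
Lk = eps**2 * (2.0 * np.cos(2.0 * np.pi * m / n) - 2.0) / h**2 - kappa   # eigenvalues of eps^2*D_xx - kappa*I
E = np.exp(dt * Lk)                              # exact exponential of the circulant operator
phi1 = (E - 1.0) / (dt * Lk)                     # phi_1(dt*L); all Lk < 0, so no division by zero

u = 0.9 * np.sin(3 * x) + 0.05 * np.cos(7 * x)   # initial datum with |u| <= 1
fnl = lambda v: v - v**3                         # Allen-Cahn nonlinearity

for step in range(500):                          # ETD1: u <- e^{dt*L} u + dt*phi_1(dt*L) (f(u) + kappa*u)
    Nhat = np.fft.fft(fnl(u) + kappa * u)
    u = np.real(np.fft.ifft(E * np.fft.fft(u) + dt * phi1 * Nhat))

print("max |u| after 500 steps:", np.max(np.abs(u)))   # expected to stay within the bound 1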