In this paper we introduce a new approach to compute rigorously solutions of Cauchy problems for a class of semi-linear parabolic partial differential equations. Expanding solutions with Chebyshev series in time and Fourier series in space, we introduce a zero-finding problem $F(a)=0$ on a Banach algebra $X$ of Fourier-Chebyshev sequences, whose solution solves the Cauchy problem. The challenge lies in the fact that the linear part $\mathcal{L} := DF(0)$ has an infinite block-diagonal structure with blocks becoming less and less diagonally dominant at infinity. We introduce analytic estimates to show that $\mathcal{L}$ is a boundedly invertible linear operator on $X$, and we obtain explicit, rigorous and computable bounds for the operator norm $\|\mathcal{L}^{-1}\|_{B(X)}$. These bounds are then used to verify the hypotheses of a Newton-Kantorovich type argument, which shows that the (Newton-like) operator $\mathcal{T}(a) := a - \mathcal{L}^{-1} F(a)$ is a contraction on a small ball centered at a numerical approximation of the solution of the Cauchy problem. The contraction mapping theorem yields a fixed point which corresponds to a classical (strong) solution of the Cauchy problem. The approach is simple to implement, numerically stable and applicable to a class of PDE models, which includes for instance Fisher's equation, the Kuramoto-Sivashinsky equation, the Swift-Hohenberg equation and the phase-field crystal (PFC) equation. We apply our approach to each of these models and report promising experimental results, which motivate further research on the method.
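The structure of the Newton-like operator $\mathcal{T}(a) = a - \mathcal{L}^{-1} F(a)$, with the linear part frozen at a numerical approximation, can be illustrated on a scalar toy problem. This is a hedged sketch only: the choices of `F`, `a0` and the scalar "operator" `L` below are illustrative stand-ins, not the paper's Fourier-Chebyshev setting.

```python
# Toy scalar analogue of the Newton-like contraction T(a) = a - L^{-1} F(a),
# where L is the derivative frozen at a numerical approximation a0.
# F and a0 are illustrative; the root of F is exactly a = 1.

def F(a):
    # toy zero-finding problem F(a) = a^3 + a - 2, with root a = 1
    return a**3 + a - 2.0

a0 = 0.9                    # numerical approximation of the root
L = 3.0 * a0**2 + 1.0       # L := DF(a0), here just a scalar derivative

a = a0
for _ in range(50):         # iterate the contraction T
    a = a - F(a) / L

print(a)                    # converges to the exact root 1.0
```

Because `L` is close to the true derivative at the root, the iteration contracts even though the derivative is never updated, which is the mechanism the Newton-Kantorovich argument quantifies.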
Relying on the classical connection between backward stochastic differential equations (BSDEs) and nonlinear parabolic partial differential equations (PDEs), we propose a new probabilistic learning scheme for solving high-dimensional semi-linear parabolic PDEs. This scheme is inspired by the machine-learning approach developed with deep neural networks in Han et al. [32]. Our algorithm is based on a Picard iteration scheme in which a sequence of linear-quadratic optimisation problems is solved by means of a stochastic gradient descent (SGD) algorithm. In the framework of a linear specification of the approximation space, we manage to prove a convergence result for our scheme under a smallness condition. In practice, in order to be able to treat high-dimensional examples, we employ sparse grid approximation spaces. In the case of periodic coefficients and using pre-wavelet basis functions, we obtain an upper bound on the global complexity of our method. It shows in particular that the curse of dimensionality is tamed, in the sense that, in order to achieve a root mean squared error of order $\epsilon$, for a prescribed precision $\epsilon$, the complexity of the Picard algorithm grows polynomially in $\epsilon^{-1}$, up to a logarithmic factor $|\log(\epsilon)|$ whose power grows linearly with respect to the PDE dimension. Various numerical results are presented to validate the performance of our method and to compare it with some recent machine learning schemes proposed in Han et al. [20] and Huré et al. [37].
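The outer-Picard / inner-SGD structure can be sketched on a toy problem: each Picard step solves a linear-quadratic subproblem $\min_\theta \|A\theta - b(\theta_{\mathrm{prev}})\|^2$ by plain SGD over sampled rows. This is a hedged illustration of the loop structure only; the matrix `A`, the map `b`, and the step sizes are invented stand-ins, not the paper's BSDE scheme.

```python
import numpy as np

# Outer Picard iteration; each step solves a linear-quadratic problem
# min_theta 0.5 ||A theta - rhs||^2 by SGD over randomly sampled rows.
# A, b and the fixed point [1, -1] are toy stand-ins for illustration.

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.3], [0.1, 1.5]])
target = np.array([1.0, -1.0])          # fixed point we want to recover

def b(theta_prev):
    # right-hand side depends mildly on the previous Picard iterate,
    # so the outer map is a contraction toward `target`
    return A @ target + 0.1 * (theta_prev - target)

theta = np.zeros(2)
for picard_step in range(20):           # outer Picard loop
    rhs = b(theta)
    x = theta.copy()
    for _ in range(2000):               # inner SGD on the quadratic
        i = rng.integers(2)             # sample one row (stochastic)
        grad = (A[i] @ x - rhs[i]) * A[i]
        x -= 0.05 * grad
    theta = x

print(theta)                            # approaches [1.0, -1.0]
```

The inner problem is consistent (zero residual at its solution), so constant-step SGD converges without a variance floor; the smallness condition in the paper plays the role of the factor 0.1 making the outer map contractive.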
In this paper, we propose deep neural network (DNN) learning algorithms, based on forward-backward stochastic differential equations (FBSDEs), for the solution of high-dimensional quasilinear parabolic partial differential equations (PDEs), which are related to the FBSDEs by the Pardoux-Peng theory. The algorithms rely on a learning process that minimizes the pathwise difference between two discrete stochastic processes, defined by the time discretization of the FBSDEs and the DNN representation of the PDE solutions, respectively. The proposed algorithms are shown to generate DNN solutions for a 100-dimensional Black--Scholes--Barenblatt equation that are accurate in a finite region of the solution space and have a convergence rate similar to that of the Euler--Maruyama discretization used for the FBSDEs. As a result, a Richardson extrapolation technique over time discretizations can be used to enhance the accuracy of the DNN solutions. For time-oscillatory solutions, a multiscale DNN is shown to improve the performance of the FBSDE DNN at high frequencies.
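The Richardson-extrapolation idea exploited above can be sketched on a deterministic toy: forward Euler (like Euler-Maruyama in the weak sense) has leading error $O(h)$, so the combination $2u_{h/2} - u_h$ cancels that term. This is a hedged analogue on a scalar ODE, not the paper's FBSDE setting.

```python
import math

# Richardson extrapolation over time discretizations, illustrated on the
# toy ODE u' = u, u(0) = 1 on [0, 1], whose exact value is u(1) = e.
# Euler's global error is O(h), so 2*u_{h/2} - u_h cancels the O(h) term.

def euler(n):
    h, u = 1.0 / n, 1.0
    for _ in range(n):
        u += h * u
    return u

coarse, fine = euler(100), euler(200)
extrapolated = 2.0 * fine - coarse
print(abs(coarse - math.e), abs(extrapolated - math.e))
```

The extrapolated error is roughly two orders of magnitude below the coarse one, mirroring how extrapolation over Euler-Maruyama step sizes sharpens the DNN solutions.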
The tangential condition was introduced in [Hanke et al., 1995] as a sufficient condition for convergence of the Landweber iteration for solving ill-posed problems. In this paper we present a series of time-dependent benchmark inverse problems for which we can verify this condition.
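For reference, the Landweber iteration in its simplest linear form reads $x_{k+1} = x_k + \omega A^\top(y - Ax_k)$; the tangential condition concerns the nonlinear analogue with $A$ replaced by a Fréchet derivative. The following is a hedged, minimal linear sketch with an invented toy operator, not one of the paper's benchmark problems.

```python
import numpy as np

# Linear Landweber iteration x_{k+1} = x_k + A^T (y - A x_k) on a toy
# ill-conditioned problem. A and x_true are illustrative; step size 1 is
# admissible here since ||A|| = 1.

A = np.diag([1.0, 0.1])        # ill-conditioned forward operator
x_true = np.array([1.0, 2.0])
y = A @ x_true                  # exact (noise-free) data
x = np.zeros(2)
for _ in range(5000):
    x = x + A.T @ (y - A @ x)   # gradient step on 0.5||A x - y||^2
print(x)                        # recovers x_true
```

The poorly resolved component converges slowly (factor $1-\sigma_2^2$ per step), which is exactly why convergence theory, and conditions such as the tangential condition in the nonlinear case, are needed.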
In this paper, we propose a fast spectral-Galerkin method for solving PDEs involving the integral fractional Laplacian in $\mathbb{R}^d$, which is built upon two essential components: (i) the Dunford-Taylor formulation of the fractional Laplacian; and (ii) Fourier-like bi-orthogonal mapped Chebyshev functions (MCFs) as basis functions. As a result, the fractional Laplacian can be fully diagonalised, and the complexity of solving an elliptic fractional PDE is quasi-optimal, i.e., $O((N\log_2 N)^d)$ with $N$ being the number of modes in each spatial direction. Ample numerical tests for various decaying exact solutions show that the convergence of the fast solver perfectly matches the order of the theoretical error estimates. With a suitable time discretization, the fast solver can be directly applied to a large class of nonlinear fractional PDEs. As an example, we solve the fractional nonlinear Schrödinger equation by using the fourth-order time-splitting method together with the proposed MCF spectral-Galerkin method.
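What "fully diagonalised" means can be seen in the simpler periodic analogue: in a Fourier basis, $(-\Delta)^s$ acts diagonally with symbol $|k|^{2s}$, so applying it costs one FFT pair. This is a hedged illustration on the torus; the paper's MCF basis plays the corresponding role on $\mathbb{R}^d$.

```python
import numpy as np

# Periodic analogue of a diagonalised fractional Laplacian: in Fourier
# space, (-Delta)^s multiplies mode k by |k|^(2s). For u = sin(3x) this
# gives exactly 3^(2s) * sin(3x), which we verify to machine precision.

N, s = 64, 0.5
x = 2 * np.pi * np.arange(N) / N
u = np.sin(3 * x)                       # eigenfunction with eigenvalue 3^(2s)
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers 0..31, -32..-1
symbol = np.abs(k) ** (2 * s)
frac_lap_u = np.real(np.fft.ifft(symbol * np.fft.fft(u)))
print(np.max(np.abs(frac_lap_u - 3.0 ** (2 * s) * u)))   # ~ machine epsilon
```

Because the operator is diagonal in the transformed basis, an elliptic solve reduces to a division by the symbol, which is the source of the quasi-optimal complexity.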
In [Führer & Karkulik, Space-time least-squares finite elements for parabolic equations, arXiv:1911.01942, 2019], well-posedness of a space-time First-Order System Least-Squares formulation of the heat equation was proven. In the present work, this result is generalized to general second-order parabolic PDEs with possibly inhomogeneous boundary conditions, and plain convergence of a standard adaptive finite element method driven by the least-squares estimator is demonstrated. The proof of the latter easily extends to a large class of least-squares formulations.