We introduce a method-of-lines formulation of the closest point method, a numerical technique for solving partial differential equations (PDEs) defined on surfaces. This is an embedding method, which represents the surface implicitly within a band of the embedding space surrounding it. We define a modified equation in the band, obtained in a straightforward way from the original evolution PDE, and show that the solutions of this equation are consistent with those of the surface equation. The resulting system can then be solved with standard implicit or explicit time-stepping schemes, and the solutions in the band can be restricted to the surface. Our derivation generalizes existing formulations of the closest point method and is amenable to standard convergence analysis.
Deep-learning-based methods are increasingly employed to overcome the computational challenges of high-dimensional partial differential equations (PDEs). However, computing high order derivatives of neural networks is costly, and high order derivatives lack robustness during training. We propose a novel approach to solving PDEs with high order derivatives by approximating the function value and its derivatives simultaneously. We introduce intermediate variables to rewrite the PDE as a system of low order differential equations, as is done in the local discontinuous Galerkin method. The intermediate variables and the solution to the PDE are approximated simultaneously by a multi-output deep neural network. By taking the residual of the system as a loss function, we optimize the network parameters to approximate the solution. The whole process relies only on low order derivatives. Numerous numerical examples demonstrate that our local deep learning method is efficient, robust, flexible, and particularly well-suited for high-dimensional PDEs with high order derivatives.
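The rewriting idea can be illustrated without a neural network. The sketch below is an illustrative stand-in: it rewrites the fourth-order problem u'''' = f as the low order system p = u'', p'' = f and solves it with second-order finite difference stencils only; the paper instead approximates (u, p) with one multi-output network and penalizes the same system residuals as a loss.

```python
import numpy as np

# u'''' = f on (0,1) with u = u'' = 0 at the ends, rewritten as
#   p'' = f   (p plays the role of the intermediate variable u''),
#   u'' = p,
# so that only second-order operators are ever applied.
n = 99
h = 1.0 / (n + 1)
xs = np.linspace(h, 1 - h, n)
f = np.pi**4 * np.sin(np.pi * xs)          # manufactured so u = sin(πx)

# second-difference matrix (homogeneous Dirichlet data)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

p = np.linalg.solve(A, f)                  # solve p'' = f  → p = u''
u = np.linalg.solve(A, p)                  # solve u'' = p  → u
err = np.abs(u - np.sin(np.pi * xs)).max()
```

The same two residuals, p − u'' and p'' − f, are exactly what the multi-output network would be trained to drive to zero, with automatic differentiation supplying only second derivatives.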
We investigate solving partial integro-differential equations (PIDEs) using unsupervised deep learning in this paper. Pricing options whose underlying processes follow Lévy processes requires solving PIDEs. In supervised deep learning, pre-computed labels are used to train neural networks to fit the solution of the PIDE. In unsupervised deep learning, a neural network is employed as the solution, and the derivatives and integrals in the PIDE are calculated from the network itself. By matching the PIDE and its boundary conditions, the neural network yields an accurate solution of the PIDE. Once trained, the network can compute option values as well as option Greeks quickly.
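As a small illustration of evaluating the integral term of a PIDE from a candidate solution, as an unsupervised loss must do at every training point, the sketch below uses a Merton-style Gaussian jump measure and the stand-in "network" u(x) = exp(a x), for which the integral has a closed form; all parameter values are hypothetical.

```python
import numpy as np

# Jump measure ν(dz) = λ · N(μ, δ²) dz.  For u(x) = exp(a x),
#   ∫ [u(x+z) − u(x)] ν(dz) = λ (exp(aμ + a²δ²/2) − 1) u(x),
# so a simple quadrature of the term can be checked exactly.
lam, mu, delta, a, x0 = 0.4, -0.1, 0.25, 0.7, 1.0
u = lambda x: np.exp(a * x)

z = np.linspace(mu - 8 * delta, mu + 8 * delta, 4001)
nu = lam * np.exp(-(z - mu)**2 / (2 * delta**2)) / (delta * np.sqrt(2 * np.pi))
g = (u(x0 + z) - u(x0)) * nu
quad = np.sum(0.5 * (g[:-1] + g[1:])) * (z[1] - z[0])   # trapezoidal rule

exact = lam * (np.exp(a * mu + a**2 * delta**2 / 2) - 1) * u(x0)
err = abs(quad - exact)
```

In the unsupervised setting this quadrature is applied to the network output rather than a known function, and its value enters the PIDE residual that the training loss drives to zero.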
This paper presents a novel semi-analytical collocation method to solve multi-term variable-order time fractional partial differential equations (VOTFPDEs). The proposed method employs a Fourier series expansion for spatial discretization, which transforms the original multi-term VOTFPDEs into a sequence of multi-term variable-order time fractional ordinary differential equations (VOTFODEs). These VOTFODEs are then solved using the recently developed backward substitution method. Several numerical examples verify the accuracy and efficiency of the proposed approach for multi-term VOTFPDEs.
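The spatial step can be illustrated on the simplest integer-order case: a Fourier (sine) expansion decouples the heat equation into one ODE per mode. The sketch below is an illustrative stand-in, not the paper's scheme — its time ODEs are integer-order and solved exactly, whereas the paper's VOTFODEs carry variable-order Caputo derivatives and are handled by the backward substitution method.

```python
import numpy as np

# u_t = u_xx on (0,1), u(0,t) = u(1,t) = 0.  Writing
# u(x,t) = Σ_k c_k(t) sin(kπx) turns the PDE into the decoupled ODEs
#   c_k'(t) = −(kπ)² c_k(t),
# one per retained mode.
n, K, T = 400, 20, 0.02
xg = np.linspace(0, 1, n + 1)
u0 = xg * (1 - xg)                        # initial condition

k = np.arange(1, K + 1)
modes = np.sin(np.pi * np.outer(k, xg))   # sin(kπx) sampled on the grid
# sine-series coefficients by trapezoidal quadrature: c_k = 2∫ u0 sin(kπx) dx
w = np.full(n + 1, 1.0 / n)
w[[0, -1]] = 0.5 / n
c0 = 2 * modes @ (u0 * w)

c_T = c0 * np.exp(-(np.pi * k)**2 * T)    # each ODE solved independently
uT = c_T @ modes                          # back to physical space

# reference: exact sine coefficients of x(1-x) are 8/(kπ)³ for odd k
exact = sum(8 / (m * np.pi)**3 * np.exp(-(m * np.pi)**2 * T)
            * np.sin(m * np.pi * xg) for m in range(1, 60, 2))
err = np.abs(uT - exact).max()
```

In the multi-term variable-order setting the exponential solve above is replaced, mode by mode, with a time-fractional ODE solver, while the spatial expansion is unchanged.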
Solving partial differential equations (PDEs) by parametrizing their solutions with neural networks (NNs) has become popular in the past few years. However, different loss functions can be proposed for the same PDE. For the Poisson equation, the loss can be based on the weak formulation of the energy variation or on the least squares method, leading to the deep Ritz model and the deep Galerkin model, respectively. Loss landscapes from these different models give rise to different practical performance when training the NN parameters. To investigate and understand such practical differences, we propose to compare the loss landscapes of these models, which are both high dimensional and highly non-convex. In this setting, roughness matters more than traditional eigenvalue analysis for describing the non-convexity. We contribute to such landscape comparisons by proposing a roughness index that quantitatively captures the heuristic notion of the roughness of a landscape around its minimizers. The index is based on random projections and the variance of the (normalized) total variation of the one-dimensional projected functions, and it is efficient to compute. A large roughness index indicates an oscillatory landscape profile, a severe challenge for first order optimization methods. Applying this index to the two models for the Poisson equation, our empirical results reveal a consistent observation: the landscapes of the deep Galerkin method around its local minimizers are less rough than those of the deep Ritz method, which supports the observed gain in accuracy of the deep Galerkin method.
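A roughness index in this spirit can be sketched directly: project the landscape onto random one-dimensional directions through a minimizer, compute the total variation of each projected profile, and take the variance across directions. The normalization below (total variation divided by the profile's range) is an assumption for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

def roughness_index(f, center, n_dirs=200, radius=1.0, n_pts=201, seed=0):
    """Variance of normalized total variation over random 1D projections."""
    rng = np.random.default_rng(seed)
    ts = np.linspace(-radius, radius, n_pts)
    ntvs = []
    for _ in range(n_dirs):
        v = rng.standard_normal(center.size)
        v /= np.linalg.norm(v)                        # random unit direction
        g = np.array([f(center + t * v) for t in ts]) # 1D projected profile
        tv = np.abs(np.diff(g)).sum()                 # total variation
        span = g.max() - g.min()
        ntvs.append(tv / span if span > 0 else 0.0)   # normalized TV
    return np.var(ntvs)

d = 10
smooth = lambda x: np.dot(x, x)                       # convex bowl
wiggly = lambda x: np.dot(x, x) + 0.1 * np.sin(30 * x).sum()

c = np.zeros(d)
r_smooth = roughness_index(smooth, c)   # every profile is t², NTV ≡ 2
r_wiggly = roughness_index(wiggly, c)   # oscillations vary by direction
```

For the pure quadratic every projected profile has the same normalized total variation, so the index is essentially zero; adding direction-dependent oscillations makes it strictly larger, matching the intuition that the index flags oscillatory profiles.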
As further progress in the accurate and efficient computation of coupled partial differential equations (PDEs) becomes increasingly difficult, new methods for such computations are highly desirable. Departing from conventional approaches, this short communication explores a computational paradigm that couples numerical solutions of PDEs via machine-learning (ML) based methods, together with a preliminary study of the paradigm. In particular, it solves PDEs in subdomains as in a conventional approach, but develops and trains artificial neural networks (ANNs) to couple the PDE solutions at the subdomain interfaces, yielding solutions over the whole domain. The concepts and algorithms for the ML coupling are discussed using coupled Poisson equations and coupled advection-diffusion equations. Preliminary numerical examples illustrate the feasibility and performance of the ML coupling. Although preliminary, the results of this exploratory study indicate that the ML paradigm is promising and deserves further research.
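To make the coupling point concrete, the sketch below solves a 1D Poisson problem on two overlapping subdomains, with a plain alternating-Schwarz exchange supplying the interface values; under the paradigm described above, a trained ANN would replace this hand-coded exchange at the interfaces, while the subdomain solves stay conventional.

```python
import numpy as np

# −u'' = 2 on (0,1), u(0) = u(1) = 0, exact solution u = x(1−x),
# split into Ω₁ = [0, 0.6] and Ω₂ = [0.4, 1] with overlap [0.4, 0.6].
n, h, f = 100, 0.01, 2.0
xg = np.linspace(0, 1, n + 1)
i1, i2 = 60, 40                     # grid indices of x = 0.6 and x = 0.4
u = np.zeros(n + 1)                 # global iterate

def solve_dirichlet(m, left, right):
    """FD solve of −u'' = f on m intervals with given end values."""
    A = (np.diag(2.0 * np.ones(m - 1)) - np.diag(np.ones(m - 2), 1)
         - np.diag(np.ones(m - 2), -1)) / h**2
    b = f * np.ones(m - 1)
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.concatenate([[left], np.linalg.solve(A, b), [right]])

for _ in range(30):                 # interface exchange = the "coupling"
    u[:i1 + 1] = solve_dirichlet(i1, 0.0, u[i1])      # Ω₁ solve
    u[i2:] = solve_dirichlet(n - i2, u[i2], 0.0)      # Ω₂ solve

err = np.abs(u - xg * (1 - xg)).max()
```

The only data passed between subdomains are the interface values u(0.4) and u(0.6) — exactly the quantities an interface-coupling network would be trained to predict.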