
KO-PDE: Kernel Optimized Discovery of Partial Differential Equations with Varying Coefficients

Published by: Yingtao Luo
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Partial differential equations (PDEs) fitted to scientific data can represent physical laws with explainable mechanisms across mathematically oriented subjects. Most natural dynamics are expressed by PDEs with varying coefficients (PDEs-VC), which underscores the importance of PDE discovery. Previous algorithms can discover some simple instances of PDEs-VC but fail to discover PDEs with more complex coefficients, as a result of inaccurate coefficient estimation. In this paper, we propose KO-PDE, a kernel-optimized regression method that incorporates kernel density estimation over adjacent coefficients to reduce the coefficient estimation error. KO-PDE can discover PDEs-VC on which previous baselines fail and is more robust to the inevitable noise in data. In experiments, KO-PDE discovers the PDEs-VC of all seven challenging spatiotemporal scientific datasets in fluid dynamics, while the three baselines return false results in most cases. With state-of-the-art performance, KO-PDE sheds light on the automatic description of natural phenomena in the real world using discovered PDEs.
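The core idea the abstract describes, smoothing noisy pointwise coefficient estimates with a kernel over adjacent locations, can be illustrated with a minimal sketch. This is not the authors' KO-PDE implementation; the function name, the Gaussian kernel, and the toy coefficient `sin(x)` are all assumptions for illustration:

```python
import numpy as np

def kernel_smooth(coeffs, x, bandwidth=0.1):
    """Nadaraya-Watson (Gaussian-kernel) smoothing of pointwise
    coefficient estimates over the spatial grid x.

    Illustrative sketch of kernel-based coefficient estimation,
    not the paper's actual KO-PDE algorithm.
    """
    smoothed = np.empty_like(coeffs)
    for i, xi in enumerate(x):
        w = np.exp(-0.5 * ((x - xi) / bandwidth) ** 2)  # Gaussian weights
        smoothed[i] = np.sum(w * coeffs) / np.sum(w)
    return smoothed

# Toy example: true varying coefficient a(x) = sin(x),
# observed only through noisy pointwise estimates.
rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 200)
noisy = np.sin(x) + 0.3 * rng.standard_normal(200)
smooth = kernel_smooth(noisy, x, bandwidth=0.15)

# Borrowing strength from adjacent points reduces the estimation error,
# which is the failure mode the abstract attributes to prior methods.
mse_noisy = np.mean((noisy - np.sin(x)) ** 2)
mse_smooth = np.mean((smooth - np.sin(x)) ** 2)
```

In a sparse-regression discovery pipeline, the smoothed estimates would then feed back into the regression that selects the PDE terms.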




Read also

We present a deep learning algorithm for the numerical solution of parametric families of high-dimensional linear Kolmogorov partial differential equations (PDEs). Our method is based on reformulating the numerical approximation of a whole family of Kolmogorov PDEs as a single statistical learning problem using the Feynman-Kac formula. Successful numerical experiments are presented, which empirically confirm the functionality and efficiency of our proposed algorithm in the case of heat equations and Black-Scholes option pricing models parametrized by affine-linear coefficient functions. We show that a single deep neural network trained on simulated data is capable of learning the solution functions of an entire family of PDEs on a full space-time region. Most notably, our numerical observations and theoretical results also demonstrate that the proposed method does not suffer from the curse of dimensionality, distinguishing it from almost all standard numerical methods for PDEs.
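The Feynman-Kac reformulation this abstract builds on can be sketched in a few lines: the heat-equation solution at a point is an expectation over Brownian paths, which is exactly what generates the simulated training data. A minimal Monte Carlo version (the paper instead trains a single network over the whole parametric family; the function name and toy payoff are assumptions):

```python
import numpy as np

def feynman_kac_heat(g, x, t, n_paths=100_000, seed=0):
    """Monte Carlo Feynman-Kac estimate of the heat-equation solution
    u(t, x) = E[g(x + W_t)], where W_t is Brownian motion at time t.

    Illustrative sketch of the stochastic representation behind the
    paper's learning problem, not the paper's deep learning method.
    """
    rng = np.random.default_rng(seed)
    w = np.sqrt(t) * rng.standard_normal(n_paths)  # W_t ~ N(0, t)
    return float(np.mean(g(x + w)))

# For g(x) = x**2, the exact solution of u_t = u_xx / 2 is u(t, x) = x**2 + t.
u_hat = feynman_kac_heat(lambda y: y ** 2, x=1.0, t=0.5)
```

Replacing the per-point average with a neural network trained on sampled `(x, t, parameters)` tuples is what lets one model cover a full space-time region for an entire PDE family.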
Sahani Pathiraja (2020)
The aim of this paper is to obtain convergence in mean in the uniform topology of piecewise linear approximations of Stochastic Differential Equations (SDEs) with $C^1$ drift and $C^2$ diffusion coefficients with uniformly bounded derivatives. Convergence analyses for such Wong-Zakai approximations most often assume that the coefficients of the SDE are uniformly bounded. Almost sure convergence in the unbounded case can be obtained using now-standard rough path techniques, although $L^q$ convergence appears yet to be established and is of importance for several applications involving Monte Carlo approximations. We consider $L^2$ convergence in the unbounded case using a combination of traditional stochastic analysis and rough path techniques. We expect our proof technique to extend to more general piecewise smooth approximations.
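A Wong-Zakai approximation can be demonstrated numerically on the one explicitly solvable case: the random ODE driven by a piecewise linear interpolation of a Brownian path converges to the Stratonovich SDE solution. A sketch under those assumptions (the paper's actual contribution is an $L^2$ convergence analysis for general $C^1$/$C^2$ coefficients, not this toy case):

```python
import numpy as np

# Fine Brownian path on [0, 1] and its piecewise linear interpolation
# through a coarse partition (the Wong-Zakai approximation of the driver).
rng = np.random.default_rng(1)
T, n = 1.0, 2 ** 12
t_fine = np.linspace(0.0, T, n + 1)
dw = np.sqrt(T / n) * rng.standard_normal(n)
w_fine = np.concatenate(([0.0], np.cumsum(dw)))
t_coarse = t_fine[::64]
w_lin = np.interp(t_fine, t_coarse, w_fine[::64])  # piecewise linear path

# Euler-solve the random ODE dx/dt = x * w'(t) driven by the smooth path.
# Wong-Zakai theory says this converges to the Stratonovich SDE
# dX = X o dW, whose solution is X_t = exp(W_t).
x = np.empty(n + 1)
x[0] = 1.0
for k in range(n):
    x[k + 1] = x[k] * (1.0 + (w_lin[k + 1] - w_lin[k]))

rel_err = abs(x[-1] - np.exp(w_fine[-1])) / np.exp(w_fine[-1])
```

Here the endpoint of the interpolated path coincides with the Brownian endpoint, so the remaining discrepancy is the Euler discretization error.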
Recently, researchers have utilized neural networks to accurately solve partial differential equations (PDEs), enabling mesh-free methods for scientific computation. Unfortunately, network performance drops when encountering a high-nonlinearity domain. To improve generalizability, we introduce the novel approach of employing multi-task learning techniques, the uncertainty-weighting loss and gradient surgery, in the context of learning PDE solutions. The multi-task scheme exploits the benefits of learning shared representations, controlled by cross-stitch modules, between multiple related PDEs, which are obtainable by varying the PDE parameterization coefficients, to generalize better on the original PDE. To encourage the network to pay closer attention to the high-nonlinearity domain regions that are more challenging to learn, we also propose adversarial training for generating supplementary high-loss samples, similarly distributed to the original training distribution. In the experiments, our proposed methods are found to be effective and reduce the error on unseen data points compared to previous approaches in various PDE examples, including high-dimensional stochastic PDEs.
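The uncertainty-weighting loss the abstract mentions is usually the homoscedastic-uncertainty combination of per-task losses, where a learnable log-variance scales each task. A minimal sketch of that combination (the function name, exact form, and toy numbers are assumptions, not the paper's code):

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-PDE task losses with learnable log-variances s_i:
    total = sum_i exp(-s_i) * L_i + s_i.

    Illustrative sketch of uncertainty weighting for multi-task PDE
    learning; in practice the s_i are trained jointly with the network.
    """
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# Two related PDE tasks with very different loss scales; choosing
# s_i = log(L_i) rescales each task's contribution to 1 + log(L_i).
total = uncertainty_weighted_loss([10.0, 0.1], [np.log(10.0), np.log(0.1)])
```

The point is that tasks with large raw losses are automatically down-weighted, so no manual per-PDE loss balancing is needed.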
Quantum computers can produce a quantum encoding of the solution of a system of differential equations exponentially faster than a classical algorithm can produce an explicit description. However, while high-precision quantum algorithms for linear ordinary differential equations are well established, the best previous quantum algorithms for linear partial differential equations (PDEs) have complexity $\mathrm{poly}(1/\epsilon)$, where $\epsilon$ is the error tolerance. By developing quantum algorithms based on adaptive-order finite difference methods and spectral methods, we improve the complexity of quantum algorithms for linear PDEs to $\mathrm{poly}(d, \log(1/\epsilon))$, where $d$ is the spatial dimension. Our algorithms apply high-precision quantum linear system algorithms to systems whose condition numbers and approximation errors we bound. We develop a finite difference algorithm for the Poisson equation and a spectral algorithm for more general second-order elliptic equations.
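The linear system a finite-difference approach hands to a quantum linear system solver can be seen classically. A sketch of the standard second-order discretization of the 1D Poisson problem, whose condition number is the kind of quantity the complexity bounds track (the paper's adaptive-order and spectral constructions are more elaborate than this):

```python
import numpy as np

def poisson_1d_matrix(n):
    """Tridiagonal matrix of the standard second-order finite-difference
    discretization of -u'' = f on (0, 1) with u(0) = u(1) = 0.

    Classical sketch of the linear system A u = h^2 f that a quantum
    linear system algorithm would be applied to.
    """
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)   # chosen so the exact solution is sin(pi x)
A = poisson_1d_matrix(n)
u = np.linalg.solve(A, h ** 2 * f)

err = np.max(np.abs(u - np.sin(np.pi * x)))   # O(h^2) discretization error
cond = np.linalg.cond(A)                      # grows like O(h^-2)
```

Bounding this condition number and the discretization error is what lets the quantum solver achieve $\mathrm{poly}(d, \log(1/\epsilon))$ overall complexity.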
Hao Xu, Dongxiao Zhang (2021)
Data-driven discovery of partial differential equations (PDEs) has achieved considerable development in recent years. Several aspects of the problem have been resolved by sparse-regression-based and neural-network-based methods. However, the performance of existing methods lacks stability when dealing with complex situations, including sparse data with high noise, high-order derivatives, and shock waves, which make it difficult to calculate derivatives accurately. Therefore, a robust PDE discovery framework, called the robust deep learning-genetic algorithm (R-DLGA), that incorporates the physics-informed neural network (PINN), is proposed in this work. In the framework, a preliminary result of potential terms provided by the deep learning-genetic algorithm is added to the loss function of the PINN as a physical constraint to improve the accuracy of derivative calculation. This helps to optimize the preliminary result and obtain the ultimately discovered PDE by eliminating the error-compensation terms. The stability and accuracy of the proposed R-DLGA in several complex situations are examined as proof of concept, and the results show that the proposed framework is able to calculate derivatives accurately with the optimization of the PINN and possesses surprising robustness to complex situations, including sparse data with high noise, high-order derivatives, and shock waves.
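The key loss construction described here, data misfit plus the residual of the preliminary PDE structure used as a physical constraint, can be sketched without the full framework. In R-DLGA itself the prediction and its derivatives come from a PINN via automatic differentiation; here precomputed arrays and the Burgers-like term set stand in as assumed examples:

```python
import numpy as np

def rdlga_style_loss(u_pred, u_data, term_matrix, coeffs, lam=1.0):
    """Data misfit plus the residual of candidate PDE terms (found by
    the genetic algorithm) imposed as a physical constraint.

    Illustrative sketch of the loss composition only, not the actual
    R-DLGA implementation: term_matrix columns are candidate terms
    (e.g. u_t, u*u_x, u_xx) and coeffs their preliminary coefficients.
    """
    data_loss = np.mean((u_pred - u_data) ** 2)
    residual = term_matrix @ coeffs          # vanishes for the true PDE
    physics_loss = np.mean(residual ** 2)
    return float(data_loss + lam * physics_loss)

# Toy check with a Burgers-like structure u_t + u*u_x - nu*u_xx = 0,
# with derivative arrays constructed so the residual is exactly zero.
rng = np.random.default_rng(2)
u = rng.standard_normal(50)
u_x = rng.standard_normal(50)
u_xx = rng.standard_normal(50)
u_t = -u * u_x + 0.1 * u_xx
terms = np.column_stack([u_t, u * u_x, u_xx])
loss = rdlga_style_loss(u, u, terms, np.array([1.0, 1.0, -0.1]))
```

When the network's output both fits the data and satisfies the preliminary PDE, this combined loss vanishes, which is what pushes the PINN toward accurate derivatives.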
