Machine Learning of Partial Differential Equations from Noise Data

Posted by Wenbo Cao
Publication date: 2020
Research field: Electronic Engineering
Paper language: English





Machine learning of partial differential equations from data is a potential breakthrough for addressing the lack of physical equations in complex dynamic systems, but because numerical differentiation is ill-posed with respect to noisy data, noise has become the biggest obstacle to applying partial differential equation identification methods. To overcome this problem, we propose a Frequency Domain Identification method based on Fourier transforms, which effectively eliminates the influence of noise by using the low-frequency components of the frequency-domain data to identify partial differential equations in the frequency domain. We also propose a new sparse identification criterion that can accurately identify the terms of the equation from data with a low signal-to-noise ratio. By identifying a variety of canonical equations spanning a number of scientific domains, the proposed method is shown to have high accuracy and robustness in identifying equation structure and parameters from low signal-to-noise-ratio data. The method provides a promising technique for discovering potential partial differential equations from noisy experimental data.
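
The abstract includes no code, but the general workflow it describes (work with the low-frequency content of the noisy data, then select equation terms sparsely) can be illustrated. The following is a minimal NumPy sketch under assumed choices: synthetic heat-equation data, a spatial FFT low-pass filter, spectral derivatives, and sequential thresholded least squares standing in for the authors' Frequency Domain Identification algorithm and sparsity criterion; the cutoff, threshold, and noise level are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy data for the heat equation u_t = nu * u_xx on a periodic domain.
nu = 0.1
nx, nt = 256, 50
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
t = np.linspace(0, 2.0, nt)
X, T = np.meshgrid(x, t, indexing="ij")                       # u has shape (nx, nt)
u_clean = np.exp(-nu * T) * np.sin(X) + 0.5 * np.exp(-9 * nu * T) * np.sin(3 * X)
u = u_clean + 0.05 * np.std(u_clean) * rng.standard_normal((nx, nt))

# Low-pass filter in space: keep only the lowest wavenumbers of the noisy field.
k = 2 * np.pi * np.fft.fftfreq(nx, d=x[1] - x[0])             # angular wavenumbers
u_hat = np.fft.fft(u, axis=0)
u_hat[np.abs(k) > 4, :] = 0.0                                 # cutoff is an assumed tuning choice
u_filt = np.real(np.fft.ifft(u_hat, axis=0))

# Spectral spatial derivatives of the filtered field; time derivative by finite differences.
u_x = np.real(np.fft.ifft(1j * k[:, None] * np.fft.fft(u_filt, axis=0), axis=0))
u_xx = np.real(np.fft.ifft(-(k[:, None] ** 2) * np.fft.fft(u_filt, axis=0), axis=0))
u_t = np.gradient(u_filt, t, axis=1)

# Candidate library and sequential thresholded least squares (stand-in sparsity criterion).
names = ["u", "u_x", "u_xx", "u*u_x"]
Theta = np.column_stack([c.ravel() for c in (u_filt, u_x, u_xx, u_filt * u_x)])
xi, *_ = np.linalg.lstsq(Theta, u_t.ravel(), rcond=None)
for _ in range(10):
    xi[np.abs(xi) < 0.02] = 0.0                               # drop small coefficients
    big = xi != 0.0
    if big.any():
        xi[big], *_ = np.linalg.lstsq(Theta[:, big], u_t.ravel(), rcond=None)

print({n: round(c, 3) for n, c in zip(names, xi) if c != 0.0})
```

With these illustrative settings the thresholded regression is expected to retain a single term, u_xx, with a coefficient close to the true diffusivity of 0.1.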


Read also

We develop a framework for estimating unknown partial differential equations from noisy data, using a deep learning approach. Given noisy samples of a solution to an unknown PDE, our method interpolates the samples using a neural network, and extracts the PDE by equating derivatives of the neural network approximation. Our method applies to PDEs which are linear combinations of user-defined dictionary functions, and generalizes previous methods that only consider parabolic PDEs. We introduce a regularization scheme that prevents the function approximation from overfitting the data and forces it to be a solution of the underlying PDE. We validate the model on simulated data generated from known PDEs with added Gaussian noise, and we study our method under different levels of noise. We also compare the error of our method with a Cramer-Rao lower bound for an ordinary differential equation. Our results indicate that our method outperforms other methods in estimating PDEs, especially in the low signal-to-noise regime.
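
Below is a minimal PyTorch sketch of the interpolate-then-differentiate idea described above, leaving out the paper's regularization scheme that forces the interpolant to solve the PDE: an MLP is fitted to noisy samples of a heat-equation solution, automatic differentiation supplies u_t, u_x, u_xx, and dictionary coefficients are recovered by least squares. The network size, training settings, and dictionary are assumptions, and the coefficients come out only approximately.

```python
import math
import torch

torch.manual_seed(0)

# Noisy scattered samples of u(x, t) = exp(-0.1 t) sin(x) + 0.5 exp(-0.9 t) sin(3x),
# which solves the heat equation u_t = 0.1 * u_xx.
N = 2000
xt = torch.rand(N, 2) * torch.tensor([2 * math.pi, 2.0])       # columns: x, t
u_obs = (torch.exp(-0.1 * xt[:, 1]) * torch.sin(xt[:, 0])
         + 0.5 * torch.exp(-0.9 * xt[:, 1]) * torch.sin(3 * xt[:, 0]))
u_obs = u_obs + 0.02 * torch.randn(N)

# Small MLP interpolant u_theta(x, t), fitted to the data only (no PDE regularizer here).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(3000):
    opt.zero_grad()
    loss = torch.mean((net(xt).squeeze(-1) - u_obs) ** 2)
    loss.backward()
    opt.step()

# Differentiate the interpolant with autograd at fresh collocation points.
pts = torch.rand(1000, 2) * torch.tensor([2 * math.pi, 2.0])
pts.requires_grad_(True)
u = net(pts).squeeze(-1)
grads = torch.autograd.grad(u.sum(), pts, create_graph=True)[0]
u_x, u_t = grads[:, 0], grads[:, 1]
u_xx = torch.autograd.grad(u_x.sum(), pts, create_graph=True)[0][:, 0]

# Least-squares fit of u_t against a user-defined dictionary of candidate terms.
Theta = torch.stack([u, u_x, u_xx, u * u_x], dim=1).detach()
coeffs = torch.linalg.lstsq(Theta, u_t.detach().unsqueeze(1)).solution.squeeze(1)
print(dict(zip(["u", "u_x", "u_xx", "u*u_x"], coeffs.tolist())))  # the u_xx term should dominate
```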
We investigate methods for learning partial differential equation (PDE) models from spatiotemporal data under biologically realistic levels and forms of noise. Recent progress in learning PDEs from data has used sparse regression to select candidate terms from a denoised set of data, including approximated partial derivatives. We analyze the performance of previous denoising methods for the task of discovering the governing system of PDEs. We also develop a novel methodology that uses artificial neural networks (ANNs) to denoise data and approximate partial derivatives. We test the methodology on three PDE models for biological transport, i.e., the advection-diffusion, classical Fisher-KPP, and nonlinear Fisher-KPP equations. We show that the ANN methodology outperforms previous denoising methods, including finite differences and polynomial regression splines, in the ability to accurately approximate partial derivatives and learn the correct PDE model.
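
The core point of this abstract, that derivative estimation from noisy data is the bottleneck, can be shown in a few lines. The sketch below contrasts finite differences applied directly to noisy samples with the derivative of a smoothing spline fitted to the same samples; the spline is only a simple stand-in for the paper's ANN denoiser, and the test function, noise level, and smoothing factor are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 400)
u_true = np.sin(x)
u_noisy = u_true + 0.05 * rng.standard_normal(x.size)

# (a) finite differences applied directly to the noisy samples
ux_fd = np.gradient(u_noisy, x)

# (b) derivative of a smoothing spline fitted to the same samples
spline = UnivariateSpline(x, u_noisy, k=5, s=x.size * 0.05 ** 2)  # s ~ expected residual
ux_spline = spline.derivative()(x)

ux_true = np.cos(x)
print("finite-difference RMSE:", np.sqrt(np.mean((ux_fd - ux_true) ** 2)))
print("smoothing-spline RMSE:", np.sqrt(np.mean((ux_spline - ux_true) ** 2)))
```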
A statistical learning approach for parametric PDEs related to Uncertainty Quantification is derived. The method is based on the minimization of an empirical risk on a selected model class, and it is shown to be applicable to a broad range of problems. A general unified convergence analysis is derived, which takes into account both the approximation and the statistical errors, thereby combining theoretical results from numerical analysis and statistics. Numerical experiments illustrate the performance of the method with the model class of hierarchical tensors.
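
As a toy illustration of the empirical-risk-minimization setting described above, the sketch below learns the parameter-to-quantity-of-interest map of a one-dimensional parametric diffusion problem by least squares over a polynomial model class; the polynomial class stands in for the paper's hierarchical tensors, and the specific problem and degree are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# -(a(y) u')' = 1 on (0,1), u(0)=u(1)=0, with a(y) = 1 + 0.5*y; then u(x) = x(1-x)/(2a),
# so the quantity of interest is q(y) = u(0.5) = 1 / (8 * (1 + 0.5*y)).
def qoi(y):
    return 1.0 / (8.0 * (1.0 + 0.5 * y))

# Empirical risk minimization: least squares over polynomials of degree d in y.
N, d = 200, 6
y_train = rng.uniform(-1.0, 1.0, N)
V = np.vander(y_train, d + 1)                      # model-class features
coef, *_ = np.linalg.lstsq(V, qoi(y_train), rcond=None)

y_test = rng.uniform(-1.0, 1.0, 2000)
pred = np.vander(y_test, d + 1) @ coef
print("test RMSE:", np.sqrt(np.mean((pred - qoi(y_test)) ** 2)))
```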
H. S. Tang, L. Li, M. Grossberg (2020)
As further progress in the accurate and efficient computation of coupled partial differential equations (PDEs) becomes increasingly difficult, it is highly desirable to develop new methods for such computation. In deviation from conventional approaches, this short communication paper explores a computational paradigm that couples numerical solutions of PDEs via machine-learning (ML) based methods, together with a preliminary study on the paradigm. In particular, it solves PDEs in subdomains as in a conventional approach but develops and trains artificial neural networks (ANNs) to couple the PDE solutions at their interfaces, leading to solutions to the PDEs in the whole domain. The concepts and algorithms for the ML coupling are discussed using coupled Poisson equations and coupled advection-diffusion equations. Preliminary numerical examples illustrate the feasibility and performance of the ML coupling. Although preliminary, the results of this exploratory study indicate that the ML paradigm is promising and deserves further research.
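
A toy one-dimensional version of this coupling paradigm can be written down directly: solve a Poisson problem in two subdomains, with a learned model supplying the interface value. In the sketch below a linear least-squares regressor stands in for the paper's ANN coupler, and the constant-source problem family is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def solve_dirichlet(a, b, ua, ub, c, n=51):
    """Second-order finite-difference solve of -u'' = c on [a, b] with u(a)=ua, u(b)=ub."""
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    A = (np.diag(np.full(n - 2, 2.0))
         - np.diag(np.ones(n - 3), 1) - np.diag(np.ones(n - 3), -1))
    rhs = np.full(n - 2, c * h ** 2)
    rhs[0] += ua
    rhs[-1] += ub
    u = np.empty(n)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

# Training data: global solves provide the true interface value u(0.5).
params = rng.uniform(-1.0, 1.0, size=(200, 3))                   # columns: c, uL, uR
targets = np.array([solve_dirichlet(0.0, 1.0, uL, uR, c)[1][25]  # x[25] = 0.5 on the 51-point grid
                    for c, uL, uR in params])
features = np.column_stack([params, np.ones(len(params))])
w, *_ = np.linalg.lstsq(features, targets, rcond=None)           # the learned "coupler"

# Deployment: predict the interface value, then solve each subdomain independently.
c, uL, uR = 0.7, 0.2, -0.3
u_mid = np.array([c, uL, uR, 1.0]) @ w
_, ul = solve_dirichlet(0.0, 0.5, uL, u_mid, c)
_, ur = solve_dirichlet(0.5, 1.0, u_mid, uR, c)
_, u_ref = solve_dirichlet(0.0, 1.0, uL, uR, c, n=101)           # single-domain reference
u_coupled = np.concatenate([ul, ur[1:]])
print("interface value:", u_mid, " exact:", c / 8 + (uL + uR) / 2)
print("max |coupled - reference|:", np.max(np.abs(u_coupled - u_ref)))
```

Because the source family here is exactly linear in its parameters, the regressor recovers the interface map essentially exactly; the paper's ANN coupler targets the general nonlinear case.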
Christian Beck, Weinan E (2017)
High-dimensional partial differential equations (PDEs) appear in a number of models from the financial industry, such as derivative pricing models, credit valuation adjustment (CVA) models, or portfolio optimization models. The PDEs in such applications are high-dimensional as the dimension corresponds to the number of financial assets in a portfolio. Moreover, such PDEs are often fully nonlinear due to the need to incorporate certain nonlinear phenomena in the model such as default risks, transaction costs, volatility uncertainty (Knightian uncertainty), or trading constraints. Such high-dimensional fully nonlinear PDEs are exceedingly difficult to solve, as the computational effort for standard approximation methods grows exponentially with the dimension. In this work we propose a new method for solving high-dimensional fully nonlinear second-order PDEs. Our method can in particular be used to sample from high-dimensional nonlinear expectations. The method is based on (i) a connection between fully nonlinear second-order PDEs and second-order backward stochastic differential equations (2BSDEs), (ii) a merged formulation of the PDE and the 2BSDE problem, (iii) a temporal forward discretization of the 2BSDE and a spatial approximation via deep neural nets, and (iv) a stochastic gradient descent-type optimization procedure. Numerical results obtained using TensorFlow in Python illustrate the efficiency and the accuracy of the method in the cases of a $100$-dimensional Black-Scholes-Barenblatt equation, a $100$-dimensional Hamilton-Jacobi-Bellman equation, and a nonlinear expectation of a $100$-dimensional $G$-Brownian motion.
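
The four ingredients (i)-(iv) listed above are hard to reproduce briefly, but their flavour can be conveyed by the closely related deep BSDE idea applied to a low-dimensional linear heat equation, for which the exact value is known. The sketch below is not the paper's merged 2BSDE formulation for fully nonlinear PDEs; the dimension, network size, and training settings are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
d, T, n_steps, batch = 4, 1.0, 20, 256
dt = T / n_steps

# Unknowns: the value u(0, 0) and a network approximating Z(t, x) ~ grad_x u(t, x).
y0 = torch.nn.Parameter(torch.tensor(0.0))
znet = torch.nn.Sequential(
    torch.nn.Linear(d + 1, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, d),
)
opt = torch.optim.Adam([y0, *znet.parameters()], lr=1e-2)

def g(x):                                   # terminal condition g(x) = |x|^2
    return (x ** 2).sum(dim=1)

# PDE: u_t + 0.5 * Laplacian(u) = 0, u(T, x) = g(x); exact value u(0, 0) = d * T.
# BSDE form with zero driver: X is Brownian motion, dY = Z . dW, Y_T = g(X_T).
for it in range(3000):
    x = torch.zeros(batch, d)               # X_0 = 0
    y = y0 * torch.ones(batch)              # Y_0 = candidate value u(0, 0)
    for n in range(n_steps):
        t = torch.full((batch, 1), n * dt)
        z = znet(torch.cat([t, x], dim=1))
        dw = dt ** 0.5 * torch.randn(batch, d)
        y = y + (z * dw).sum(dim=1)         # forward Euler step of the BSDE
        x = x + dw
    loss = torch.mean((y - g(x)) ** 2)      # enforce the terminal condition
    opt.zero_grad()
    loss.backward()
    opt.step()

print("estimated u(0, 0):", float(y0), " exact:", d * T)   # should come out near 4.0
```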