
Multiscale and Nonlocal Learning for PDEs using Densely Connected RNNs

Posted by Haizhao Yang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Learning time-dependent partial differential equations (PDEs) that govern evolutionary observations is one of the core challenges for data-driven inference in many fields. In this work, we propose to capture the essential dynamics of a class of numerically challenging PDEs arising in multiscale modeling and simulation: kinetic equations. These equations are usually nonlocal and contain scales/parameters that vary by several orders of magnitude. We introduce an efficient framework, Densely Connected Recurrent Neural Networks (DC-RNNs), which incorporates a multiscale ansatz and high-order implicit-explicit (IMEX) schemes into the RNN structure design to identify analytic representations of multiscale and nonlocal PDEs from discrete-time observations generated by heterogeneous experiments. If present in the observed data, our DC-RNN can capture transport operators, nonlocal projection or collision operators, the macroscopic diffusion limit, and other dynamics. We provide numerical results to demonstrate the advantages of our proposed framework and compare it with existing methods.
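To make the IMEX-in-RNN idea concrete, here is a minimal sketch of a recurrent cell built around a first-order IMEX step. It is not the authors' DC-RNN (in particular, it omits the dense connections and the multiscale ansatz), and the names IMEXCell, A, B, eps, and rollout are hypothetical:

```python
import torch
import torch.nn as nn

# Minimal sketch, NOT the authors' DC-RNN: a stiff (collision/relaxation-like)
# term scaled by 1/eps is treated implicitly and a non-stiff (transport-like)
# term explicitly, mirroring a first-order IMEX step
#   u^{n+1} = u^n + dt * A u^n + (dt/eps) * B u^{n+1}.
class IMEXCell(nn.Module):
    def __init__(self, dim, dt, eps=1e-2):
        super().__init__()
        self.A = nn.Linear(dim, dim, bias=False)  # learned non-stiff operator
        self.B = nn.Linear(dim, dim, bias=False)  # learned stiff operator
        self.dt, self.eps = dt, eps

    def forward(self, u):
        rhs = u + self.dt * self.A(u)             # explicit (transport) half
        # Implicit half: solve (I - (dt/eps) * W_B) u^{n+1} = rhs, where W_B
        # is the weight matrix of the learned stiff operator.
        M = torch.eye(u.shape[-1]) - (self.dt / self.eps) * self.B.weight
        return torch.linalg.solve(M, rhs.unsqueeze(-1)).squeeze(-1)

def rollout(cell, u0, n_steps):
    # Unroll the recurrent cell into a discrete-time trajectory; training
    # would fit this rollout to the observed snapshots.
    traj = [u0]
    for _ in range(n_steps):
        traj.append(cell(traj[-1]))
    return torch.stack(traj)

cell = IMEXCell(dim=64, dt=0.01)
trajectory = rollout(cell, torch.randn(64), n_steps=50)
```

Treating the stiff learned operator implicitly keeps the unrolled recursion stable even when eps is small, which is the general motivation for building IMEX schemes into the recurrent structure.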




Read also

In this paper, we demonstrate the construction of generalized Rough Polyharmonic Splines (GRPS) within the Bayesian framework, in particular for multiscale PDEs with rough coefficients. The optimal coarse basis can be derived automatically by randomizing the original PDEs with a proper prior distribution and taking the conditional expectation given partial information from edge or derivative measurements. We prove the (quasi-)optimal localization and approximation properties of the obtained bases, and justify the theoretical results with numerical experiments.
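Schematically, the conditional-expectation construction referred to above takes the following form (generic notation, assumed rather than taken from the paper):

```latex
% Schematic only; \mathcal{L}, \phi_j, and m are generic placeholders.
% Randomize the PDE by a white-noise source \xi and condition on the linear
% (edge or derivative) measurements \phi_j to obtain the i-th basis function:
\[
  \mathcal{L}\, u_\xi = \xi, \qquad
  \psi_i(x) \;=\; \mathbb{E}\bigl[\, u_\xi(x) \;\big|\;
    \phi_j(u_\xi) = \delta_{ij},\ j = 1,\dots,m \,\bigr].
\]
```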
Yifan Chen, Thomas Y. Hou (2020)
There is an intimate connection between numerical upscaling of multiscale PDEs and scattered data approximation of heterogeneous functions: the coarse variables selected for deriving an upscaled equation (in the former) correspond to the sampled information used for approximation (in the latter). As such, both problems can be thought of as recovering a target function from some coarse data that are either artificially chosen by an upscaling algorithm or determined by some physical measurement process. The purpose of this paper is to study, under such a setup and for a specific elliptic problem, how the lengthscale of the coarse data, which we refer to as the subsampled lengthscale, influences the accuracy of recovery under limited computational budgets. Our analysis and experiments show that reducing the subsampled lengthscale may improve the accuracy, yielding a guiding criterion for coarse-graining or data acquisition in this computationally constrained scenario and, in particular, direct insights for the implementation of the Gamblets method in the numerical homogenization literature. Moreover, reducing the lengthscale to zero may lead to a blow-up of the approximation error if the target function does not have enough regularity, suggesting the need for a stronger prior assumption on the target function to be approximated. We introduce a singular weight function to address this, both theoretically and numerically. This work sheds light on the interplay between the lengthscale of coarse data, the computational cost, the regularity of the target function, and the accuracy of approximations and numerical simulations.
In this work, we review the connection between the subjects of homogenization and nonlocal modeling and discuss the relevant computational issues. By further exploring this connection, we hope to promote cross-fertilization of ideas across the different research fronts. We illustrate how homogenization may help characterize the nature and the form of the nonlocal interactions hypothesized in nonlocal models. We also offer some perspective on how studies of nonlocality may help the development of more effective numerical methods for homogenization.
In this paper, we introduce a multiscale framework based on adaptive edge basis functions to solve second-order linear elliptic PDEs with rough coefficients. One of the main results is a proof that the proposed multiscale method achieves nearly exponential convergence of the approximation error with respect to the computational degrees of freedom. Our strategy is to perform an energy-orthogonal decomposition of the solution space into a coarse-scale component comprising $a$-harmonic functions in each element of the mesh, and a fine-scale component, named the bubble part, that can be computed locally and efficiently. The coarse-scale component depends entirely on function values on edges. Our approximation on each edge is made in the Lions-Magenes space $H_{00}^{1/2}(e)$, which we demonstrate to be a natural and powerful choice. We construct edge basis functions using local oversampling and the singular value decomposition. When local information about the right-hand side is adaptively incorporated into the edge basis functions, we prove a nearly exponential convergence rate of the approximation error. Numerical experiments validate and extend our theoretical analysis; in particular, we observe no obvious degradation in accuracy for high-contrast media problems.
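Schematically, the decomposition described above can be written as follows (generic notation assumed; $a(\cdot,\cdot)$ denotes the energy bilinear form of the elliptic operator and $\mathcal{T}$ the mesh):

```latex
% The coarse part u^h is a-harmonic inside each element, so it is determined
% by its edge traces; the bubble part u^b solves a local problem driven by f:
\[
  u = u^{\mathrm{h}} + u^{\mathrm{b}}, \qquad
  a(u^{\mathrm{h}}, u^{\mathrm{b}}) = 0,
\]
\[
  -\nabla\!\cdot(a\nabla u^{\mathrm{h}}) = 0 \ \text{in each } T \in \mathcal{T},
  \qquad
  -\nabla\!\cdot(a\nabla u^{\mathrm{b}}) = f \ \text{in } T,
  \quad u^{\mathrm{b}} = 0 \ \text{on } \partial T.
\]
```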
We introduce a simple, rigorous, and unified framework for solving nonlinear partial differential equations (PDEs), and for solving inverse problems (IPs) involving the identification of parameters in PDEs, using the framework of Gaussian processes. The proposed approach: (1) provides a natural generalization of collocation kernel methods to nonlinear PDEs and IPs; (2) has guaranteed convergence for a very general class of PDEs, and comes equipped with a path to compute error bounds for specific PDE approximations; (3) inherits the state-of-the-art computational complexity of linear solvers for dense kernel matrices. The main idea of our method is to approximate the solution of a given PDE as the maximum a posteriori (MAP) estimator of a Gaussian process conditioned on solving the PDE at a finite number of collocation points. Although this optimization problem is infinite-dimensional, it can be reduced to a finite-dimensional one by introducing additional variables corresponding to the values of the derivatives of the solution at collocation points; this generalizes the representer theorem arising in Gaussian process regression. The reduced optimization problem has the form of a quadratic objective function subject to nonlinear constraints; it is solved with a variant of the Gauss--Newton method. The resulting algorithm (a) can be interpreted as solving successive linearizations of the nonlinear PDE, and (b) in practice is found to converge in a small number of iterations (2 to 10), for a wide range of PDEs. Most traditional approaches to IPs interleave parameter updates with numerical solution of the PDE; our algorithm solves for both parameter and PDE solution simultaneously. Experiments on nonlinear elliptic PDEs, Burgers equation, a regularized Eikonal equation, and an IP for permeability identification in Darcy flow illustrate the efficacy and scope of our framework.
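As a concrete illustration of the variable-elimination and Gauss-Newton steps described above, here is a minimal sketch on a 1D toy problem; the kernel choice, lengthscale, nugget, and all names are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Hypothetical 1D testbed: solve -u'' + u^3 = f on (0,1), u(0) = u(1) = 0,
# with f manufactured from the exact solution sin(pi x).
ell = 0.2                                  # Gaussian kernel lengthscale (assumed)
N = 30                                     # number of interior collocation points
xi = np.linspace(0.0, 1.0, N + 2)[1:-1]    # interior collocation points
xb = np.array([0.0, 1.0])                  # boundary points
xa = np.concatenate([xi, xb])              # all points, interior first

u_exact = lambda x: np.sin(np.pi * x)
f = np.pi**2 * u_exact(xi) + u_exact(xi)**3

def k0(x, y):                              # Gaussian kernel k(x, y)
    r = x[:, None] - y[None, :]
    return np.exp(-r**2 / (2 * ell**2))

def k2(x, y):                              # d^2 k / dy^2  (= d^2 k / dx^2)
    r = x[:, None] - y[None, :]
    return np.exp(-r**2 / (2 * ell**2)) * (r**2 / ell**4 - 1 / ell**2)

def k22(x, y):                             # d^4 k / dx^2 dy^2
    r = x[:, None] - y[None, :]
    return np.exp(-r**2 / (2 * ell**2)) * (3 / ell**4 - 6 * r**2 / ell**6 + r**4 / ell**8)

# Prior covariance of the joint vector z = [u(xa); u''(xi)].
K = np.block([[k0(xa, xa), k2(xa, xi)],
              [k2(xi, xa), k22(xi, xi)]])
K += 1e-8 * np.eye(K.shape[0])             # small nugget for conditioning

def z_of(u):
    # The PDE and boundary conditions eliminate all variables except the
    # interior values u: u'' = u^3 - f at interior points, u = 0 on boundary.
    return np.concatenate([u, np.zeros(2), u**3 - f])

def jac_of(u):
    J = np.zeros((2 * N + 2, N))
    J[:N, :] = np.eye(N)                   # d u / d u
    J[N + 2:, :] = np.diag(3 * u**2)       # d (u^3 - f) / d u
    return J

# Gauss-Newton on the reduced objective z(u)^T K^{-1} z(u).
u = np.zeros(N)
for it in range(10):
    z, J = z_of(u), jac_of(u)
    g = J.T @ np.linalg.solve(K, z)        # gradient (up to a factor of 2)
    H = J.T @ np.linalg.solve(K, J)        # Gauss-Newton Hessian
    step = np.linalg.solve(H, g)
    u -= step
    if np.linalg.norm(step) < 1e-10:
        break

print("max error vs exact solution:", np.abs(u - u_exact(xi)).max())
```

The quadratic-objective-with-nonlinear-constraints structure from the abstract appears here in miniature: the constraints are eliminated inside z_of, and each Gauss-Newton step solves a linearization of the nonlinear PDE.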