In this paper, we consider the density estimation problem associated with the stationary measure of ergodic Itô diffusions from a discrete-time series that approximates the solutions of the stochastic differential equations. To take advantage of the characterization of the density function through the stationary solution of a parabolic-type Fokker-Planck PDE, we proceed as follows. First, we employ deep neural networks to approximate the drift and diffusion terms of the SDE by solving appropriate supervised learning tasks. Subsequently, we solve a steady-state Fokker-Planck equation associated with the estimated drift and diffusion coefficients with a neural-network-based least-squares method. We establish the convergence of the proposed scheme under appropriate mathematical assumptions, accounting for the generalization errors induced by regressing the drift and diffusion coefficients and by the PDE solver. This theoretical study relies on a recent perturbation-theory result for Markov chains that shows a linear dependence of the density estimation error on the error in estimating the drift term, and on generalization error results for nonparametric regression and for PDE solutions obtained with neural-network models. The effectiveness of this method is demonstrated by numerical simulations of a two-dimensional Student's t-distribution and a 20-dimensional Langevin dynamics.
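The first stage described above, regressing the drift and diffusion coefficients from a discrete-time series, can be illustrated with a minimal sketch. Everything here is a stand-in: a linear-in-parameters model plays the role of the deep network, and an Ornstein-Uhlenbeck process serves as a hypothetical ergodic diffusion; none of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a discrete-time series from an ergodic Ito diffusion
# (Ornstein-Uhlenbeck: dX = -theta*X dt + sigma dW) via Euler-Maruyama.
theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Supervised learning of the drift: E[X_{k+1} - X_k | X_k] ~ b(X_k) dt,
# so regress the increments on features of X_k. The linear model
# b(x) = c*x stands in for a neural-network hypothesis class.
dxdt = np.diff(x) / dt
c = np.sum(x[:-1] * dxdt) / np.sum(x[:-1] ** 2)  # least-squares drift fit

# Diffusion coefficient from the quadratic variation: E[(dX)^2] ~ sigma^2 dt.
sigma2_hat = np.mean(np.diff(x) ** 2) / dt

print(c, sigma2_hat)  # expect c near -theta and sigma2_hat near sigma**2
```

The second stage, solving the steady-state Fokker-Planck equation with the estimated coefficients, would then use these fitted functions inside a neural-network least-squares PDE solver.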
Di Qi and John Harlim, 2021
We propose a Machine Learning (ML) non-Markovian closure modeling framework for accurate predictions of statistical responses of turbulent dynamical systems subjected to external forcings. One of the difficulties in this statistical closure problem is the lack of training data, a configuration that is not desirable in supervised learning with neural network models. In this study of the 40-dimensional Lorenz-96 model, the shortage of data (in time) is due to the stationarity of the statistics beyond the decorrelation time; thus, the only informative content in the training data is the short-time transient statistics. We adopt a unified closure framework on various truncation regimes, including and excluding the detailed dynamical equations for the variances. The closure frameworks employ a Long Short-Term Memory architecture to represent the higher-order unresolved statistical feedbacks, with careful consideration to account for the intrinsic instability while producing stable long-time predictions. We found that this unified, agnostic ML approach performs well under various truncation scenarios. Numerically, the ML closure model can accurately predict the long-time statistical responses subjected to various time-dependent external forces that are not in the training dataset, including maximum forcing amplitudes relatively larger than those used in training.
This paper develops manifold learning techniques for the numerical solution of PDE-constrained Bayesian inverse problems on manifolds with boundaries. We introduce graphical Matérn-type Gaussian field priors that enable flexible modeling near the boundaries, representing boundary values by a superposition of harmonic functions with appropriate Dirichlet boundary conditions. We also investigate the graph-based approximation of forward models from PDE parameters to observed quantities. In the construction of graph-based prior and forward models, we leverage the ghost point diffusion maps algorithm to approximate second-order elliptic operators with classical boundary conditions. Numerical results validate our graph-based approach and demonstrate the need to design prior covariance models that account for boundary conditions.
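A rough illustration of a Matérn-type Gaussian prior on a graph is sketched below. This is the generic construction C = (tau^2 I + L)^{-s} on a graph Laplacian; the path graph, parameter values, and eigendecomposition sampler are illustrative stand-ins, and the paper's boundary-aware modeling is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Graph Laplacian of a path graph on n nodes (a stand-in for a graph
# built from point-cloud data sampled on a manifold).
n = 100
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Matern-type covariance: C = (tau^2 I + L)^(-s); larger s gives smoother
# samples, while tau controls the correlation length.
tau, s = 0.5, 2
C = np.linalg.matrix_power(np.linalg.inv(tau**2 * np.eye(n) + L), s)

# Draw a prior sample u ~ N(0, C) via a symmetric square root.
evals, evecs = np.linalg.eigh(C)
u = evecs @ (np.sqrt(np.maximum(evals, 0.0)) * rng.standard_normal(n))
print(u.shape)
```

The covariance is symmetric positive definite by construction, since tau^2 I + L is.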
This paper proposes a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds, identified with point clouds, based on diffusion maps (DM) and deep learning. The PDE solver is formulated as a supervised learning task to solve a least-squares regression problem that imposes an algebraic equation approximating a PDE (and boundary conditions if applicable). This algebraic equation involves a graph-Laplacian-type matrix obtained via a DM asymptotic expansion, which is a consistent estimator of second-order elliptic differential operators. The resulting numerical method solves a highly non-convex empirical risk minimization problem over a hypothesis space of neural-network-type functions. In a well-posed elliptic PDE setting, when the hypothesis space consists of feedforward neural networks with either infinite width or depth, we show that the global minimizer of the empirical loss function is a consistent solution in the limit of large training data. When the hypothesis space is a two-layer neural network, we show that for a sufficiently large width, the gradient descent method can identify a global minimizer of the empirical loss function. Supporting numerical examples demonstrate the convergence of the solutions and the effectiveness of the proposed solver in avoiding numerical issues that hamper the traditional approach when a large data set becomes available, e.g., large matrix inversion.
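A bare-bones version of the graph-Laplacian-type estimator can be sketched as follows, using uniform samples on the unit circle and a Gaussian kernel with the standard diffusion-maps normalization. The parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

# Point cloud: n uniform samples on the unit circle embedded in R^2.
n, eps = 400, 0.01
theta = 2 * np.pi * np.arange(n) / n
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Gaussian kernel on pairwise Euclidean (ambient-space) distances.
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-d2 / (4 * eps))

# Diffusion-maps (alpha = 1) normalization to remove sampling-density bias,
# then row-normalization to a Markov matrix P; L = (P - I)/eps is a
# consistent estimator of the Laplace-Beltrami operator.
q = K.sum(axis=1)
Kt = K / np.outer(q, q)
P = Kt / Kt.sum(axis=1, keepdims=True)
L = (P - np.eye(n)) / eps

# Sanity check: on the unit circle, the Laplacian of cos(theta) is -cos(theta).
f = np.cos(theta)
Lf = L @ f
print(np.max(np.abs(Lf + f)))  # small as eps -> 0 with enough samples
```

In the paper's setting, a matrix of this type replaces the differential operator in the least-squares loss minimized over the neural-network hypothesis space.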
In this paper, we extend the class of kernel methods, the so-called diffusion maps (DM) and ghost point diffusion maps (GPDM), to solve the time-dependent advection-diffusion PDE on unknown smooth manifolds without and with boundaries. The core idea is to directly approximate the spatial components of the differential operator on the manifold with a local integral operator and combine it with the standard implicit time difference scheme. When the manifold has a boundary, a simplified version of the GPDM approach is used to overcome the bias of the integral approximation near the boundary. The Monte Carlo discretization of the integral operator over the point cloud data gives rise to a mesh-free formulation that is natural for randomly distributed points, even when the manifold is embedded in a high-dimensional ambient space. Here, we establish the convergence of the proposed solver on appropriate topologies, depending on the distribution of point cloud data and the boundary type. We provide numerical results to validate the convergence results on various examples that involve simple geometry and an unknown manifold. Additionally, we found positive results in solving the one-dimensional viscous Burgers equation, where GPDM is adopted within a pseudo-spectral Galerkin framework to approximate the nonlinear advection term.
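The combination of a discrete spatial operator with an implicit time difference scheme can be sketched as follows. Here a periodic finite-difference Laplacian stands in for the DM/GPDM integral-operator estimate (the full kernel construction is beyond a short example), and all parameter values are illustrative.

```python
import numpy as np

# Periodic grid on the unit circle; a finite-difference Laplacian stands in
# for the DM/GPDM estimate of the spatial operator on the manifold.
n = 200
h = 2 * np.pi / n
L = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
     + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / h**2

# Backward-Euler (implicit) step for u_t = L u: solve (I - dt*L) u_new = u_old.
dt, T = 0.01, 0.5
A = np.eye(n) - dt * L
grid = 2 * np.pi * np.arange(n) / n
u = np.cos(grid)  # initial condition cos(theta)
for _ in range(int(T / dt)):
    u = np.linalg.solve(A, u)

# The exact heat-equation solution for this mode decays as exp(-T).
print(np.max(np.abs(u - np.exp(-T) * np.cos(grid))))
```

The implicit scheme is unconditionally stable, which matters here because the discrete operator's spectrum is not known in advance for a general point cloud.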
This paper studies the theoretical underpinnings of machine learning of ergodic Itô diffusions. The objective is to understand the convergence properties of the invariant statistics when the underlying system of stochastic differential equations (SDEs) is empirically estimated with a supervised regression framework. Using the perturbation theory of ergodic Markov chains and the linear response theory, we deduce a linear dependence of the errors of one-point and two-point invariant statistics on the error in the learning of the drift and diffusion coefficients. More importantly, our study shows that the usual $L^2$-norm characterization of the learning generalization error is insufficient for achieving this linear dependence result. We find that sufficient conditions for such a linear dependence result are learning algorithms that produce a uniformly Lipschitz and consistent estimator in a hypothesis space that retains certain characteristics of the drift coefficients, such as the usual linear growth condition that guarantees the existence of solutions of the underlying SDEs. We examine these conditions on two well-understood learning algorithms: the kernel-based spectral regression method and the shallow random neural networks with the ReLU activation function.
A simple and efficient Bayesian machine learning (BML) training and forecasting algorithm, which exploits only a 20-year short observational time series and an approximate prior model, is developed to predict the Niño 3 sea surface temperature (SST) index. The BML forecast significantly outperforms model-based ensemble predictions and standard machine learning forecasts. Even with a simple feedforward neural network, the BML forecast is skillful for 9.5 months. Remarkably, the BML forecast overcomes the spring predictability barrier to a large extent: the forecast starting from spring remains skillful for nearly 10 months. The BML algorithm can also effectively utilize multiscale features: the BML forecast of SST using SST, thermocline, and wind burst improves on the BML forecast using just SST by at least 2 months. Finally, the BML algorithm also reduces the forecast uncertainty of neural networks and is robust to input perturbations.
In this paper, we extend the class of kernel methods, the so-called diffusion maps (DM), and its local kernel variants, to approximate second-order differential operators defined on smooth manifolds with boundaries that naturally arise in elliptic PDE models. To achieve this goal, we introduce the Ghost Point Diffusion Maps (GPDM) estimator on an extended manifold, identified by the set of point clouds on the unknown original manifold together with a set of ghost points, specified along the estimated tangential direction at the sampled points at the boundary. The resulting GPDM estimator restricts the standard DM matrix to a set of extrapolation equations that estimate the function values at the ghost points. This adjustment is analogous to the classical ghost point method in finite-difference schemes for solving PDEs on flat domains. As opposed to the classical DM, which diverges near the boundary, the proposed GPDM estimator converges pointwise even near the boundary. Applying the consistent GPDM estimator to solve well-posed elliptic PDEs with classical boundary conditions (Dirichlet, Neumann, and Robin), we establish the convergence of the approximate solution under appropriate smoothness assumptions. We numerically validate the proposed mesh-free PDE solver on various problems defined on simple sub-manifolds embedded in Euclidean spaces as well as on an unknown manifold. Numerically, we also found that the GPDM is more accurate than DM in solving elliptic eigenvalue problems on bounded smooth manifolds.
This short review describes mathematical techniques for statistical analysis and prediction in dynamical systems. Two problems are discussed, namely (i) the supervised learning problem of forecasting the time evolution of an observable under potentially incomplete observations at forecast initialization; and (ii) the unsupervised learning problem of identification of observables of the system with a coherent dynamical evolution. We discuss how ideas from operator-theoretic ergodic theory combined with statistical learning theory provide an effective route to address these problems, leading to methods well-adapted to handle nonlinear dynamics, with convergence guarantees as the amount of training data increases.
This article presents a general framework for recovering missing dynamical systems using available data and machine learning techniques. The proposed framework reformulates the prediction problem as a supervised learning problem to approximate a map that takes the memories of the resolved and identifiable unresolved variables to the missing components in the resolved dynamics. We demonstrate the effectiveness of the proposed framework with a theoretical guarantee of a path-wise convergence of the resolved variables up to finite time and numerical tests on prototypical models in various scientific domains. These include the 57-mode barotropic stress models with multiscale interactions that mimic the blocked and unblocked patterns observed in the atmosphere, the nonlinear Schrödinger equation, which has found many applications in physics such as optics and Bose-Einstein condensates, and the Kuramoto-Sivashinsky equation, whose spatiotemporal chaotic pattern formation models trapped-ion modes in plasma and phase dynamics in reaction-diffusion systems. While many machine learning techniques can be used to validate the proposed framework, we found that recurrent neural networks outperform kernel regression methods in terms of recovering the trajectory of the resolved components and the equilibrium one-point and two-point statistics. This superior performance suggests that recurrent neural networks are an effective tool for recovering missing dynamics that involve the approximation of high-dimensional functions.
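To make the reformulation concrete, here is a toy version with a linear regression in place of the recurrent network: a two-variable linear system where only x is resolved, and the missing term in the resolved dynamics is regressed on a two-step memory of x. The system and all choices below are entirely illustrative.

```python
import numpy as np

# Full (true) dynamics: harmonic oscillator, Euler discretization.
# Resolved variable: x; unresolved: y. The missing term in the resolved
# update x_{k+1} = x_k + dt*y_k is m_k = y_k.
dt, n = 0.01, 5000
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 1.0, 0.0
for k in range(n - 1):
    x[k + 1] = x[k] + dt * y[k]
    y[k + 1] = y[k] - dt * x[k]

# Supervised learning: regress the missing term on a two-step memory of x
# (linear regression stands in for the recurrent network).
Phi = np.stack([x[1:n - 1], x[0:n - 2]], axis=1)  # (x_k, x_{k-1})
m = y[1:n - 1]                                    # target: missing term m_k
coef, *_ = np.linalg.lstsq(Phi, m, rcond=None)

# Closed model: evolve x alone, using the learned closure for the missing term.
xc = np.empty(n); xc[0], xc[1] = x[0], x[1]
for k in range(1, n - 1):
    m_hat = coef @ np.array([xc[k], xc[k - 1]])
    xc[k + 1] = xc[k] + dt * m_hat
print(np.max(np.abs(xc - x)))  # path-wise error of the closed model
```

In this linear toy the missing term is exactly a function of two lags of x, so the closure is essentially exact; in the nonlinear multiscale settings of the paper, longer memories and richer models such as recurrent networks are needed.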