
Active operator inference for learning low-dimensional dynamical-system models from noisy data

Added by Wayne Isaac Uy
Publication date: 2021
Language: English





Noise poses a challenge for learning dynamical-system models because even small variations can distort the dynamics described by trajectory data. This work builds on operator inference from scientific machine learning to infer low-dimensional models from high-dimensional state trajectories polluted with noise. The presented analysis shows that, under certain conditions, the inferred operators are unbiased estimators of the well-studied projection-based reduced operators from traditional model reduction. Furthermore, the connection between operator inference and projection-based model reduction enables bounding the mean-squared errors of predictions made with the learned models with respect to traditional reduced models. The analysis also motivates an active operator inference approach that judiciously samples high-dimensional trajectories with the aim of achieving a low mean-squared error by reducing the effect of noise. Numerical experiments with high-dimensional linear and nonlinear state dynamics demonstrate that predictions obtained with active operator inference have orders-of-magnitude lower mean-squared errors than operator inference with traditional, equidistantly sampled trajectory data.
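To make the learning step concrete, the following is a minimal sketch of plain (non-active) operator inference for linear dynamics x' = A x from noisy snapshots, using a POD basis and a least-squares fit. The function name, the finite-difference derivative approximation, and the variable names `snapshots`, `dt`, and `r` are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of (non-active) operator inference for linear dynamics
# x' = A x, learned from noisy high-dimensional snapshots. Names such as
# `snapshots`, `dt`, and `r` are illustrative assumptions, not the paper's API.
import numpy as np

def infer_reduced_operator(snapshots, dt, r):
    """snapshots: (N, K) array of noisy states sampled every dt; r: reduced dimension."""
    # POD basis from the leading left singular vectors of the snapshot matrix
    V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = V[:, :r]

    # Project the high-dimensional trajectory onto the reduced space
    Xr = V.T @ snapshots                       # (r, K)

    # Finite-difference approximation of the reduced time derivatives
    dXr = (Xr[:, 1:] - Xr[:, :-1]) / dt        # (r, K-1)

    # Least-squares fit of the reduced operator: dXr ~ A_r @ Xr[:, :-1]
    A_r, *_ = np.linalg.lstsq(Xr[:, :-1].T, dXr.T, rcond=None)
    return V, A_r.T                            # A_r.T is the (r, r) reduced operator
```

Active operator inference would replace the equidistant sampling assumed here with judiciously chosen sampling of the high-dimensional trajectory so that the regression above is less affected by noise.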

related research

Accurate approximation of scalar-valued functions from sample points is a key task in computational science. Recently, machine learning with Deep Neural Networks (DNNs) has emerged as a promising tool for scientific computing, with impressive results achieved on problems where the dimension of the data or problem domain is large. This work broadens this perspective, focusing on approximating functions that are Hilbert-valued, i.e., that take values in a separable, but typically infinite-dimensional, Hilbert space. Such functions arise in science and engineering problems, in particular those involving the solution of parametric Partial Differential Equations (PDEs). These problems are challenging: 1) pointwise samples are expensive to acquire, 2) the function domain is high-dimensional, and 3) the range lies in a Hilbert space. Our contributions are twofold. First, we present a novel result on DNN training for holomorphic functions with so-called hidden anisotropy. This result introduces a DNN training procedure and a full theoretical analysis with explicit guarantees on error and sample complexity. The error bound is explicit in three key errors occurring in the approximation procedure: the best-approximation, measurement, and physical discretization errors. Our result shows that there exists a procedure (albeit non-standard) for learning Hilbert-valued functions via DNNs that performs as well as, but no better than, current best-in-class schemes. It gives a benchmark lower bound for how well DNNs can perform on such problems. Second, we examine whether better performance can be achieved in practice through different types of architectures and training. We provide preliminary numerical results illustrating the practical performance of DNNs on parametric PDEs, considering different parameters and modifying the DNN architecture to achieve better, competitive results, which we compare to current best-in-class schemes.
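As a rough illustration of the general idea (not the paper's procedure), the sketch below, assuming PyTorch, trains a DNN to approximate a Hilbert-valued map y -> u(y) through the coefficients of u(y) on a fixed discretization. The dimensions, architecture, and random placeholder data are assumptions for illustration only.

```python
# A minimal sketch, assuming PyTorch, of approximating a Hilbert-valued map
# y -> u(y) by learning the coefficients of u(y) on a fixed discretization
# (e.g., finite-element coefficients). The sizes and data below are
# illustrative placeholders, not the paper's setup.
import torch

d, n_dofs = 10, 500                      # parameter dimension, discretization size
net = torch.nn.Sequential(
    torch.nn.Linear(d, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, n_dofs),        # outputs coefficients of u(y) in the discrete space
)

Y = torch.rand(200, d)                   # placeholder parameter samples (expensive in practice)
U = torch.rand(200, n_dofs)              # placeholder discretized PDE solutions

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss = torch.mean((net(Y) - U) ** 2) # discrete surrogate for the Hilbert-space error
    loss.backward()
    opt.step()
```

In a faithful setup the loss would be weighted, for example by the mass matrix of the discretization, so that it approximates the Hilbert-space norm rather than a plain Euclidean norm.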
Noisy labeled data is more a norm than a rarity for self-generated content that is continuously published on the web and social media. Due to privacy concerns and governmental regulations, such a data stream can only be stored and used for learning purposes for a limited duration. To overcome the noise in this on-line scenario, we propose QActor, which combines two elements in a novel way: the selection of supposedly clean samples via quality models and the active querying of an oracle for the most informative true labels. While the former can suffer from the low data volumes of on-line scenarios, the latter is constrained by the availability and cost of human experts. QActor swiftly combines the merits of quality models for data filtering and oracle queries for cleaning the most informative data. The objective of QActor is to leverage the stringent oracle budget to robustly maximize the learning accuracy. QActor explores various strategies combining different query allocations and uncertainty measures. A central feature of QActor is to dynamically adjust the query limit according to the learning loss for each data batch. We extensively evaluate QActor on different image datasets, with classifiers ranging from standard machine learning (ML) models to deep neural networks (DNNs) and with label noise ratios ranging between 30% and 80%. Our results show that QActor can nearly match the optimal accuracy achieved using only clean data at the cost of at most an additional 6% of ground-truth data from the oracle.
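A minimal sketch of a QActor-style per-batch loop follows: a quality model filters supposedly clean samples, and the oracle is queried for the most uncertain remaining ones within a loss-dependent budget. The interfaces (`model.predict_proba`/`model.fit` in scikit-learn style, an `oracle` callable) and the agreement-based quality check are assumptions for illustration, not the paper's exact design.

```python
# A minimal sketch of a QActor-style loop under assumed interfaces:
# `model` exposes fit/predict_proba and `oracle(x)` returns a true label.
# These names and the filtering rule are illustrative, not the paper's code.
import numpy as np

def process_batch(model, oracle, X, y_noisy, base_budget, last_loss):
    proba = model.predict_proba(X)
    pred = proba.argmax(axis=1)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)

    # Quality model (simplified): keep samples whose noisy label agrees with the prediction
    clean_mask = pred == y_noisy

    # Dynamically adjust the oracle budget with the current learning loss
    budget = int(base_budget * min(2.0, 1.0 + last_loss))

    # Actively query the oracle for the most uncertain remaining samples
    uncertain = np.argsort(-entropy)
    queried = [i for i in uncertain if not clean_mask[i]][:budget]
    y_clean = y_noisy.copy()
    y_clean[queried] = [oracle(X[i]) for i in queried]

    keep = clean_mask.copy()
    keep[queried] = True
    model.fit(X[keep], y_clean[keep])
    return model
```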
This work presents a nonintrusive physics-preserving method to learn reduced-order models (ROMs) of Hamiltonian systems. Traditional intrusive projection-based model reduction approaches utilize symplectic Galerkin projection to construct Hamiltonian reduced models by projecting Hamilton's equations of the full model onto a symplectic subspace. This symplectic projection requires complete knowledge about the full model operators and full access to manipulate the computer code. In contrast, the proposed Hamiltonian operator inference approach embeds the physics into the operator inference framework to develop a data-driven model reduction method that preserves the underlying symplectic structure. Our method exploits knowledge of the Hamiltonian functional to define and parametrize a Hamiltonian ROM form which can then be learned from data projected via symplectic projectors. The proposed method is 'gray-box' in that it utilizes knowledge of the Hamiltonian structure at the partial differential equation level, as well as knowledge of spatially local components in the system. However, it does not require access to computer code, only data to learn the models. Our numerical results demonstrate Hamiltonian operator inference on a linear wave equation, the cubic nonlinear Schrödinger equation, and a nonpolynomial sine-Gordon equation. Accurate long-time predictions far outside the training time interval for nonlinear examples illustrate the generalizability of our learned models.
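To illustrate the structure-preserving idea in the simplest linear case, the following hedged sketch fits a canonical quadratic Hamiltonian model x' = J L x to reduced snapshots and enforces symmetry of L. It fits the unconstrained least-squares solution and symmetrizes afterwards as a shortcut; the constrained formulation, the variable names, and the assumption of reduced snapshots `Xr`/`dXr` obtained via a symplectic projection are illustrative, not the paper's method.

```python
# A minimal, hedged sketch of structure-preserving operator inference for a
# canonical quadratic Hamiltonian H(x) = 0.5 * x^T L x, so that x' = J L x.
# `Xr`/`dXr` are reduced snapshots and their time derivatives (assumed given);
# the names and the post-hoc symmetrization are illustrative simplifications.
import numpy as np

def infer_hamiltonian_operator(Xr, dXr):
    r2 = Xr.shape[0]                       # reduced dimension, assumed even (2n)
    n = r2 // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])

    # Since J^T J = I, the model x' = J L x gives J^T x' = L x.
    rhs = J.T @ dXr
    L, *_ = np.linalg.lstsq(Xr.T, rhs.T, rcond=None)
    L = L.T

    # Enforce the Hamiltonian structure: L must be symmetric.
    L = 0.5 * (L + L.T)
    return J, L
```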
The increasing availability of data presents an opportunity to calibrate unknown parameters which appear in complex models of phenomena in the biomedical, physical and social sciences. However, model complexity often leads to parameter-to-data maps which are expensive to evaluate and are only available through noisy approximations. This paper is concerned with the use of interacting particle systems for the solution of the resulting inverse problems for parameters. Of particular interest is the case where the available forward model evaluations are subject to rapid fluctuations, in parameter space, superimposed on the smoothly varying large-scale parametric structure of interest. Multiscale analysis is used to study the behaviour of interacting particle system algorithms when such rapid fluctuations, which we refer to as noise, pollute the large-scale parametric dependence of the parameter-to-data map. Ensemble Kalman methods (which are derivative-free) and Langevin-based methods (which use the derivative of the parameter-to-data map) are compared in this light. The ensemble Kalman methods are shown to behave favourably in the presence of noise in the parameter-to-data map, whereas Langevin methods are adversely affected. On the other hand, Langevin methods have the correct equilibrium distribution in the setting of noise-free forward models, whilst ensemble Kalman methods only provide an uncontrolled approximation, except in the linear case. Therefore, a new class of algorithms, ensemble Gaussian process samplers, which combines the benefits of both ensemble Kalman and Langevin methods, is introduced and shown to perform favourably.
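For context, a single derivative-free ensemble Kalman inversion step, of the kind compared in the abstract above, can be sketched as follows; the forward map `G`, data `y`, noise covariance `Gamma`, and ensemble shapes are assumptions, and this is a generic textbook-style update rather than the paper's specific algorithms.

```python
# A minimal sketch of one derivative-free ensemble Kalman inversion step for
# calibrating parameters theta from data y through a (possibly noisy) forward
# map G. `G`, `y`, `Gamma`, and the ensemble layout are assumptions.
import numpy as np

def eki_step(thetas, G, y, Gamma, rng):
    """thetas: (J, d) ensemble; G: map R^d -> R^m; y: (m,) data; Gamma: (m, m) noise cov."""
    Gs = np.array([G(t) for t in thetas])            # (J, m) forward evaluations
    t_mean, g_mean = thetas.mean(axis=0), Gs.mean(axis=0)

    # Empirical cross- and output covariances (no derivatives of G are needed)
    C_tg = (thetas - t_mean).T @ (Gs - g_mean) / len(thetas)   # (d, m)
    C_gg = (Gs - g_mean).T @ (Gs - g_mean) / len(thetas)       # (m, m)

    # Kalman-type update with perturbed observations
    K = C_tg @ np.linalg.solve(C_gg + Gamma, np.eye(len(y)))
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=len(thetas))
    return thetas + (y_pert - Gs) @ K.T
```

Because the update uses only empirical covariances of forward evaluations, small-scale fluctuations in G tend to be averaged over the ensemble, which is one intuition for the favourable behaviour of ensemble Kalman methods under noise noted above.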
Active learning is essential for more label-efficient deep learning. Bayesian active learning has focused on BALD, which reduces model parameter uncertainty. However, we show that BALD gets stuck on out-of-distribution or junk data that is not relevant for the task. We examine a novel *Expected Predictive Information Gain (EPIG)* acquisition to deal with distribution shifts of the pool set. EPIG reduces the uncertainty of *predictions* on an unlabelled *evaluation set* sampled from the test data distribution, which may differ from the pool set distribution. Based on this, our new EPIG-BALD acquisition function for Bayesian Neural Networks selects samples to improve performance on the test data distribution instead of selecting samples that reduce model uncertainty everywhere, including in out-of-distribution regions with low density under the test data distribution. Our method outperforms state-of-the-art Bayesian active learning methods on high-dimensional datasets and avoids out-of-distribution junk data in cases where current state-of-the-art methods fail.
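One way to estimate an EPIG-style score from Monte Carlo posterior samples (for example, MC dropout) is sketched below: the score is the mutual information between the prediction on a candidate pool point and the predictions on evaluation points drawn from the test distribution. The input layout and this particular estimator are assumptions for illustration, not the paper's exact code.

```python
# A rough sketch of an EPIG-style score from Monte Carlo posterior samples.
# probs_pool[k, c]  = p(y = c | x_pool,  theta_k)
# probs_eval[k, j, c] = p(y = c | x_eval_j, theta_k)
# These inputs and the estimator are illustrative assumptions.
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def epig_score(probs_pool, probs_eval):
    """probs_pool: (K, C); probs_eval: (K, J, C). Returns the mean mutual
    information between the pool prediction and each evaluation prediction."""
    K = probs_pool.shape[0]
    # Marginal predictive distributions (averaged over posterior samples)
    p_pool = probs_pool.mean(axis=0)                             # (C,)
    p_eval = probs_eval.mean(axis=0)                             # (J, C)
    # Joint predictive p(y_pool, y_eval_j) = E_k[ p(y_pool|k) p(y_eval_j|k) ]
    joint = np.einsum('kc,kjd->jcd', probs_pool, probs_eval) / K  # (J, C, C)
    mi = entropy(p_pool) + entropy(p_eval) - entropy(joint.reshape(joint.shape[0], -1))
    return mi.mean()
```

Points with high scores are those whose labels are expected to most reduce predictive uncertainty on the evaluation set, which is why such an acquisition tends to ignore out-of-distribution pool points.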


