The understanding of nonlinear, high-dimensional flows, e.g., atmospheric and ocean flows, is critical to addressing the impacts of global climate change. Data Assimilation techniques combine physical models and observational data, often in a Bayesian framework, to predict the future state of the model and the uncertainty in this prediction. Inherent in these systems are noise (Gaussian and non-Gaussian), nonlinearity, and high dimensionality, all of which pose challenges to making accurate predictions. To address these issues we investigate the use of both model and data dimension reduction based on techniques including Assimilation in Unstable Subspaces, Proper Orthogonal Decomposition, and Dynamic Mode Decomposition. Algorithms that take advantage of projected physical and data models may be combined with Data Assimilation techniques such as Ensemble Kalman Filter and Particle Filter variants. The projected Data Assimilation techniques are developed for the optimal proposal particle filter and applied to the Lorenz96 and Shallow Water Equations to test the efficacy of our techniques in high-dimensional, nonlinear systems.
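Of the reduction techniques named above, Dynamic Mode Decomposition is the easiest to sketch in a few lines. The following is a minimal illustration of exact DMD only, not the projected assimilation algorithm of the abstract; the rank-2 decaying travelling-wave snapshot data is invented for the example:

```python
import numpy as np

def dmd_modes(X, Xprime, r):
    """Exact DMD: fit a rank-r linear map A with Xprime ~ A @ X and
    return the eigenvalues and exact DMD modes of the reduced operator."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # One-step map projected onto the leading r POD modes of X.
    Atilde = U.conj().T @ Xprime @ (Vh.conj().T / s)
    eigvals, W = np.linalg.eig(Atilde)
    # Lift the reduced eigenvectors back to the full state space.
    modes = Xprime @ (Vh.conj().T / s) @ W
    return eigvals, modes

# Synthetic rank-2 snapshots: a decaying travelling wave on 64 grid points.
x = np.linspace(0.0, 2.0 * np.pi, 64)
t = np.linspace(0.0, 1.0, 21)                      # time step dt = 0.05
snapshots = np.array([np.exp(-0.5 * tk) * np.cos(x - 2.0 * tk) for tk in t]).T
eigvals, modes = dmd_modes(snapshots[:, :-1], snapshots[:, 1:], r=2)
# Both discrete eigenvalues have magnitude exp(-0.5 * dt), the decay rate.
```

Because the wave lives exactly in the two-dimensional span of cos(x) and sin(x), the rank-2 fit recovers the continuous eigenvalues -0.5 ± 2i of the underlying dynamics.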
The shallow water equations (SWE) are a widely used model for the propagation of surface waves on the oceans. We consider the problem of optimally determining the initial conditions for the one-dimensional SWE in an unbounded domain from a small set of observations of the sea surface height. In the linear case we prove a theorem that gives sufficient conditions for convergence to the true initial conditions. At least two observation points must be used, and at least one pair of observation points must be spaced more closely than half the effective minimum wavelength of the energy spectrum of the initial conditions. This result also applies to the linear wave equation. Our analysis is confirmed by numerical experiments for both the linear and nonlinear SWE data assimilation problems. These results show that convergence rates improve with increasing numbers of observation points and that at least three observation points are required for practically useful results. Better results are obtained for the nonlinear equations provided more than two observation points are used. This paper is a first step in understanding the conditions for observability of the SWE for small numbers of observation points in more physically realistic settings.
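The sufficient condition quoted above is straightforward to check numerically for a candidate observation network. The sketch below is purely illustrative: the observation locations and minimum wavelength are invented, and the condition itself is the abstract's stated theorem, not derived here:

```python
import numpy as np

def spacing_condition_holds(obs_points, lambda_min):
    """Check the sufficient condition from the abstract: at least two
    observation points, with some pair spaced more closely than half the
    effective minimum wavelength lambda_min of the initial-condition
    energy spectrum (1-D observation locations)."""
    pts = np.sort(np.asarray(obs_points, dtype=float))
    if pts.size < 2:
        return False
    # For sorted 1-D points the closest pair is an adjacent pair.
    return bool(np.min(np.diff(pts)) < 0.5 * lambda_min)

# Hypothetical network: waves no shorter than 10 km in the spectrum.
assert spacing_condition_holds([0.0, 4.0, 60.0], lambda_min=10.0)      # 4 < 5
assert not spacing_condition_holds([0.0, 30.0, 60.0], lambda_min=10.0)
```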
We propose a general --- i.e., independent of the underlying equation --- registration method for parameterized Model Order Reduction. Given the spatial domain $\Omega \subset \mathbb{R}^d$ and a set of snapshots $\{ u^k \}_{k=1}^{n_{\rm train}}$ over $\Omega$ associated with $n_{\rm train}$ values of the model parameters $\mu^1,\ldots, \mu^{n_{\rm train}} \in \mathcal{P}$, the algorithm returns a parameter-dependent bijective mapping $\boldsymbol{\Phi}: \Omega \times \mathcal{P} \to \mathbb{R}^d$: the mapping is designed to make the mapped manifold $\{ u_{\mu} \circ \boldsymbol{\Phi}_{\mu} : \, \mu \in \mathcal{P} \}$ better suited to linear compression methods. We apply the registration procedure, in combination with a linear compression method, to devise low-dimensional representations of solution manifolds with slowly decaying Kolmogorov $N$-widths; we also consider the application to problems in parameterized geometries. We present a theoretical result that establishes the mathematical rigor of the registration procedure. We further present numerical results for several two-dimensional problems to empirically demonstrate the effectiveness of our proposal.
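As a toy illustration of why registration helps linear compression (this is not the paper's registration algorithm; the front profile and the rigid-shift mapping Phi_mu are invented for the example), consider snapshots of a travelling front: unregistered, their singular values decay slowly, while composing with a parameter-dependent shift collapses the manifold to a single profile:

```python
import numpy as np

# Travelling-front snapshots u_mu(x) = tanh(50 (x - mu)): this solution
# manifold has a slowly decaying Kolmogorov N-width.
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.2, 0.6, 30)

def u(y, mu):
    return np.tanh(50.0 * (y - mu))

def Phi(xref, mu):
    # Toy parameter-dependent bijection: a rigid shift of the reference
    # coordinate (invented for this example, not the paper's mapping).
    return xref + mu

snapshots = np.array([u(x, mu) for mu in mus])            # unregistered
registered = np.array([u(Phi(x, mu), mu) for mu in mus])  # u_mu o Phi_mu
s_raw = np.linalg.svd(snapshots, compute_uv=False)
s_reg = np.linalg.svd(registered, compute_uv=False)
# After registration every mapped snapshot equals tanh(50 x), so linear
# compression needs a single mode; the unregistered family needs many.
```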
We discover that deep ReLU neural network classifiers can see a low-dimensional Riemannian manifold structure on data. This structure arises from the local data matrix, a variation of the Fisher information matrix in which the role of the model parameters is taken by the data variables. We obtain a foliation of the data domain, and we show that the dataset on which the model is trained lies on a leaf, the data leaf, whose dimension is bounded by the number of classification labels. We validate our results with experiments on the MNIST dataset: paths on the data leaf connect valid images, while other leaves cover noisy images.
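The local data matrix can be written down concretely for a toy model. The sketch below uses a linear softmax classifier with invented random weights, not the paper's deep ReLU network: it computes the Fisher information matrix with the data variables in the role of the parameters, and its rank is at most the number of classes minus one, consistent with the dimension bound on the data leaf:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def local_data_matrix(W, b, x):
    """Fisher information matrix with the data variables playing the role
    of the parameters, for a toy linear model p(c|x) = softmax(W x + b).
    G(x) = sum_c p_c(x) g_c g_c^T with g_c = grad_x log p(c|x)."""
    p = softmax(W @ x + b)
    grads = W - p @ W                # row c: g_c = W_c - E_p[W]
    return (p[:, None] * grads).T @ grads

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 10))         # 3 classes, 10 data variables
b = rng.normal(size=3)
x = rng.normal(size=10)
G = local_data_matrix(W, b, x)
# Since sum_c p_c g_c = 0, the rank of G is at most 3 - 1 = 2.
```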
In this paper we consider a tank containing fluid, and we seek to estimate the horizontal currents from measurements of the fluid surface height. The fluid motion is described by the shallow water equations in two horizontal dimensions. We build a simple nonlinear observer that takes advantage of the symmetries of the laws of fluid dynamics. As a result its structure is based on convolutions with smooth isotropic kernels, and the observer is remarkably robust to noise. We prove the convergence of the observer around a steady state; in numerical applications, local exponential convergence is expected. The observer is also applied to the problem of predicting the ocean circulation. Realistic simulations illustrate the relevance of the approach compared with some standard oceanography techniques.
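The smoothing ingredient behind that noise robustness can be illustrated in isolation (this is only the isotropic-kernel convolution, not the full observer; the grid, kernel width, and noise level are invented for the example):

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Smooth isotropic Gaussian kernel on an n x n periodic grid."""
    k = np.arange(n) - n // 2
    g = np.exp(-0.5 * (k / sigma) ** 2)
    K = np.outer(g, g)               # isotropic: product of 1-D Gaussians
    return K / K.sum()

def smooth(field, sigma):
    """Circular convolution of a periodic 2-D field with the kernel, via FFT."""
    n = field.shape[0]
    K = np.fft.ifftshift(gaussian_kernel(n, sigma))   # center kernel at 0
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(K)))

# Noisy surface-height measurement of a smooth wave pattern.
rng = np.random.default_rng(1)
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h_true = np.sin(x)[:, None] * np.cos(x)[None, :]
h_meas = h_true + 0.5 * rng.normal(size=(n, n))
h_smooth = smooth(h_meas, sigma=3.0)
# The convolution strongly damps the small-scale measurement noise while
# barely attenuating the large-scale height signal.
```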
Recent research reveals that deep learning is an effective way of solving high-dimensional Hamilton-Jacobi-Bellman equations. The resulting feedback control law, in the form of a neural network, is computationally efficient for real-time applications of optimal control. A critical part of this design method is to generate data for training the neural network and validating its accuracy. In this paper, we provide a survey of existing algorithms that can be used to generate data. All the algorithms surveyed in this paper are causality-free, i.e., the solution at a point is computed without using the value of the function at any other point. At the end of the paper, an illustrative example of optimal feedback design using deep learning is given.
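The causality-free property is easy to illustrate on a toy problem with a known value function. The sketch below is invented for illustration and is far simpler than the surveyed algorithms (which solve characteristics-based two-point boundary value problems): for a scalar LQR problem, each sampled state is labelled with its value and optimal feedback independently of every other sample, with no grid over the state space:

```python
import numpy as np

def scalar_care(a, b, q, r):
    """Positive solution P of the scalar continuous-time algebraic Riccati
    equation 2 a P - b^2 P^2 / r + q = 0, so that V(x) = P x^2."""
    return r * (a + np.sqrt(a * a + b * b * q / r)) / (b * b)

def generate_samples(xs, a=-1.0, b=1.0, q=1.0, r=1.0):
    """Causality-free data generation for a toy 1-D LQR problem: the value
    and optimal feedback at each sampled state are computed pointwise,
    never using the value function at any other sample."""
    P = scalar_care(a, b, q, r)
    xs = np.asarray(xs, dtype=float)
    V = P * xs ** 2                  # value-function labels
    u = -(b / r) * P * xs            # optimal-feedback labels
    return V, u

# Label a handful of states for (hypothetical) neural-network training.
xs = np.linspace(-2.0, 2.0, 5)
V, u = generate_samples(xs)
```

For a = -1, b = q = r = 1 the Riccati solution is P = sqrt(2) - 1, which pins down every label in closed form.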