
A Langevinized Ensemble Kalman Filter for Large-Scale Static and Dynamic Learning

Posted by Peiyi Zhang
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





The Ensemble Kalman Filter (EnKF) has achieved great success in data assimilation in the atmospheric and oceanic sciences, but its failure to converge to the right filtering distribution precludes its use for uncertainty quantification. We reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF. The Langevinized EnKF inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, making it scalable with respect to both the dimension and the sample size. We prove that the Langevinized EnKF converges to the right filtering distribution in Wasserstein distance under the big-data scenario in which the dynamic system consists of a large number of stages and a large number of samples is observed at each stage. We further reformulate the Bayesian inverse problem as a dynamic state estimation problem based on subsampling and a Langevin diffusion process, so that the method also covers static learning. We illustrate the performance of the Langevinized EnKF on a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.
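To give a feel for the forecast-analysis structure described in the abstract, the sketch below combines a stochastic-gradient Langevin forecast move on a mini-batch with a perturbed-observation EnKF analysis update. It is a minimal, hedged illustration under simplifying assumptions (a linear observation operator H, scalar observation noise variance, and a user-supplied mini-batch gradient); the paper's actual Langevinized EnKF differs in its state-space reformulation, step-size schedule, and theoretical safeguards, and all names here are hypothetical.

```python
# Minimal, illustrative sketch only -- not the authors' exact algorithm.
# Assumes a linear observation operator H and i.i.d. Gaussian observation
# noise with variance obs_var; grad_log_target is a user-supplied
# mini-batch gradient of the log-target density.
import numpy as np

def langevinized_enkf_step(ensemble, grad_log_target, H, y_batch, obs_var, step, rng):
    """One forecast-analysis cycle for an ensemble of shape (m, d)."""
    m, d = ensemble.shape
    p = H.shape[0]

    # Forecast: stochastic-gradient Langevin move for each particle.
    forecast = np.empty_like(ensemble)
    for i in range(m):
        drift = grad_log_target(ensemble[i])          # mini-batch gradient, shape (d,)
        noise = np.sqrt(2.0 * step) * rng.standard_normal(d)
        forecast[i] = ensemble[i] + step * drift + noise

    # Analysis: stochastic EnKF update with perturbed observations.
    anomalies = forecast - forecast.mean(axis=0)      # (m, d)
    S = anomalies @ H.T                               # obs-space anomalies, (m, p)
    C_yy = S.T @ S / (m - 1) + obs_var * np.eye(p)    # innovation covariance
    C_xy = anomalies.T @ S / (m - 1)                  # cross covariance, (d, p)
    K = np.linalg.solve(C_yy, C_xy.T).T               # Kalman gain, (d, p)

    analysis = np.empty_like(forecast)
    for i in range(m):
        y_pert = y_batch + np.sqrt(obs_var) * rng.standard_normal(p)
        analysis[i] = forecast[i] + K @ (y_pert - H @ forecast[i])
    return analysis
```

In practice such a step would be iterated over the stages of the dynamic system, with the step size decayed and the ensemble averaged over iterations, to approximate the filtering distribution.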




Read also

We propose a new class of filtering and smoothing methods for inference in high-dimensional, nonlinear, non-Gaussian, spatio-temporal state-space models. The main idea is to combine the ensemble Kalman filter and smoother, developed in the geophysics literature, with state-space algorithms from the statistics literature. Our algorithms address a variety of estimation scenarios, including on-line and off-line state and parameter estimation. We take a Bayesian perspective, for which the goal is to generate samples from the joint posterior distribution of states and parameters. The key benefit of our approach is the use of ensemble Kalman methods for dimension reduction, which allows inference for high-dimensional state vectors. We compare our methods to existing ones, including ensemble Kalman filters, particle filters, and particle MCMC. Using a real data example of cloud motion and data simulated under a number of nonlinear and non-Gaussian scenarios, we show that our approaches outperform these existing methods.
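The joint state-and-parameter estimation mentioned above is commonly implemented by augmenting the state vector with the unknown parameters so that one ensemble update adjusts both. The sketch below shows this generic state-augmentation trick with a perturbed-observation EnKF analysis; it is an assumed illustration of the standard building block, not the specific algorithms proposed in this paper, and it treats the parameters as time-invariant.

```python
# Generic state-augmentation sketch (assumed illustration, not this paper's method):
# stack parameters under the state and apply one perturbed-observation EnKF update
# to the augmented ensemble, so observations of the state also update the parameters.
import numpy as np

def augmented_enkf_analysis(states, params, H, y, obs_cov, rng):
    """states: (m, d_x), params: (m, d_theta), H: (p, d_x), y: (p,), obs_cov: (p, p)."""
    m = states.shape[0]
    Z = np.hstack([states, params])                   # augmented ensemble, (m, d_x + d_theta)
    H_aug = np.hstack([H, np.zeros((H.shape[0], params.shape[1]))])

    A = Z - Z.mean(axis=0)                            # augmented anomalies
    S = A @ H_aug.T                                   # observation-space anomalies
    C_yy = S.T @ S / (m - 1) + obs_cov
    C_zy = A.T @ S / (m - 1)
    K = np.linalg.solve(C_yy, C_zy.T).T               # gain for the augmented vector

    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), obs_cov, size=m)
    Z_new = Z + (perturbed - Z @ H_aug.T) @ K.T
    return Z_new[:, :states.shape[1]], Z_new[:, states.shape[1]:]
```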
We present a novel algorithm based on the ensemble Kalman filter to solve inverse problems involving multiscale elliptic partial differential equations. Our method is based on numerical homogenization and finite element discretization and allows us to recover a highly oscillatory tensor from measurements of the multiscale solution in a computationally inexpensive manner. The properties of the approximate solution are analysed with respect to the multiscale and discretization parameters, and a convergence result is shown to hold. A reinterpretation of the solution from a Bayesian perspective is provided, and convergence of the approximate conditional posterior distribution is proved with respect to the Wasserstein distance. A numerical experiment validates our methodology, with a particular emphasis on modelling error and computational cost.
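To make the EnKF-for-inverse-problems idea concrete, the sketch below shows one iteration of a generic ensemble Kalman inversion update for a forward map G. It is a hedged illustration of the standard update only; the numerical homogenization and finite element machinery of this paper are omitted, and the names and structure are assumptions.

```python
# Generic ensemble Kalman inversion iteration (assumed illustration):
# update each parameter member using ensemble cross-covariances with the
# forward-map outputs G(u). Homogenization/FEM details are omitted.
import numpy as np

def eki_step(U, forward_map, y, noise_cov, rng):
    """U: parameter ensemble (m, d); forward_map maps (d,) -> (p,); y: data (p,)."""
    m = U.shape[0]
    G = np.array([forward_map(u) for u in U])         # forward-map outputs, (m, p)

    dU = U - U.mean(axis=0)                           # parameter anomalies
    dG = G - G.mean(axis=0)                           # output anomalies
    C_gg = dG.T @ dG / (m - 1) + noise_cov            # output covariance + noise
    C_ug = dU.T @ dG / (m - 1)                        # parameter-output cross covariance

    # Perturbed-data update: u_i <- u_i + C_ug C_gg^{-1} (y_i - G(u_i)).
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), noise_cov, size=m)
    return U + (perturbed - G) @ np.linalg.solve(C_gg, C_ug.T)
```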
Ensemble filters implement sequential Bayesian estimation by representing the probability distribution by an ensemble mean and covariance. Unbiased square root ensemble filters use deterministic algorithms to produce an analysis (posterior) ensemble with prescribed mean and covariance, consistent with the Kalman update. This includes several filters used in practice, such as the Ensemble Transform Kalman Filter (ETKF), the Ensemble Adjustment Kalman Filter (EAKF), and a filter by Whitaker and Hamill. We show that at every time index, as the number of ensemble members increases to infinity, the mean and covariance of an unbiased ensemble square root filter converge to those of the Kalman filter, in the case of a linear model and an initial distribution for which all moments exist. The convergence is in $L^{p}$ and the convergence rate does not depend on the model dimension. The result holds in an infinite-dimensional Hilbert space as well.
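For reference, the Kalman filter analysis step whose mean $m_k^a$ and covariance $P_k^a$ the ensemble statistics converge to is, in standard notation (a reminder of the textbook update, not part of the paper's statement):

$$K_k = P_k^f H_k^\top \left(H_k P_k^f H_k^\top + R_k\right)^{-1}, \qquad m_k^a = m_k^f + K_k\left(y_k - H_k m_k^f\right), \qquad P_k^a = (I - K_k H_k)\, P_k^f.$$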
Filtering is a data assimilation technique that performs the sequential inference of dynamical systems states from noisy observations. Herein, we propose a machine learning-based ensemble conditional mean filter (ML-EnCMF) for tracking possibly high-dimensional non-Gaussian state models with nonlinear dynamics based on sparse observations. The proposed filtering method is developed based on the conditional expectation and numerically implemented using machine learning (ML) techniques combined with the ensemble method. The contribution of this work is twofold. First, we demonstrate that the ensembles assimilated using the ensemble conditional mean filter (EnCMF) provide an unbiased estimator of the Bayesian posterior mean, and their variance matches the expected conditional variance. Second, we implement the EnCMF using artificial neural networks, which have a significant advantage in representing nonlinear functions over high-dimensional domains such as the conditional mean. Finally, we demonstrate the effectiveness of the ML-EnCMF for tracking the states of Lorenz-63 and Lorenz-96 systems under the chaotic regime. Numerical results show that the ML-EnCMF outperforms the ensemble Kalman filter.
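As a rough sketch of the conditional-mean idea, the code below regresses forecast states on simulated observations across the ensemble with a small neural network, then shifts each member by the difference between the fitted conditional mean at the actual observation and at the member's own simulated observation. This is a hedged illustration of the EnCMF structure summarised above, not the authors' implementation; the network architecture, library choice, and update form are assumptions.

```python
# Hedged sketch of an ML-based conditional mean filter update (assumed
# illustration): fit an approximation of E[x | y] from the forecast ensemble
# with a small neural network, then shift each member toward the conditional
# mean evaluated at the real observation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def ml_cmf_update(forecast, obs_operator, y_obs, obs_std, rng):
    """forecast: (m, d); obs_operator maps (d,) -> (p,); y_obs: (p,)."""
    # Simulate a noisy observation for every forecast member.
    Y = np.array([obs_operator(x) for x in forecast])
    Y = Y + obs_std * rng.standard_normal(Y.shape)

    # Approximate the conditional mean y -> E[x | y] by regressing states on
    # simulated observations over the ensemble.
    cond_mean = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    cond_mean.fit(Y, forecast)

    # Shift each member by the difference of conditional means at the actual
    # observation and at the member's own simulated observation.
    shift = cond_mean.predict(y_obs.reshape(1, -1)) - cond_mean.predict(Y)
    return forecast + shift
```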
Ivan Kasanicky, Jan Mandel, 2014
A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields, which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.
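The core covariance approximation described above can be illustrated in a few lines: transform the ensemble anomalies into a spectral basis (here the FFT), keep only the per-mode variances, and discard the off-diagonal terms. This is a hedged sketch of the diagonal-in-spectral-basis idea only; the full filter's analysis step, wavelet (DWT) variant, and observation handling are omitted, and the function names are assumptions.

```python
# Hedged sketch of the spectral diagonal covariance approximation: estimate
# only per-mode variances of the ensemble anomalies in the Fourier basis.
# The orthonormal FFT (norm="ortho") keeps the transform unitary, so the
# variances correspond to the physical-space sample covariance.
import numpy as np

def spectral_diagonal_variances(ensemble):
    """ensemble: (m, n) real-valued 1-D fields; returns per-mode variances (n,)."""
    m = ensemble.shape[0]
    anomalies = ensemble - ensemble.mean(axis=0)
    spectra = np.fft.fft(anomalies, axis=1, norm="ortho")   # transform each member
    return (np.abs(spectra) ** 2).sum(axis=0) / (m - 1)     # diagonal of spectral covariance

def apply_spectral_covariance(variances, v):
    """Apply the diagonal spectral covariance approximation to a vector v."""
    return np.fft.ifft(variances * np.fft.fft(v, norm="ortho"), norm="ortho").real
```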