
Convergence of the Square Root Ensemble Kalman Filter in the Large Ensemble Limit

 Added by Jan Mandel
 Publication date 2014
Language: English





Ensemble filters implement sequential Bayesian estimation by representing the probability distribution by an ensemble mean and covariance. Unbiased square root ensemble filters use deterministic algorithms to produce an analysis (posterior) ensemble with prescribed mean and covariance, consistent with the Kalman update. This class includes several filters used in practice, such as the Ensemble Transform Kalman Filter (ETKF), the Ensemble Adjustment Kalman Filter (EAKF), and a filter by Whitaker and Hamill. We show that at every time index, as the number of ensemble members increases to infinity, the mean and covariance of an unbiased ensemble square root filter converge to those of the Kalman filter, in the case of a linear model and an initial distribution all of whose moments exist. The convergence is in $L^{p}$, and the convergence rate does not depend on the model dimension. The result also holds in an infinite-dimensional Hilbert space.
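The setting can be made concrete with a small numerical sketch (a hypothetical illustration in Python/NumPy, not code from the paper): an exact Kalman analysis step, and an ETKF-style square root update that deterministically transforms the forecast ensemble so that its sample mean and covariance equal the Kalman update of the forecast sample statistics. Function names and shapes are assumptions for illustration.

```python
import numpy as np

def kalman_update(m, P, y, H, R):
    # Exact Kalman analysis step: posterior mean and covariance.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    return m + K @ (y - H @ m), (np.eye(len(m)) - K @ H) @ P

def etkf_update(X, y, H, R):
    # ETKF-style square root analysis. X is an n x N forecast ensemble;
    # the returned analysis ensemble has sample mean and covariance equal
    # to the Kalman update of the forecast sample statistics.
    n, N = X.shape
    xbar = X.mean(axis=1, keepdims=True)
    Xp = X - xbar                                   # forecast perturbations
    Yp = H @ Xp                                     # observed perturbations
    Rinv_Yp = np.linalg.solve(R, Yp)
    Pa_tilde = np.linalg.inv((N - 1) * np.eye(N) + Yp.T @ Rinv_Yp)
    wbar = Pa_tilde @ (Rinv_Yp.T @ (y - H @ xbar))  # mean update weights
    # Symmetric square root of (N-1) * Pa_tilde via eigendecomposition;
    # this is the deterministic ensemble transform.
    vals, vecs = np.linalg.eigh((N - 1) * Pa_tilde)
    Wa = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    return xbar + Xp @ (wbar + Wa)
```

Because the transform is deterministic, the analysis ensemble statistics match the Kalman update of the sample statistics exactly, which is the "unbiased" property the convergence result relies on.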



Related research

Zhiyan Ding, Qin Li (2019)
The Ensemble Kalman Sampler (EKS) is a method for drawing approximately i.i.d. samples from a target distribution. To date, why the algorithm works and how it converges have been largely unknown. The continuous version of the algorithm is a system of coupled stochastic differential equations (SDEs). In this paper, we prove the well-posedness of the SDE system and justify that its mean-field limit is a Fokker-Planck equation whose long-time equilibrium is the target distribution. We further demonstrate that the convergence rate is near-optimal ($J^{-1/2}$, with $J$ being the number of particles). These results, combined with the convergence in time of the Fokker-Planck equation to its equilibrium, justify the validity of EKS and provide its convergence rate as a sampling method.
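As a rough illustration of the continuous dynamics analysed in this line of work (a minimal hypothetical sketch, not the paper's implementation): the coupled SDE system preconditions Langevin dynamics by the ensemble sample covariance, $dX_j = -C(X)\,\nabla\Phi(X_j)\,dt + \sqrt{2C(X)}\,dW_j$, where the target is $\propto e^{-\Phi}$. One Euler-Maruyama step could be written as below; the step size `dt`, the gradient function `grad_phi`, and the function name are assumptions for illustration, and the practical EKS also includes derivative-free and finite-ensemble corrections omitted here.

```python
import numpy as np

def eks_step(X, grad_phi, dt, rng):
    # One Euler-Maruyama step of the preconditioned Langevin SDE
    #   dX_j = -C(X) grad Phi(X_j) dt + sqrt(2 C(X)) dW_j,
    # where C(X) is the ensemble sample covariance.
    # X: n x J array of J particles; grad_phi acts column-wise.
    n, J = X.shape
    xbar = X.mean(axis=1, keepdims=True)
    Xp = X - xbar
    C = Xp @ Xp.T / J                               # sample covariance
    drift = -C @ grad_phi(X)
    # Symmetric PSD square root of C via eigendecomposition.
    w, V = np.linalg.eigh(C)
    sqrtC = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    noise = np.sqrt(2.0 * dt) * (sqrtC @ rng.normal(size=(n, J)))
    return X + dt * drift + noise
```

For a standard Gaussian target, $\Phi(x) = \|x\|^2/2$ so `grad_phi` is the identity, and iterating the step drives the ensemble's sample mean toward 0 and its sample covariance toward the identity, consistent with the mean-field Fokker-Planck equilibrium described in the abstract.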
We present a novel algorithm based on the ensemble Kalman filter to solve inverse problems involving multiscale elliptic partial differential equations. Our method is based on numerical homogenization and finite element discretization and allows one to recover a highly oscillatory tensor from measurements of the multiscale solution in a computationally inexpensive manner. The properties of the approximate solution are analysed with respect to the multiscale and discretization parameters, and a convergence result is shown to hold. A reinterpretation of the solution from a Bayesian perspective is provided, and convergence of the approximate conditional posterior distribution is proved with respect to the Wasserstein distance. A numerical experiment validates our methodology, with a particular emphasis on modelling error and computational cost.
Ensemble methods, such as the ensemble Kalman filter (EnKF), the local ensemble transform Kalman filter (LETKF), and the ensemble Kalman smoother (EnKS), are widely used in sequential data assimilation, where state vectors are of huge dimension. Little is known, however, about the asymptotic behavior of ensemble methods. In this paper, we prove convergence in $L^p$ of the ensemble Kalman smoother to the Kalman smoother in the large-ensemble limit, as well as the convergence of EnKS-4DVAR, a Levenberg-Marquardt-like algorithm with the EnKS as the linear solver, to the classical Levenberg-Marquardt algorithm in which the linearized problem is solved exactly.
The Ensemble Kalman Filter (EnKF) has achieved great successes in data assimilation in atmospheric and oceanic sciences, but its failure in convergence to the right filtering distribution precludes its use for uncertainty quantification. We reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF. The Langevinized EnKF inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from the stochastic gradient Langevin-type algorithms, which make it scalable with respect to both the dimension and sample size. We prove that the Langevinized EnKF converges to the right filtering distribution in Wasserstein distance under the big data scenario that the dynamic system consists of a large number of stages and has a large number of samples observed at each stage. We reformulate the Bayesian inverse problem as a dynamic state estimation problem based on the techniques of subsampling and Langevin diffusion process. We illustrate the performance of the Langevinized EnKF using a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short Term Memory (LSTM) network learning with dynamic data.
Variational Inference (VI) combined with Bayesian nonlinear filtering produces state-of-the-art results for latent trajectory inference. A body of recent work has focused on Sequential Monte Carlo (SMC) and its extensions, e.g., Forward Filtering Backward Simulation (FFBSi). These studies have achieved great success; however, particle degeneracy remains a serious problem. In this paper, we propose Ensemble Kalman Objectives (EnKOs), a hybrid of VI and the Ensemble Kalman Filter (EnKF), to infer State Space Models (SSMs). Unlike SMC-based methods, our proposed method can identify the latent dynamics with fewer particles because of its rich particle diversity. We demonstrate that EnKOs outperform the SMC-based methods in terms of predictive ability on three benchmark nonlinear dynamical systems tasks.
