
On the Convergence of a Non-linear Ensemble Kalman Smoother

Published by El houcine Bergou
Publication date: 2014
Language: English





Ensemble methods, such as the ensemble Kalman filter (EnKF), the local ensemble transform Kalman filter (LETKF), and the ensemble Kalman smoother (EnKS), are widely used in sequential data assimilation, where the state vectors are of very high dimension. Little is known, however, about the asymptotic behavior of ensemble methods. In this paper, we prove the convergence in $L^p$ of the ensemble Kalman smoother to the Kalman smoother in the large-ensemble limit, as well as the convergence of EnKS-4DVAR, a Levenberg-Marquardt-like algorithm that uses the EnKS as its linear solver, to the classical Levenberg-Marquardt algorithm in which the linearized problem is solved exactly.
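For readers unfamiliar with the ensemble update that underlies these methods, the following is a minimal NumPy sketch of a single stochastic (perturbed-observation) ensemble Kalman analysis step; the EnKS studied in the paper applies analogous updates over a time window. The dimensions, variable names, and the perturbed-observation variant are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """One stochastic (perturbed-observation) EnKF analysis step.

    Xf : (n, N) forecast ensemble (n = state dim, N = members)
    y  : (m,)   observation
    H  : (m, n) linear observation operator
    R  : (m, m) observation-error covariance
    """
    n, N = Xf.shape
    m = y.size
    A = Xf - Xf.mean(axis=1, keepdims=True)          # forecast anomalies
    Pf = A @ A.T / (N - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain built from the sample covariance
    Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T  # perturbed observations
    return Xf + K @ (Y - H @ Xf)                     # analysis ensemble

# Tiny synthetic setup just to exercise the function.
rng = np.random.default_rng(0)
n, m, N = 10, 4, 50
Xf = rng.standard_normal((n, N))
H = rng.standard_normal((m, n))
R = 0.1 * np.eye(m)
y = H @ rng.standard_normal(n) + rng.multivariate_normal(np.zeros(m), R)
print(enkf_analysis(Xf, y, H, R, rng).shape)         # -> (10, 50)
```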


Read also

Ensemble filters implement sequential Bayesian estimation by representing the probability distribution by an ensemble mean and covariance. Unbiased square root ensemble filters use deterministic algorithms to produce an analysis (posterior) ensemble with prescribed mean and covariance, consistent with the Kalman update. This class includes several filters used in practice, such as the Ensemble Transform Kalman Filter (ETKF), the Ensemble Adjustment Kalman Filter (EAKF), and a filter by Whitaker and Hamill. We show that at every time index, as the number of ensemble members increases to infinity, the mean and covariance of an unbiased ensemble square root filter converge to those of the Kalman filter, in the case of a linear model and an initial distribution all of whose moments exist. The convergence is in $L^{p}$ and the convergence rate does not depend on the model dimension. The result also holds in an infinite-dimensional Hilbert space.
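As a rough illustration of such a deterministic square-root update, here is a minimal NumPy sketch of an unlocalized, uninflated ETKF-style analysis in a standard ensemble-space formulation; the variable names and the symmetric square-root choice are assumptions made for this example, not the exact algorithms treated in the paper.

```python
import numpy as np

def etkf_analysis(Xf, y, H, R):
    """Deterministic ETKF-style analysis step (no localization or inflation).

    Xf : (n, N) forecast ensemble, y : (m,) observation,
    H  : (m, n) observation operator, R : (m, m) observation-error covariance.
    """
    n, N = Xf.shape
    xbar = Xf.mean(axis=1, keepdims=True)
    A = Xf - xbar                                   # state anomalies
    Yf = H @ Xf
    ybar = Yf.mean(axis=1, keepdims=True)
    Ya = Yf - ybar                                  # observation-space anomalies
    Rinv = np.linalg.inv(R)
    Pa = np.linalg.inv((N - 1) * np.eye(N) + Ya.T @ Rinv @ Ya)   # ensemble-space analysis covariance
    wbar = Pa @ Ya.T @ Rinv @ (y[:, None] - ybar)                # mean-update weights
    evals, evecs = np.linalg.eigh((N - 1) * Pa)
    W = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ evecs.T  # symmetric square root
    return xbar + A @ (wbar + W)    # analysis ensemble with the prescribed mean and covariance

rng = np.random.default_rng(1)
n, m, N = 8, 3, 40
Xf = rng.standard_normal((n, N))
H = rng.standard_normal((m, n))
R = 0.2 * np.eye(m)
y = rng.standard_normal(m)
print(etkf_analysis(Xf, y, H, R).mean(axis=1)[:3])  # analysis mean, first three components
```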
Zhiyan Ding, Qin Li (2019)
Ensemble Kalman Sampler (EKS) is a method for finding approximately i.i.d. samples from a target distribution. As of today, why the algorithm works and how it converges is mostly unknown. The continuous version of the algorithm is a set of coupled stochastic differential equations (SDEs). In this paper, we prove the well-posedness of the SDE system and justify that its mean-field limit is a Fokker-Planck equation, whose long-time equilibrium is the target distribution. We further demonstrate that the convergence rate is near-optimal ($J^{-1/2}$, with $J$ being the number of particles). These results, combined with the convergence in time of the Fokker-Planck equation to its equilibrium, justify the validity of EKS and provide its convergence rate as a sampling method.
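For intuition only, the sketch below takes Euler-Maruyama steps of an ensemble-preconditioned Langevin system of the coupled-SDE type described above, in the simplest linear-forward-map case; the linear setting, the step size, and the omission of finite-ensemble correction terms are simplifying assumptions, so this is not the exact EKS system analyzed in the paper.

```python
import numpy as np

def ensemble_langevin_step(Theta, A, Gamma_inv, y, dt, rng):
    """One Euler-Maruyama step of ensemble-preconditioned Langevin dynamics.

    Theta     : (d, J) particle ensemble
    A         : (m, d) linear forward map, data y : (m,)
    Gamma_inv : (m, m) inverse observation-noise covariance
    """
    d, J = Theta.shape
    anom = Theta - Theta.mean(axis=1, keepdims=True)
    C = anom @ anom.T / J                                  # ensemble covariance as preconditioner
    grad = A.T @ Gamma_inv @ (A @ Theta - y[:, None])      # gradient of the least-squares potential
    L = np.linalg.cholesky(C + 1e-10 * np.eye(d))          # small jitter keeps the factorization stable
    noise = np.sqrt(2.0 * dt) * L @ rng.standard_normal((d, J))
    return Theta - dt * C @ grad + noise

rng = np.random.default_rng(2)
d, J = 2, 200
A, Gamma_inv = np.eye(d), np.eye(d)
y = np.array([1.0, -0.5])
Theta = rng.standard_normal((d, J))
for _ in range(2000):
    Theta = ensemble_langevin_step(Theta, A, Gamma_inv, y, dt=0.01, rng=rng)
print(Theta.mean(axis=1))   # ensemble mean should settle near the minimizer of the potential, here y
```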
Daniel Lacker (2018)
This paper continues the study of the mean field game (MFG) convergence problem: in what sense do the Nash equilibria of $n$-player stochastic differential games converge to the mean field game as $n \rightarrow \infty$? Previous work on this problem took two forms. First, when the $n$-player equilibria are open-loop, compactness arguments permit a characterization of all limit points of $n$-player equilibria as weak MFG equilibria, which contain additional randomness compared to the standard (strong) equilibrium concept. On the other hand, when the $n$-player equilibria are closed-loop, the convergence to the MFG equilibrium is known only when the MFG equilibrium is unique and the associated master equation is solvable and sufficiently smooth. This paper adapts the compactness arguments to the closed-loop case, proving a convergence theorem that holds even when the MFG equilibrium is non-unique. Every limit point of $n$-player equilibria is shown to be the same kind of weak MFG equilibrium as in the open-loop case. Some partial results and examples are discussed for the converse question, regarding which of the weak MFG equilibria can arise as the limit of $n$-player (approximate) equilibria.
We prove rates of convergence for the circular law for the complex Ginibre ensemble. Specifically, we bound the $L_p$-Wasserstein distance between the empirical spectral measure of the normalized complex Ginibre ensemble and the uniform measure on the unit disc, both in expectation and almost surely. For $1 \le p \le 2$, the bounds are of order $n^{-1/4}$, up to logarithmic factors.
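As a quick sanity check of the circular law itself (not of the quantitative Wasserstein rates proved in the paper), one can sample a normalized complex Ginibre matrix and count eigenvalues inside discs of various radii; the normalization below assumes i.i.d. entries of variance $1/n$.

```python
import numpy as np

n = 1000
rng = np.random.default_rng(3)
# Complex Ginibre matrix with i.i.d. entries of variance 1/n.
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
eigs = np.linalg.eigvals(G)

# Under the circular law, the fraction of eigenvalues with |z| <= r tends to r^2.
for r in (0.5, 0.8, 1.0):
    frac = np.mean(np.abs(eigs) <= r)
    print(f"r = {r:.1f}: empirical {frac:.3f} vs limiting {r**2:.3f}")
```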
We analyze the convergence rate of gradient flows on objective functions induced by Dropout and Dropconnect, when applying them to shallow linear Neural Networks (NNs), which can also be viewed as doing matrix factorization with a particular regularizer. Dropout algorithms such as these are thus regularization techniques that use $\{0,1\}$-valued random variables to filter weights during training in order to avoid co-adaptation of features. By leveraging a recent result on non-convex optimization and conducting a careful analysis of the set of minimizers as well as the Hessian of the loss function, we are able to obtain (i) a local convergence proof of the gradient flow and (ii) a bound on the convergence rate that depends on the data, the dropout probability, and the width of the NN. Finally, we compare this theoretical bound to numerical simulations, which are in qualitative agreement with the convergence bound and match it when starting sufficiently close to a minimizer.
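To make the setting concrete, here is a minimal NumPy sketch of dropout training on a shallow linear network fitting a linear teacher; the dimensions, keep probability, learning rate, and iteration count are arbitrary choices for this illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
d_in, d_hidden, d_out, n_samples = 5, 8, 3, 200
p, lr, steps = 0.7, 0.02, 10_000        # keep probability, step size, iterations

X = rng.standard_normal((d_in, n_samples))
W_true = rng.standard_normal((d_out, d_in))
Y = W_true @ X                          # targets from a linear teacher

W1 = 0.1 * rng.standard_normal((d_hidden, d_in))
W2 = 0.1 * rng.standard_normal((d_out, d_hidden))

for _ in range(steps):
    b = rng.binomial(1, p, size=(d_hidden, 1)) / p   # inverted-dropout mask on hidden units
    Hidden = b * (W1 @ X)                            # masked hidden activations (no nonlinearity)
    err = W2 @ Hidden - Y
    gW2 = err @ Hidden.T / n_samples                 # gradient of the squared loss w.r.t. W2
    gW1 = (b * (W2.T @ err)) @ X.T / n_samples       # gradient w.r.t. W1 through the sampled mask
    W2 -= lr * gW2
    W1 -= lr * gW1

# Residual of the mask-free (averaged) network: small but typically nonzero,
# reflecting dropout's implicit regularization and the gradient noise it injects.
print(np.linalg.norm(W2 @ W1 @ X - Y) / np.linalg.norm(Y))
```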