Ensemble methods, such as the ensemble Kalman filter (EnKF), the local ensemble transform Kalman filter (LETKF), and the ensemble Kalman smoother (EnKS), are widely used in sequential data assimilation, where the state vectors are of very high dimension. Little is known, however, about the asymptotic behavior of ensemble methods. In this paper, we prove convergence in $L^p$ of the ensemble Kalman smoother to the Kalman smoother in the large-ensemble limit, as well as the convergence of EnKS-4DVAR, a Levenberg-Marquardt-like algorithm with the EnKS as the linear solver, to the classical Levenberg-Marquardt algorithm in which the linearized problem is solved exactly.
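For orientation, a standard formulation from the EnKF/EnKS literature (stated here as background, not as a restatement of this paper's proofs): with forecast ensemble $X_1^f,\dots,X_N^f$, ensemble sample covariance $C_N$, observation $y$ with error covariance $R$, and linear observation operator $H$, the stochastic ensemble analysis step reads
$$
X_i^a = X_i^f + K_N\bigl(y + \varepsilon_i - H X_i^f\bigr),\qquad
K_N = C_N H^{\mathsf T}\bigl(H C_N H^{\mathsf T} + R\bigr)^{-1},\qquad
\varepsilon_i \sim N(0,R).
$$
In the linear Gaussian setting, the large-ensemble limit $N \to \infty$ replaces $C_N$ by the exact forecast covariance and $K_N$ by the Kalman gain; the $L^p$ convergence results quantify this limit for the smoother.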
Ensemble filters implement sequential Bayesian estimation by representing the probability distribution by an ensemble mean and covariance. Unbiased square root ensemble filters use deterministic algorithms to produce an analysis (posterior) ensemble.
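A minimal sketch of the deterministic analysis such filters perform, in the ensemble transform form (background notation, not necessarily this paper's exact algorithm): with forecast anomaly matrix $A$ whose columns are $X_i^f - \bar X^f$, the analysis anomalies are obtained by a right transform,
$$
A^a = A\,T, \qquad T = \Bigl(I + \tfrac{1}{N-1}(HA)^{\mathsf T} R^{-1} (HA)\Bigr)^{-1/2},
$$
chosen so that $\tfrac{1}{N-1}A^a (A^a)^{\mathsf T}$ matches the Kalman analysis covariance within the ensemble subspace. Since the anomalies sum to zero, the symmetric square root satisfies $T\mathbf 1 = \mathbf 1$, so the transform preserves the ensemble mean; this mean preservation is what "unbiased" refers to.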
The Ensemble Kalman Sampler (EKS) is a method for drawing approximately $i.i.d.$ samples from a target distribution. As of today, why the algorithm works and how it converges are mostly unknown. The continuous version of the algorithm is a set of coupled stochastic differential equations.
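The coupled system typically takes the following form in the EKS literature (a sketch of the general formulation, which may differ in finite-ensemble correction terms from the version analyzed here): for particles $X_t^{(1)},\dots,X_t^{(J)}$ and target density proportional to $e^{-\Phi}$,
$$
\mathrm d X_t^{(j)} = -\,\mathsf C(X_t)\,\nabla \Phi\bigl(X_t^{(j)}\bigr)\,\mathrm d t \;+\; \sqrt{2\,\mathsf C(X_t)}\;\mathrm d W_t^{(j)},
$$
where $\mathsf C(X_t)$ is the empirical covariance of the particle ensemble. The coupling through $\mathsf C(X_t)$ preconditions the Langevin dynamics and, in the mean-field limit, drives the ensemble toward the target distribution.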
This paper continues the study of the mean field game (MFG) convergence problem: In what sense do the Nash equilibria of $n$-player stochastic differential games converge to the mean field game as $n \rightarrow \infty$? Previous work on this problem too
We prove rates of convergence for the circular law for the complex Ginibre ensemble. Specifically, we bound the expected $L_p$-Wasserstein distance between the empirical spectral measure of the normalized complex Ginibre ensemble and the uniform measure on the unit disk.
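For concreteness, the objects involved are the standard ones (definitions only, not results from the paper): writing $\mu_n = \frac{1}{n}\sum_{j=1}^n \delta_{\lambda_j}$ for the empirical spectral measure of the eigenvalues $\lambda_1,\dots,\lambda_n$ of $G_n/\sqrt n$, where $G_n$ has i.i.d. standard complex Gaussian entries, the circular law states that $\mu_n$ converges to the uniform measure $\mu_\circ$ on the unit disk, and the quantity bounded is the expected $L_p$-Wasserstein distance
$$
W_p(\mu_n,\mu_\circ) = \Bigl(\inf_{\pi} \int |z-w|^p \,\mathrm d\pi(z,w)\Bigr)^{1/p},
$$
the infimum being taken over couplings $\pi$ of $\mu_n$ and $\mu_\circ$.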
We analyze the convergence rate of gradient flows on objective functions induced by Dropout and Dropconnect, when applying them to shallow linear Neural Networks (NNs), which can also be viewed as performing matrix factorization with a particular regularizer.
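The induced objective can be made explicit by a standard computation (sketched here for the usual shallow-linear-network setup, not copied from this paper): for a two-layer linear network $x \mapsto U V x$ with Dropout masks $b_i \sim \mathrm{Bernoulli}(p)$ on the hidden units, rescaled by $1/p$,
$$
\mathbb E_b\,\bigl\|y - \tfrac{1}{p}\,U\,\mathrm{diag}(b)\,V x\bigr\|^2
= \|y - U V x\|^2 \;+\; \frac{1-p}{p}\sum_{i}\|u_i\|^2\,\bigl(v_i^{\mathsf T} x\bigr)^2,
$$
where $u_i$ are the columns of $U$ and $v_i^{\mathsf T}$ the rows of $V$. The gradient flow is thus on the usual squared loss plus a data-dependent penalty coupling the two factors, which is the sense in which Dropout training of a shallow linear NN amounts to matrix factorization with a particular regularizer.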