122 - R. Douc, A. Guillin, J.-M. Marin (2007)
In the design of efficient simulation algorithms, one is often beset with a poor choice of proposal distributions. Although the performance of a given simulation kernel can clarify a posteriori how adequate this kernel is for the problem at hand, a permanent on-line modification of kernels causes concerns about the validity of the resulting algorithm. While the issue is most often intractable for MCMC algorithms, the equivalent version for importance sampling algorithms can be validated quite precisely. We derive sufficient convergence conditions for adaptive mixtures of population Monte Carlo algorithms and show that Rao--Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion.
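To make the adaptation mechanism concrete, here is a minimal population Monte Carlo sketch in Python. The standard-normal target, the two fixed proposal scales, and the iteration count are illustrative assumptions; the point is the Rao--Blackwellized update, in which every particle contributes to every mixture weight in proportion to its posterior component membership, rather than only to the component that actually generated it.

```python
# Minimal population Monte Carlo (PMC) sketch with an adaptive Gaussian
# mixture proposal. Target, proposal scales and iteration count are
# illustrative assumptions, not the authors' exact scheme.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy target: standard normal (unnormalized log-density).
    return -0.5 * x**2

scales = np.array([0.5, 3.0])   # fixed component standard deviations
alpha = np.array([0.5, 0.5])    # adaptive mixture weights

for t in range(10):
    n = 1000
    # Sample component indices, then particles from the mixture proposal.
    comp = rng.choice(len(scales), size=n, p=alpha)
    x = rng.normal(0.0, scales[comp])
    # Each component's density at each particle: shape (K, n).
    dens = np.array([np.exp(-0.5 * (x / s)**2) / (s * np.sqrt(2 * np.pi))
                     for s in scales])
    q = alpha @ dens                        # mixture proposal density
    # Normalized importance weights.
    w = np.exp(log_target(x)) / q
    w /= w.sum()
    # Rao--Blackwellized update: weight each component by the posterior
    # membership probabilities of all particles, not by sampled labels.
    resp = (alpha[:, None] * dens) / q      # shape (K, n)
    alpha = resp @ w
    alpha /= alpha.sum()

print("adapted mixture weights:", alpha)
```

Because the mixture weights are updated from normalized importance weights, each iteration keeps the proposal a valid mixture while shifting mass toward the components that match the target best.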
In the last decade, sequential Monte Carlo (SMC) methods emerged as a key tool in computational statistics. These algorithms approximate a sequence of distributions by a sequence of weighted empirical measures associated with a weighted population of particles. These particles and weights are generated recursively according to two elementary transformations: mutation and selection. Examples of applications include sequential Monte Carlo techniques for solving optimal non-linear filtering problems in state-space models, molecular simulation, and genetic optimization. Despite many theoretical advances, the asymptotic properties of these approximations remain a question of central interest. In this paper, we analyze sequential Monte Carlo methods from an asymptotic perspective; that is, we establish a law of large numbers and an invariance principle as the number of particles gets large. We introduce the concepts of weighted-sample consistency and asymptotic normality, and derive conditions under which the mutation and selection procedures used in the sequential Monte Carlo build-up preserve these properties. To illustrate our findings, we analyze SMC algorithms that approximate the filtering distribution in state-space models. We show how our techniques allow us to relax restrictive technical conditions used in previously reported works and provide grounds to analyze more sophisticated sequential sampling strategies.
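The mutation/selection recursion can be made concrete with a short bootstrap particle filter, one of the SMC algorithms covered by this framework. The linear-Gaussian toy model, its parameter values, and the choice of multinomial resampling at every step are assumptions made for brevity, not the paper's general setting.

```python
# Bootstrap particle filter sketch on a toy linear-Gaussian state-space
# model: X_t = 0.9 X_{t-1} + N(0,1), Y_t = X_t + N(0, 0.5^2).
import numpy as np

rng = np.random.default_rng(1)

T, n = 50, 500
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
y = x_true + rng.normal(0.0, 0.5, size=T)

particles = rng.normal(size=n)   # initial population
for t in range(T):
    # Mutation: propagate each particle through the state dynamics.
    particles = 0.9 * particles + rng.normal(size=n)
    # Weighting: multiply by the observation likelihood at time t.
    weights = np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)
    weights /= weights.sum()
    # Selection: multinomial resampling restores an unweighted population.
    particles = particles[rng.choice(n, size=n, p=weights)]

print("particle estimate of E[X_T | Y_1..T]:", particles.mean())
print("true final state:", x_true[-1])
```

Mutation moves the population through the state dynamics, reweighting accounts for the new observation, and selection prunes low-weight particles; the paper's consistency and asymptotic-normality results concern exactly how these transformations propagate as n grows.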
Convergence rates of Markov chains have been widely studied in recent years. In particular, quantitative bounds on convergence rates have been studied in various forms by Meyn and Tweedie [Ann. Appl. Probab. 4 (1994) 981-1011], Rosenthal [J. Amer. Statist. Assoc. 90 (1995) 558-566], Roberts and Tweedie [Stochastic Process. Appl. 80 (1999) 211-229], Jones and Hobert [Statist. Sci. 16 (2001) 312-334] and Fort [Ph.D. thesis (2001) Univ. Paris VI]. In this paper, we extend a result of Rosenthal [J. Amer. Statist. Assoc. 90 (1995) 558-566] that concerns quantitative convergence rates for time-homogeneous Markov chains. Our extension allows us to consider f-total variation distance (instead of total variation) and time-inhomogeneous Markov chains. We apply our results to simulated annealing.
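For reference, a common convention for the f-total variation distance mentioned above (the precise normalization is an assumption; taking f ≡ 1 recovers the ordinary total variation distance up to a constant factor):

```latex
% f-total variation distance between probability measures \mu and \nu,
% for a measurable function f \ge 1.
\[
  \|\mu - \nu\|_f \;=\; \sup_{g \,:\, |g| \le f} \bigl|\mu(g) - \nu(g)\bigr|,
  \qquad \mu(g) := \int g \,\mathrm{d}\mu .
\]
```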
240 - R. Douc, A. Guillin, J. Najim (2004)
Consider the state-space model (X_t, Y_t), where (X_t) is a Markov chain and (Y_t) are the observations. In order to solve the so-called filtering problem, one has to compute L(X_t|Y_1,...,Y_t), the law of X_t given the observations (Y_1,...,Y_t). The particle filtering method gives an approximation of the law L(X_t|Y_1,...,Y_t) by the empirical measure $\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i,t}}$. In this paper we establish the moderate deviation principle for the empirical mean $\frac{1}{n}\sum_{i=1}^{n}\psi(x_{i,t})$ (centered and properly rescaled) when the number of particles grows to infinity, enhancing the central limit theorem. Several extensions and examples are also studied.
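As a loose numerical illustration of where moderate deviations sit, the snippet below contrasts the CLT scaling sqrt(n) with a moderate-deviation speed b_n = n^{3/4}. Here i.i.d. standard-normal draws stand in for the particle population, which is a simplifying assumption (the paper's contribution is precisely the non-i.i.d. particle case).

```python
# Contrast CLT scaling (sqrt(n)) with a moderate-deviation speed
# b_n = n^{3/4}, for the centered sums S_n = sum_i psi(x_i) - n E[psi(X)].
import numpy as np

rng = np.random.default_rng(2)

def psi(x):
    return x**2   # E[psi(X)] = 1 and Var[psi(X)] = 2 for X ~ N(0,1)

for n in [10**2, 10**3, 10**4]:
    reps = 500
    s_n = psi(rng.normal(size=(reps, n))).sum(axis=1) - n
    # Under sqrt(n) the fluctuations stabilize near sqrt(2) ~ 1.41;
    # under the faster speed b_n = n^0.75 they shrink to zero.
    print(n, np.std(s_n / np.sqrt(n)), np.std(s_n / n**0.75))
```

Under the sqrt(n) scaling the fluctuations stabilize near sqrt(Var[psi(X)]), while under the faster b_n scaling they vanish; the moderate deviation principle quantifies the exponentially small probability of deviations at these intermediate speeds.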