
Moderate deviations for empirical measures for nonhomogeneous Markov chains

 Added by Mingzhou Xu
 Publication date 2020
Language: English





We prove that moderate deviations for empirical measures of countable nonhomogeneous Markov chains hold under the assumption that the transition probability matrices of the chain converge uniformly in the Cesàro sense.

Related research

Our purpose is to prove a central limit theorem for countable nonhomogeneous Markov chains under the condition of uniform convergence, in the Cesàro sense, of the transition probability matrices. Furthermore, we obtain a corresponding moderate deviation theorem for countable nonhomogeneous Markov chains via the Gärtner-Ellis theorem and the method of exponential equivalence.
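For reference, the Gärtner-Ellis machinery invoked in this abstract takes the following standard form (a generic statement with generic notation, not taken from the paper): for random variables $(Z_n)$ and a speed $a_n \to \infty$, define the scaled cumulant generating function and its Legendre transform

```latex
\[
  \Lambda(\lambda) \;=\; \lim_{n\to\infty} \frac{1}{a_n}
      \log \mathbb{E}\bigl[e^{a_n \lambda Z_n}\bigr],
  \qquad
  \Lambda^*(x) \;=\; \sup_{\lambda \in \mathbb{R}}
      \bigl\{ \lambda x - \Lambda(\lambda) \bigr\}.
\]
```

If $\Lambda$ exists, is finite, and is differentiable on $\mathbb{R}$, then $(Z_n)$ satisfies a large deviation principle with speed $a_n$ and good rate function $\Lambda^*$; moderate deviation theorems are typically obtained by applying this to suitably rescaled and centered quantities.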
86 - Xiaofeng Xue 2019
The density-dependent Markov chain (DDMC) introduced in (Kurtz 1978) is a continuous-time Markov process applied in fields such as epidemics and chemical reactions. In this paper, we give moderate deviation principles for paths of DDMCs under some generally satisfied assumptions. The proofs of the lower and upper bounds of our main result use an exponential martingale and a generalized version of Girsanov's theorem. The exponential martingale is defined according to the generator of the DDMC.
240 - R. Douc, A. Guillin, J. Najim 2004
Consider the state space model $(X_t, Y_t)$, where $(X_t)$ is a Markov chain and $(Y_t)$ are the observations. In order to solve the so-called filtering problem, one has to compute $\mathcal{L}(X_t \mid Y_1,\dots,Y_t)$, the law of $X_t$ given the observations $(Y_1,\dots,Y_t)$. The particle filtering method gives an approximation of the law $\mathcal{L}(X_t \mid Y_1,\dots,Y_t)$ by an empirical measure $\frac{1}{n}\sum_{i=1}^n \delta_{x_{i,t}}$. In this paper we establish the moderate deviation principle for the empirical mean $\frac{1}{n}\sum_{i=1}^n \psi(x_{i,t})$ (centered and properly rescaled) when the number of particles grows to infinity, enhancing the central limit theorem. Several extensions and examples are also studied.
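To illustrate the particle approximation this abstract describes, here is a minimal bootstrap particle filter sketch, assuming a hypothetical linear-Gaussian state space model; the model, parameters, and variable names are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian model (illustrative only):
#   X_t = a * X_{t-1} + state noise,   Y_t = X_t + observation noise.
a, sx, sy = 0.9, 1.0, 0.5
T, n = 50, 1000  # time horizon, number of particles

# Simulate one trajectory of hidden states and observations.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + sx * rng.normal()
y = x + sy * rng.normal(size=T)

# Bootstrap particle filter: approximate L(X_t | Y_1..Y_t) by the
# empirical measure (1/n) * sum_i delta_{x_{i,t}}.
particles = rng.normal(scale=sx, size=n)
means = []
for t in range(T):
    # Propagate each particle through the state dynamics.
    particles = a * particles + sx * rng.normal(size=n)
    # Weight by the observation likelihood, then resample.
    logw = -0.5 * ((y[t] - particles) / sy) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = rng.choice(particles, size=n, p=w)
    # Empirical mean (1/n) * sum_i psi(x_{i,t}), here with psi = identity.
    means.append(particles.mean())
```

The quantity collected in `means` is exactly the empirical mean whose fluctuations (centered and rescaled) the moderate deviation principle controls as $n \to \infty$.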
126 - Shui Feng , Fuqing Gao 2008
The Poisson--Dirichlet distribution arises in many different areas. The parameter $\theta$ in the distribution is the scaled mutation rate of a population in the context of population genetics. The limiting case of $\theta$ approaching infinity is practically motivated and has led to new, interesting mathematical structures. Laws of large numbers, fluctuation theorems and large-deviation results have been established. In this paper, moderate-deviation principles are established for the Poisson--Dirichlet distribution, the GEM distribution, the homozygosity, and the Dirichlet process when the parameter $\theta$ approaches infinity. These results, combined with earlier work, not only provide a relatively complete picture of the asymptotic behavior of the Poisson--Dirichlet distribution for large $\theta$, but also lead to a better understanding of the large deviation problem associated with the scaled homozygosity. They also reveal some new structures that are not observed in existing large-deviation results.
61 - Zeyu Zheng, Harsha Honnappa 2018
This paper is concerned with the development of rigorous approximations to various expectations associated with Markov chains and processes having non-stationary transition probabilities. Such non-stationary models arise naturally in contexts in which time-of-day effects or seasonality effects need to be incorporated. Our approximations are valid asymptotically in regimes in which the transition probabilities change slowly over time. Specifically, we develop approximations for the expected infinite-horizon discounted reward, the expected reward to the hitting time of a set, the expected reward associated with the state occupied by the chain at time $n$, and the expected cumulative reward over an interval $[0,n]$. In each case, the approximation involves a linear system of equations identical in form to the one that would need to be solved to compute the corresponding quantity for a Markov model having stationary transition probabilities. In that sense, the theory provides an approximation no harder to compute than in the traditional stationary context. While most of the theory is developed for finite-state Markov chains, we also provide generalizations to continuous-state Markov chains, and finite-state Markov jump processes in continuous time. In the latter context, one of our approximations coincides with the uniform acceleration asymptotic due to Massey and Whitt (1998).
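The "linear system identical in form to the stationary case" can be made concrete with its stationary analogue: for a finite-state chain with transition matrix $P$, one-step reward $r$, and discount factor $\gamma$, the expected infinite-horizon discounted reward $v$ solves $(I - \gamma P)v = r$. A minimal sketch, with a made-up two-state chain purely for illustration:

```python
import numpy as np

# Stationary analogue of the paper's linear systems: the expected
# infinite-horizon discounted reward v satisfies v = r + g * P v,
# i.e. the linear system (I - g P) v = r.
P = np.array([[0.9, 0.1],    # illustrative transition matrix
              [0.2, 0.8]])
r = np.array([1.0, 0.0])     # illustrative one-step rewards
g = 0.95                     # discount factor

v = np.linalg.solve(np.eye(2) - g * P, r)
```

Since $\gamma < 1$ and $P$ is stochastic, $I - \gamma P$ is invertible, so the system always has a unique solution; the paper's point is that its non-stationary approximations reduce to systems of this same form.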
