
Asymptotic and spectral properties of exponentially phi-ergodic Markov processes

Posted by: Alexey Kulik
Publication date: 2009
Language: English
Authors: Alexey M. Kulik





New relations between the ergodic rate, L_p convergence rates, and the asymptotic behavior of tail probabilities for hitting times of a time-homogeneous Markov process are established. For L_p convergence rates and related spectral and functional properties (spectral gap and Poincaré inequality), sufficient conditions are given in terms of an exponential phi-coupling. This provides sufficient conditions for L_p convergence rates in terms of an appropriate combination of 'local mixing' and 'recurrence' conditions on the initial process, typical in the ergodic theory of Markov processes. The range of application of the approach includes time-irreversible processes. In particular, sufficient conditions for the spectral gap property of the Lévy-driven Ornstein-Uhlenbeck process are established.
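
For orientation (standard formulations, not taken verbatim from the paper), with $\pi$ the invariant measure, $(P_t)$ the transition semigroup, and $\phi \ge 1$ a weight function, the properties named above are usually stated as

$$\|P_t(x,\cdot) - \pi\|_{TV} \le C\,\phi(x)\,e^{-\lambda t} \quad \text{(exponential $\phi$-ergodicity)},$$

$$\|P_t f - \pi(f)\|_{L_p(\pi)} \le C\,e^{-\lambda_p t}\,\|f\|_{L_p(\pi)} \quad \text{($L_p$ convergence rate)},$$

$$\operatorname{Var}_\pi(f) \le \tfrac{1}{\lambda_{\mathrm{gap}}}\,\mathcal{E}(f,f) \quad \text{(Poincaré inequality, i.e. spectral gap $\lambda_{\mathrm{gap}}>0$)},$$

where $\mathcal{E}(f,f) = -\int f\,Lf\,d\pi$ is the Dirichlet form of the generator $L$; the precise definitions and constants used in the paper may differ.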




Read also

131 - M. Hairer 2009
We study a fairly general class of time-homogeneous stochastic evolutions driven by noises that are not white in time. As a consequence, the resulting processes do not have the Markov property. In this setting, we obtain constructive criteria for the uniqueness of stationary solutions that are very close in spirit to the existing criteria for Markov processes. In the case of discrete time, where the driving noise consists of a stationary sequence of Gaussian random variables, we give optimal conditions on the spectral measure for our criteria to be applicable. In particular, we show that under a certain assumption on the spectral density, our assumptions can be checked in virtually the same way as one would check that the Markov process obtained by replacing the driving sequence by a sequence of independent identically distributed Gaussian random variables is strong Feller and topologically irreducible. The results of the present article are based on those obtained previously in the continuous time context of diffusions driven by fractional Brownian motion.
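
As a toy illustration of the setup described above (not Hairer's construction or criteria), the Python sketch below runs a discrete-time evolution driven by a stationary Gaussian sequence that is correlated in time, so the state process alone is not Markov. The contraction map and the AR(1) noise are placeholder assumptions of this example; the same-noise coupling at the end only hints at why stationary solutions can still be unique.

```python
import numpy as np

# Illustrative only: a discrete-time evolution x_{n+1} = f(x_n) + xi_n driven by
# a stationary Gaussian sequence (xi_n) that is NOT white in time, so (x_n) alone
# is not Markov.  The contraction f and the AR(1)-correlated noise are placeholder
# choices, not assumptions from the paper.

rng = np.random.default_rng(0)

def correlated_gaussian_noise(n, rho=0.8, sigma=1.0):
    """Stationary Gaussian AR(1) sequence with lag-one correlation rho."""
    xi = np.empty(n)
    xi[0] = rng.normal(scale=sigma / np.sqrt(1 - rho**2))  # start from the stationary law
    for k in range(1, n):
        xi[k] = rho * xi[k - 1] + rng.normal(scale=sigma)
    return xi

def evolve(x0, noise, f=lambda x: 0.5 * np.tanh(x)):
    """Run the random recursion x_{n+1} = f(x_n) + xi_n."""
    x = np.empty(len(noise) + 1)
    x[0] = x0
    for n, xi in enumerate(noise):
        x[n + 1] = f(x[n]) + xi
    return x

# Two chains started far apart but driven by the SAME noise path: their merging
# is the kind of pathwise coupling behind uniqueness of stationary solutions.
noise = correlated_gaussian_noise(2000)
print(abs(evolve(10.0, noise)[-1] - evolve(-10.0, noise)[-1]))
```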
We propose a twisted Szegedy walk for estimating the limit behavior of a discrete-time quantum walk on a crystal lattice, an infinite abelian covering graph, whose notion was introduced by [14]. First, we show that the spectrum of the twisted Szegedy walk on the quotient graph can be expressed by mapping the spectrum of a twisted random walk onto the unit circle. Secondly, we show that the spatial Fourier transform of the twisted Szegedy walk on a finite graph with appropriate parameters becomes the Grover walk on its infinite abelian covering graph. Finally, as an application, we show that if the Betti number of the quotient graph is strictly greater than one, then localization is ensured for some appropriate initial state. We also compute the limit density function for the Grover walk on $\mathbb{Z}^d$ with flip-flop shift, which implies the coexistence of linear spreading and localization. We partially determine the qualitative shape of the limit density function: its support lies within the $d$-dimensional sphere of radius $1/\sqrt{d}$, and $2^d$ singular points reside on the sphere's surface.
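
The sketch below is only a minimal illustration of the ingredients named above (coin operator, flip-flop shift, linear spreading) in the simplest case of a coined walk on $\mathbb{Z}$; it is not the twisted Szegedy walk on a crystal lattice, and the Hadamard coin and the initial state are placeholder assumptions of this example.

```python
import numpy as np

# Illustrative only: the simplest coined quantum walk on Z with a flip-flop shift,
# not the twisted Szegedy / Grover walk on a crystal lattice studied in the paper.
# It shows the basic mechanics (coin, flip-flop shift, position distribution) and
# the linear spreading mentioned in the abstract.

T = 200                                            # number of steps
N = 2 * T + 1                                      # positions -T..T
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard coin (placeholder choice)

psi = np.zeros((N, 2), dtype=complex)              # psi[x, c]: amplitude at position x, coin c
psi[T, :] = np.array([1, 1j]) / np.sqrt(2)         # symmetric initial coin state at x = 0

for _ in range(T):
    psi = psi @ H.T                                # apply the coin at every site
    new = np.zeros_like(psi)
    # flip-flop shift: |x, R> -> |x+1, L>,  |x, L> -> |x-1, R>
    new[1:, 0] = psi[:-1, 1]
    new[:-1, 1] = psi[1:, 0]
    psi = new

prob = (np.abs(psi) ** 2).sum(axis=1)              # position distribution after T steps
x = np.arange(-T, T + 1)
print("total probability:", prob.sum())            # ~ 1 (unitarity)
print("spread / T       :", np.sqrt((prob * x**2).sum()) / T)  # O(1) => linear spreading
```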
We construct a family of genealogy-valued Markov processes that are induced by a continuous-time Markov population process. We derive exact expressions for the likelihood of a given genealogy conditional on the history of the underlying population process. These lead to a version of the nonlinear filtering equation, which can be used to design efficient Monte Carlo inference algorithms. Existing full-information approaches for phylodynamic inference are special cases of the theory.
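
The abstract notes that the nonlinear filtering equation supports efficient Monte Carlo inference. As a loose illustration of that class of algorithms only (and emphatically not the paper's genealogy-based filter), here is a generic bootstrap particle filter for a toy state-space model; the latent AR(1) log-intensity and the Poisson observation model are placeholder assumptions of this example.

```python
import numpy as np
from math import lgamma

# Illustrative only: a generic bootstrap particle filter for a hidden Markov model,
# meant to show the kind of Monte Carlo filtering the abstract alludes to.  It is
# NOT the genealogy-valued filter of the paper.

rng = np.random.default_rng(1)

def particle_filter(obs, n_particles=1000, a=0.9, sigma=0.5):
    """Estimate the log-likelihood of Poisson counts `obs` under a latent
    AR(1) log-intensity, using a bootstrap particle filter."""
    x = sigma / np.sqrt(1 - a**2) * rng.normal(size=n_particles)  # stationary start
    loglik = 0.0
    for y in obs:
        x = a * x + sigma * rng.normal(size=n_particles)          # propagate particles
        logw = y * x - np.exp(x) - lgamma(y + 1)                  # Poisson log-weights
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                            # accumulate likelihood estimate
        x = rng.choice(x, size=n_particles, p=w / w.sum())        # multinomial resampling
    return loglik

print(particle_filter(obs=[2, 0, 1, 3, 1, 0, 2]))
```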
This paper describes the structure of solutions to Kolmogorov's equations for nonhomogeneous jump Markov processes and applications of these results to the control of jump stochastic systems. These equations were studied by Feller (1940), who clarified in 1945, in the errata to that paper, that some of its results covered only nonexplosive Markov processes. We present the results for possibly explosive Markov processes. The paper is based on the invited talk presented by the authors at the International Conference dedicated to the 200th anniversary of the birth of P. L. Chebyshev.
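
For context, in one common notation (not necessarily the paper's), let $P(s,x;t,B)$ be the transition function, $Q(s,x,dy)$ the jump intensity measure on the state space $X$, and $q(s,x) = Q(s,x, X\setminus\{x\})$ the total jump rate. Setting explosion issues aside (they are precisely the point of the paper), Kolmogorov's backward and forward equations for a jump Markov process then read

$$\frac{\partial}{\partial s} P(s,x;t,B) \;=\; q(s,x)\,P(s,x;t,B) \;-\; \int_{X} P(s,y;t,B)\,Q(s,x,dy),$$

$$\frac{\partial}{\partial t} P(s,x;t,B) \;=\; \int_{X} Q(t,y,B\setminus\{y\})\,P(s,x;t,dy) \;-\; \int_{B} q(t,y)\,P(s,x;t,dy).$$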
220 - Anru Zhang, Mengdi Wang 2018
Model reduction of Markov processes is a basic problem in modeling state-transition systems. Motivated by the state aggregation approach rooted in control theory, we study the statistical state compression of a discrete-state Markov chain from empirical trajectories. Through the lens of spectral decomposition, we study the rank and features of Markov processes, as well as properties like representability, aggregability, and lumpability. We develop spectral methods for estimating the transition matrix of a low-rank Markov model, estimating the leading subspace spanned by Markov features, and recovering latent structures like state aggregation and lumpable partition of the state space. We prove statistical upper bounds for the estimation errors and nearly matching minimax lower bounds. Numerical studies are performed on synthetic data and a dataset of New York City taxi trips.
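
As a rough illustration of the pipeline described above (empirical transition frequencies, a truncated SVD, and clustering of the resulting state embedding), here is a minimal Python sketch. It conveys the general idea of spectral state aggregation only; it is not the paper's estimator, and the toy block-structured chain and the use of scikit-learn's KMeans are assumptions of this example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative only: a bare-bones version of spectral state compression for a
# discrete-state Markov chain -- count empirical transitions, take a rank-r SVD,
# and cluster the singular-vector embedding to recover a state aggregation.
# It sketches the general idea, not the estimator or error bounds of the paper.

def aggregate_states(trajectory, n_states, r):
    """Return a label in {0,...,r-1} for each state, from one observed trajectory."""
    # empirical transition frequency matrix
    counts = np.zeros((n_states, n_states))
    for a, b in zip(trajectory[:-1], trajectory[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    P_hat = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    # rank-r truncated SVD: leading singular vectors span the "Markov features"
    U, s, Vt = np.linalg.svd(P_hat)
    embedding = U[:, :r] * s[:r]          # embed each state by its leading components

    # cluster states with similar transition behaviour -> aggregated meta-states
    return KMeans(n_clusters=r, n_init=10).fit_predict(embedding)

# toy usage: a random walk on 6 states with two weakly linked blocks {0,1,2} and {3,4,5}
rng = np.random.default_rng(2)
traj = [0]
for _ in range(20000):
    block = traj[-1] // 3
    if rng.random() < 0.02:                     # rare switch between blocks
        block = 1 - block
    traj.append(3 * block + rng.integers(3))    # uniform move inside the block
print(aggregate_states(traj, n_states=6, r=2))  # expect labels split as {0,1,2} vs {3,4,5}
```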