The spectral gap of a Markov chain can be bounded by the spectral gaps of constituent restriction chains and a projection chain, and the strength of such a bound is the content of various decomposition theorems. In this paper, we introduce a new parameter that allows us to improve upon these bounds. We further define a notion of orthogonality between the restriction chains and complementary restriction chains. This leads to a new Complementary Decomposition theorem, which does not require analyzing the projection chain. For $\epsilon$-orthogonal chains, this theorem may be iterated $O(1/\epsilon)$ times while only giving away a constant multiplicative factor on the overall spectral gap. As an application, we provide a $1/n$-orthogonal decomposition of the nearest neighbor Markov chain over $k$-class biased monotone permutations on $[n]$, as long as the number of particles in each class is at least $C\log n$. This allows us to apply the Complementary Decomposition theorem iteratively $n$ times to prove the first polynomial bound on the spectral gap when $k$ is as large as $\Theta(n/\log n)$. The previous best known bound assumed $k$ was at most a constant.
We present a new lower bound on the spectral gap of the Glauber dynamics for the Gibbs distribution of a spectrally independent $q$-spin system on a graph $G = (V,E)$ with maximum degree $\Delta$. Notably, for several interesting examples, our bound c
In this paper we introduce a notion of spectral approximation for directed graphs. While there are many potential ways one might define approximation for directed graphs, most of them are too strong to allow sparse approximations in general. In contr
We study the following learning problem with dependent data: Observing a trajectory of length $n$ from a stationary Markov chain with $k$ states, the goal is to predict the next state. For $3 \leq k \leq O(\sqrt{n})$, using techniques from universal com
The spectral gap $\gamma$ of an ergodic and reversible Markov chain is an important parameter measuring the asymptotic rate of convergence. In applications, the transition matrix $P$ may be unknown, yet one sample of the chain up to a fixed time $t$ m
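As a concrete illustration of the spectral gap $\gamma$ that these abstracts analyze, the following is a minimal numerical sketch (not taken from any of the papers above): for a reversible chain with a symmetric transition matrix, $\gamma = 1 - \lambda_2$, where $\lambda_2$ is the second-largest eigenvalue of $P$. The lazy random walk on a cycle used here is a standard toy example chosen for illustration only.

```python
import numpy as np

# Lazy random walk on a cycle of n states: hold with probability 1/2,
# otherwise step to a uniformly chosen neighbor. The chain is reversible
# with uniform stationary distribution, and laziness keeps the spectrum
# nonnegative, so the spectral gap is gamma = 1 - lambda_2.
n = 10
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

# P is symmetric here, so its eigenvalues are real.
eigs = np.sort(np.linalg.eigvalsh(P))[::-1]
gamma = 1.0 - eigs[1]

# For this chain the gap is known in closed form:
# gamma = (1/2) * (1 - cos(2*pi/n)), which vanishes like pi^2 / n^2,
# reflecting the slow (diffusive) mixing of the cycle walk.
print(gamma)
```

For non-symmetric reversible chains one would first symmetrize $P$ with respect to the stationary distribution before taking eigenvalues; the cycle walk avoids that step because its stationary distribution is uniform.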
Convergence rates of Markov chains have been widely studied in recent years. In particular, quantitative bounds on convergence rates have been studied in various forms by Meyn and Tweedie [Ann. Appl. Probab. 4 (1994) 981-1101], Rosenthal [J. Amer. St