
A Markov basis for two-state toric homogeneous Markov chain model without initial parameters

 Added by Hisayuki Hara
 Publication date 2010
Language: English





We derive a Markov basis consisting of moves of degree at most three for the two-state toric homogeneous Markov chain model of arbitrary length without parameters for initial states. In addition to the moves of degree one and degree two that suffice for the toric homogeneous Markov chain model with parameters for initial states, our basis contains moves of degree one and degree three that alter the initial frequencies.
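To illustrate what a move must preserve: in the model without initial parameters, the sufficient statistics of a sample of paths are the total transition counts, so a move may only swap one multiset of paths for another with the same totals. A minimal sketch with a hypothetical degree-two swap (an illustrative example, not a move taken from the paper's basis):

```python
from collections import Counter

def transition_counts(path):
    """Count adjacent-pair transitions in a state path."""
    return Counter(zip(path, path[1:]))

# Replacing the first pair of two-state length-4 paths by the second
# preserves the total transition counts, so the swap is a valid move
# for the model without initial parameters.
pair_a = ["0011", "0110"]
pair_b = ["0111", "0010"]

total_a = sum((transition_counts(p) for p in pair_a), Counter())
total_b = sum((transition_counts(p) for p in pair_b), Counter())
print(total_a == total_b)  # True
```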

Related research

Markov chain models are used in various fields, such as the behavioral sciences and econometrics. Although the goodness of fit of the model is usually assessed by large-sample approximation, it is desirable to use conditional tests when the sample size is not large. We study Markov bases for performing conditional tests of the toric homogeneous Markov chain model, which is the envelope exponential family of the usual homogeneous Markov chain model. We give a complete description of a Markov basis in the following cases: i) two states and arbitrary length; ii) an arbitrary finite state space and length three. The general case remains a conjecture. We also present a numerical example of conditional tests based on our Markov basis.
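A conditional test walks over the fiber of the observed sufficient statistics using the basis moves. For a toy two-state example, the fiber can simply be brute-forced; this sketch (not the paper's construction) enumerates every unordered pair of length-4 paths sharing the transition counts of a hypothetical sample:

```python
from collections import Counter
from itertools import product

def transition_counts(path):
    """Sufficient statistic of the model: counts of adjacent transitions."""
    return Counter(zip(path, path[1:]))

# Brute-force the fiber of the two-path sample {"0011", "0110"}: all
# unordered pairs of two-state length-4 paths with the same total
# transition counts.  A Markov basis must connect every such fiber.
target = transition_counts("0011") + transition_counts("0110")
paths = ["".join(p) for p in product("01", repeat=4)]
fiber = {tuple(sorted((x, y)))
         for i, x in enumerate(paths) for y in paths[i:]
         if transition_counts(x) + transition_counts(y) == target}
print(len(fiber))
```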
We consider the three-state toric homogeneous Markov chain model (THMC) without loops and initial parameters. At time $T$, the size of the design matrix is $6 \times 3\cdot 2^{T-1}$ and the convex hull of its columns is the model polytope. We study the behavior of this polytope for $T\geq 3$ and show that it is defined by 24 facets for all $T\ge 5$. Moreover, we give a complete description of these facets. From this, we deduce that the toric ideal associated with the design matrix is generated by binomials of degree at most 6. Our proof is based on a result due to Sturmfels, who gave a bound on the degree of the generators of a toric ideal, provided the corresponding toric variety is normal. In our setting, we establish the normality of the toric variety associated to the THMC model by studying the geometric properties of the model polytope.
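The column count $3\cdot 2^{T-1}$ follows because a loop-free path has 3 choices for its first state and 2 for each subsequent one; a quick enumeration confirms it:

```python
from itertools import product

def loop_free_paths(states, T):
    """Enumerate length-T state paths with no two equal consecutive states."""
    return [p for p in product(states, repeat=T)
            if all(a != b for a, b in zip(p, p[1:]))]

# The design matrix of the three-state THMC without loops has one column
# per loop-free path: 3 choices for the first state, 2 for each later one.
for T in range(3, 7):
    assert len(loop_free_paths(range(3), T)) == 3 * 2 ** (T - 1)
print("column count matches 3 * 2**(T-1)")
```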
We prove the conjecture by Diaconis and Eriksson (2006) that the Markov degree of the Birkhoff model is three. In fact, we prove the conjecture in a generalization of the Birkhoff model, where each voter is asked to rank a fixed number, say r, of candidates among all candidates. We also give an exhaustive characterization of Markov bases for small r.
We study the following learning problem with dependent data: Observing a trajectory of length $n$ from a stationary Markov chain with $k$ states, the goal is to predict the next state. For $3 \leq k \leq O(\sqrt{n})$, using techniques from universal compression, the optimal prediction risk in Kullback-Leibler divergence is shown to be $\Theta(\frac{k^2}{n}\log \frac{n}{k^2})$, in contrast to the optimal rate of $\Theta(\frac{\log \log n}{n})$ for $k=2$ previously shown in Falahatgar et al., 2016. These rates, slower than the parametric rate of $O(\frac{k^2}{n})$, can be attributed to the memory in the data, as the spectral gap of the Markov chain can be arbitrarily small. To quantify the memory effect, we study irreducible reversible chains with a prescribed spectral gap. In addition to characterizing the optimal prediction risk for two states, we show that, as long as the spectral gap is not excessively small, the prediction risk in the Markov model is $O(\frac{k^2}{n})$, which coincides with that of an iid model with the same number of parameters.
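A simple baseline for this prediction task is an add-one-smoothed transition estimate (a standard Laplace-smoothing sketch, not the optimal predictor analyzed above):

```python
import numpy as np

def addone_next_state_probs(trajectory, k):
    """Predicted distribution of the next state from an add-one-smoothed
    transition-count estimate, for a trajectory over states 0..k-1.
    (Illustrative baseline, not the paper's optimal predictor.)"""
    counts = np.ones((k, k))                 # Laplace smoothing
    for a, b in zip(trajectory, trajectory[1:]):
        counts[a, b] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)
    return probs[trajectory[-1]]             # condition on the last state

p = addone_next_state_probs([0, 1, 0, 1, 1, 0], k=2)
print(p)  # [0.25 0.75]
```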
Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based on an estimate of the variance of the asymptotic normal distribution. We consider spectral and batch means methods for estimating this variance. In particular, we establish conditions which guarantee that these estimators are strongly consistent as the simulation effort increases. In addition, for the batch means and overlapping batch means methods we establish conditions ensuring consistency in the mean-square sense which in turn allows us to calculate the optimal batch size up to a constant of proportionality. Finally, we examine the empirical finite-sample properties of spectral variance and batch means estimators and provide recommendations for practitioners.
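The nonoverlapping batch means estimator described above can be sketched in a few lines (a textbook form; the fixed batch count and the AR(1) target are illustrative assumptions, not from the text):

```python
import numpy as np

def batch_means_mcse(x, n_batches=30):
    """Monte Carlo standard error of the sample mean via nonoverlapping
    batch means."""
    x = np.asarray(x, dtype=float)
    b = x.size // n_batches                  # batch size
    x = x[:b * n_batches]                    # drop the remainder
    means = x.reshape(n_batches, b).mean(axis=1)
    var_hat = b * means.var(ddof=1)          # asymptotic-variance estimate
    return np.sqrt(var_hat / (b * n_batches))

# Usage on a simple AR(1) chain (illustrative, not from the text):
rng = np.random.default_rng(0)
chain = np.empty(100_000)
chain[0] = 0.0
for t in range(1, chain.size):
    chain[t] = 0.9 * chain[t - 1] + rng.normal()
print(batch_means_mcse(chain))
```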