
Hierarchical Models, Marginal Polytopes, and Linear Codes

 Added by Thomas Kahle
Publication date: 2008
Language: English





In this paper, we explore a connection between binary hierarchical models, their marginal polytopes and codeword polytopes, the convex hulls of linear codes. The class of linear codes that are realizable by hierarchical models is determined. We classify all full dimensional polytopes with the property that their vertices form a linear code and give an algorithm that determines them.
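The key object in the abstract, a vertex set that "forms a linear code", is easy to experiment with. The sketch below is a minimal illustration of that condition only, not the paper's classification algorithm: it enumerates a small binary linear code from a generator set and checks whether a given set of 0/1 vertices is closed under coordinatewise XOR (and contains the zero word). The function names and the [3, 2] parity-check example are my own choices.

```python
import itertools
import numpy as np

def span_gf2(generators):
    """All GF(2) linear combinations of the generator rows, i.e. the linear code."""
    n = len(generators[0])
    code = set()
    for coeffs in itertools.product([0, 1], repeat=len(generators)):
        word = np.zeros(n, dtype=int)
        for c, g in zip(coeffs, generators):
            if c:
                word ^= np.asarray(g, dtype=int)
        code.add(tuple(word))
    return code

def vertices_form_linear_code(vertices):
    """Check whether a set of 0/1 vertices contains the zero word and is closed
    under coordinatewise XOR, i.e. whether it is a binary linear code."""
    vs = {tuple(v) for v in vertices}
    n = len(next(iter(vs)))
    if (0,) * n not in vs:
        return False
    return all(tuple(a ^ b for a, b in zip(u, w)) in vs for u in vs for w in vs)

# Example: the [3, 2] single parity-check (even-weight) code
code = span_gf2([(1, 1, 0), (0, 1, 1)])
print(sorted(code))                     # codewords = vertices of the codeword polytope
print(vertices_form_linear_code(code))  # True
```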



Related research

The existence of the maximum likelihood estimate in hierarchical loglinear models is crucial to the reliability of inference for this model. Determining whether the estimate exists is equivalent to finding whether the sufficient statistics vector $t$ belongs to the boundary of the marginal polytope of the model. The dimension of the smallest face $F_t$ containing $t$ determines the dimension of the reduced model which should be considered for correct inference. For higher-dimensional problems, it is not possible to compute $F_t$ exactly. Massam and Wang (2015) found an outer approximation to $F_t$ using a collection of sub-models of the original model. This paper refines the methodology to find an outer approximation and devises a new methodology to find an inner approximation. The inner approximation is given not in terms of a face of the marginal polytope, but in terms of a subset of the vertices of $F_t$. Knowing $F_t$ exactly indicates which cell probabilities have maximum likelihood estimates equal to $0$. When $F_t$ cannot be obtained exactly, we can use, first, the outer approximation $F_2$ to reduce the dimension of the problem and, then, the inner approximation $F_1$ to obtain correct estimates of cell probabilities corresponding to elements of $F_1$ and improve the estimates of the remaining probabilities corresponding to elements in $F_2 \setminus F_1$. Using both real-world and simulated data, we illustrate our results, and show that our methodology scales to high dimensions.
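For intuition about the facial sets discussed in the abstract above, here is a deliberately brute-force sketch (my own construction, not the authors' approximation methodology, which exists precisely because this approach does not scale): it finds the smallest face containing a normalized sufficient-statistics vector by solving one linear program per cell, and the cells left out are exactly those whose MLE cell probability is zero. The function name and the use of scipy.optimize.linprog are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def facial_set(A, t):
    """Facial set of the smallest face F_t of the marginal polytope
    conv(columns of A) that contains t.

    A is the design matrix whose columns are the 0/1 statistics of the cells,
    and t = A @ p for the empirical cell-probability vector p (t normalized).
    Cell i lies in the facial set iff some representation t = A @ lam with
    lam >= 0 and sum(lam) = 1 puts positive weight on column i; the remaining
    cells have MLE cell probability 0.  One LP per cell, so this is only
    feasible for small tables."""
    m, n = A.shape
    A_eq = np.vstack([A, np.ones((1, n))])
    b_eq = np.append(np.asarray(t, dtype=float), 1.0)
    facial = []
    for i in range(n):
        c = np.zeros(n)
        c[i] = -1.0                        # maximize lam_i
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        if res.success and -res.fun > 1e-9:
            facial.append(i)
    return facial
```
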
For statistical analysis of multiway contingency tables, we propose modeling interaction terms in each maximal compact component of a hierarchical model. This approach allows us to search for parsimonious models with fewer degrees of freedom than the usual hierarchical model, while preserving the conditional independence structures of the hierarchical model. We discuss estimation and exact tests for the proposed model and illustrate the advantages of the proposed modeling with several data sets.
For some variants of regression models, including partial, measurement-error (errors-in-variables), latent-effects, semi-parametric, and otherwise corrupted linear models, the classical parametric tests generally do not perform well. The various modifications and generalizations considered extensively in the literature rest on stringent regularity assumptions that are unlikely to be tenable in many applications. In such non-standard cases, however, rank-based tests adapt better, and incorporating rank analysis of covariance tools further enhances their power-efficiency. Numerical studies and a real-data illustration show the superiority of rank-based inference in such corrupted linear models.
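A toy comparison (my own construction, not the authors' rank analysis of covariance procedure) hints at why a rank-based test can hold up better than a classical parametric one when the predictor is corrupted by heavy-tailed measurement error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
x_true = rng.normal(size=n)
x_obs = x_true + rng.standard_t(df=2, size=n)    # heavy-tailed measurement error
y = 0.5 * x_true + rng.standard_t(df=2, size=n)  # heavy-tailed regression noise

# Classical parametric test: OLS t-test of the slope on the corrupted predictor
ols = stats.linregress(x_obs, y)

# Rank-based alternative: Kendall's tau test of monotone association
tau, p_rank = stats.kendalltau(x_obs, y)

print(f"OLS slope p-value:         {ols.pvalue:.4f}")
print(f"Kendall rank-test p-value: {p_rank:.4f}")
```
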
We introduce estimation and test procedures through divergence minimization for models satisfying linear constraints with an unknown parameter. Several statistical examples and motivations are given. These procedures extend the empirical likelihood (EL) method and share common features with generalized empirical likelihood (GEL). We treat the problems of existence and characterization of the divergence projections of probability measures onto sets of signed finite measures. Our approach allows for a study of the estimates under misspecification. The asymptotic behavior of the proposed estimates is studied using the dual representation of the divergences and the explicit forms of the divergence projections. We discuss the choice of divergence from several perspectives and address the efficiency and robustness properties of minimum divergence estimates. A simulation study shows that the Hellinger divergence enjoys good efficiency and robustness properties.
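For intuition only, the sketch below carries out the inner divergence-projection step of the framework summarized above for a single moment constraint E[X] = theta, minimizing the Hellinger divergence from the empirical weights; profiling the resulting value over theta would give a minimum divergence estimate. The choice of constraint, the Hellinger form, and the use of SLSQP are simplifying assumptions of mine, not the authors' general construction.

```python
import numpy as np
from scipy.optimize import minimize

def hellinger_profile(x, theta):
    """Project the empirical measure onto the set of distributions supported on
    the sample that satisfy the linear constraint E[X] = theta, minimizing the
    Hellinger divergence 2 * sum((sqrt(w_i) - sqrt(1/n))**2)."""
    n = len(x)
    w0 = np.full(n, 1.0 / n)

    def hellinger(w):
        # clip guards against tiny negative iterates from the optimizer
        return 2.0 * np.sum((np.sqrt(np.clip(w, 0.0, None)) - np.sqrt(w0)) ** 2)

    cons = [
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},          # probabilities sum to 1
        {"type": "eq", "fun": lambda w: np.sum(w * (x - theta))},  # moment constraint
    ]
    res = minimize(hellinger, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.fun, res.x

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.3, scale=1.0, size=50)
div0, _ = hellinger_profile(sample, theta=0.0)
div3, _ = hellinger_profile(sample, theta=0.3)
print(div0, div3)   # minimizing this profile over theta yields the estimate
```
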
We consider the problem of identifying stationary solutions to linear rational expectations models from the second moments of observable data. Observational equivalence is characterized, and necessary and sufficient conditions are provided for: (i) identification under affine restrictions, (ii) generic identification under affine restrictions of analytically parametrized models, and (iii) local identification under non-linear restrictions. The results strongly resemble the classical theory for VARMA models, although significant points of departure are also documented.