
Quadratic distances on probabilities: A unified foundation

Added by Marianthi Markatou
Publication date: 2008
Language: English





This work builds a unified framework for the study of quadratic form distance measures as they are used in assessing the goodness of fit of models. Many important procedures have this structure, but the theory for these methods is dispersed and incomplete. Central to the statistical analysis of these distances is the spectral decomposition of the kernel that generates the distance. We show how this determines the limiting distribution of natural goodness-of-fit tests. Additionally, we develop a new notion, the spectral degrees of freedom of the test, based on this decomposition. The degrees of freedom are easy to compute and estimate, and can be used as a guide in the construction of useful procedures in this class.
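
As a rough orientation to the quantities involved, the sketch below forms the empirical quadratic distance from a centered smoothing kernel and summarizes the kernel's spectrum. Assumptions to note: the kernel is Gaussian with a hand-picked bandwidth, centering under the null model is done by Monte Carlo rather than analytically, and spectral_dof uses the Satterthwaite-type ratio (sum of eigenvalues)^2 / (sum of squared eigenvalues), which is one natural reading of "spectral degrees of freedom" rather than a transcription of the paper's definition.

```python
import numpy as np

def centered_kernel_matrix(x, kernel, model_sample):
    """Center a smoothing kernel with respect to a reference sample from the
    hypothesized model G (Monte Carlo centering; the theory centers under G)."""
    Kxx = kernel(x[:, None], x[None, :])
    m = kernel(x[:, None], model_sample[None, :]).mean(axis=1)   # ~ E_G K(x_i, Y)
    mm = kernel(model_sample[:, None], model_sample[None, :]).mean()
    return Kxx - m[:, None] - m[None, :] + mm

def quadratic_distance_stat(x, kernel, model_sample):
    """V-statistic form of the empirical quadratic distance between the data and G."""
    return centered_kernel_matrix(x, kernel, model_sample).mean()

def spectral_dof(x, kernel, model_sample):
    """Satterthwaite-type summary (sum lambda)^2 / sum lambda^2 of the estimated spectrum."""
    lam = np.linalg.eigvalsh(centered_kernel_matrix(x, kernel, model_sample) / len(x))
    lam = lam[lam > 1e-12]                    # keep the numerically positive part
    return lam.sum() ** 2 / (lam ** 2).sum()

# Toy use: assessing fit to N(0, 1) with a Gaussian kernel of bandwidth h.
rng = np.random.default_rng(0)
h = 0.5
gauss = lambda a, b: np.exp(-(a - b) ** 2 / (2 * h ** 2))
data = rng.normal(size=200)
ref = rng.normal(size=1000)                   # Monte Carlo sample from the null model
print(quadratic_distance_stat(data, gauss, ref), spectral_dof(data, gauss, ref))
```

Under the null hypothesis, the scaled statistic behaves like a weighted sum of chi-squared variables with weights given by the kernel's eigenvalues, which is why the spectrum, and a degrees-of-freedom summary of it, drives the calibration of tests in this class.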



Related research

An important problem in large-scale inference is the identification of variables that have large correlations or partial correlations. Recent work has yielded breakthroughs in the ultra-high dimensional setting when the sample size $n$ is fixed and the dimension $p \rightarrow \infty$ ([Hero, Rajaratnam 2011, 2012]). Despite these advances, the correlation screening framework suffers from some serious practical, methodological and theoretical deficiencies. For instance, theoretical safeguards for partial correlation screening require that the population covariance matrix be block diagonal. This block-sparsity assumption is, however, highly restrictive in numerous practical applications. As a second example, results in the correlation and partial correlation screening framework require the estimation of dependence measures or functionals, which can be computationally prohibitive. In this paper, we propose a unifying approach to correlation and partial correlation mining which goes beyond the block diagonal correlation structure, thus yielding a methodology that is suitable for modern applications. By making connections to random geometric graphs, the number of highly correlated or partially correlated variables is shown to have novel compound Poisson finite-sample characterizations, which hold both for finite $p$ and when $p \rightarrow \infty$. The unifying framework also demonstrates an important duality between correlation and partial correlation screening, with important theoretical and practical consequences.
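
For concreteness, the screening step that this finite-sample theory describes can be sketched as thresholding the sample correlation matrix and counting how many variables are implicated. The function name correlation_screen, the threshold value, and the discovery count below are illustrative choices, not the authors' estimators; the compound Poisson characterizations in the abstract concern the null behaviour of such counts.

```python
import numpy as np

def correlation_screen(X, rho):
    """Flag variable pairs whose sample correlation exceeds rho in magnitude.
    X is an n x p data matrix with n small and p large; returns the discovered
    pairs and the number of variables involved in at least one pair."""
    R = np.corrcoef(X, rowvar=False)              # p x p sample correlation matrix
    iu = np.triu_indices_from(R, k=1)             # upper triangle, i < j
    hits = [(i, j) for i, j in zip(*iu) if abs(R[i, j]) > rho]
    n_discoveries = len({v for pair in hits for v in pair})
    return hits, n_discoveries

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 500))                    # n = 20 fixed, p = 500 large
pairs, n_vars = correlation_screen(X, rho=0.8)
print(len(pairs), "pairs above threshold;", n_vars, "variables flagged")
```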
Distance correlation is a new measure of dependence between random vectors. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but unlike the classical definition of correlation, distance correlation is zero only if the random vectors are independent. The empirical distance dependence measures are based on certain Euclidean distances between sample elements rather than sample moments, yet have a compact representation analogous to the classical covariance and correlation. Asymptotic properties and applications in testing independence are discussed. Implementation of the test and Monte Carlo results are also presented.
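
The empirical quantities have a compact form: double-center each pairwise Euclidean distance matrix, then average elementwise products. A minimal sketch of that computation (function names are mine, following the standard sample definition):

```python
import numpy as np

def _double_centered_distances(x):
    """Pairwise Euclidean distance matrix with row, column and grand means removed."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x, y):
    """Empirical distance correlation between samples x (n x p) and y (n x q)."""
    A, B = _double_centered_distances(x), _double_centered_distances(y)
    dcov2 = (A * B).mean()                         # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    denom = np.sqrt(dvar_x * dvar_y)
    return 0.0 if denom == 0 else np.sqrt(max(dcov2, 0.0) / denom)

rng = np.random.default_rng(2)
x = rng.normal(size=(300, 1))
y = x ** 2 + 0.1 * rng.normal(size=(300, 1))       # dependent but uncorrelated
print(distance_correlation(x, y))                  # noticeably above 0
print(np.corrcoef(x.ravel(), y.ravel())[0, 1])     # near 0
```

The example pair is uncorrelated yet clearly dependent, so the product-moment correlation is near zero while the distance correlation is not, illustrating the property highlighted in the abstract.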
Hidehiko Kamiya, 2014
Two Bayesian models with different sampling densities are said to be marginally equivalent if the joint distribution of observables and the parameter of interest is the same for both models. We discuss marginal equivalence in the general framework of group invariance. We introduce a class of sampling models and establish marginal equivalence when the prior for the nuisance parameter is relatively invariant. We also obtain some robustness properties of invariant statistics under our sampling models. Besides the prototypical example of $v$-spherical distributions, we apply our general results to two examples---analysis of affine shapes and principal component analysis.
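
In generic notation (not necessarily the paper's), with $\psi$ the parameter of interest and $\lambda$ the nuisance parameter given a common prior on $\psi$, marginal equivalence of two sampling densities $p_1$ and $p_2$ with nuisance priors $\pi_1$ and $\pi_2$ amounts to

$$\int p_1(x \mid \psi, \lambda)\, \pi_1(\lambda \mid \psi)\, d\lambda \;=\; \int p_2(x \mid \psi, \lambda)\, \pi_2(\lambda \mid \psi)\, d\lambda \qquad \text{for all } x \text{ and } \psi,$$

so that the joint law of the observable and $\psi$, and hence every posterior statement about $\psi$, agrees across the two models.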
Recently, the well-known Liu estimator (Liu, 1993) has attracted researchers' attention for regression parameter estimation in an ill-conditioned linear model. It has also been argued that imposing a sub-space hypothesis restriction on the parameters improves estimation by shrinking toward non-sample information. Chang (2015) proposed the almost unbiased Liu estimator (AULE) for binary logistic regression. In this article, some improved almost unbiased Liu-type estimators, namely the restricted AULE, preliminary test AULE, Stein-type shrinkage AULE and its positive part, are proposed for estimating the regression parameters in the binary logistic regression model, building on the work of Chang (2015). The performances of the newly defined estimators are analysed through some numerical results. A real data example is also provided to support the findings.
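
As orientation only: the estimators named above build on the Liu-type shrinkage of the logistic MLE, commonly written as beta_d = (X'WX + I)^{-1}(X'WX + dI) beta_MLE with W = diag(p_i(1 - p_i)) evaluated at the MLE. The sketch below implements that building block; the exact AULE formula and its restricted, preliminary-test and Stein-type variants from Chang (2015) and this article are not reproduced here, and the function names and the choice d = 0.5 are assumptions for illustration.

```python
import numpy as np

def logistic_mle(X, y, n_iter=25):
    """Maximum likelihood fit of a binary logistic model via Newton/IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    H = X.T @ ((p * (1.0 - p))[:, None] * X)       # X' W X at the MLE
    return beta, H

def liu_type_logistic(X, y, d=0.5):
    """Liu-type shrinkage of the logistic MLE:
    beta_d = (X'WX + I)^{-1} (X'WX + d I) beta_mle."""
    beta_mle, H = logistic_mle(X, y)
    I = np.eye(X.shape[1])
    return np.linalg.solve(H + I, (H + d * I) @ beta_mle)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
X[:, 3] = X[:, 2] + 0.01 * rng.normal(size=200)    # near-collinear columns
y = (rng.random(200) < 1 / (1 + np.exp(-(X[:, 0] - X[:, 1])))).astype(float)
print(liu_type_logistic(X, y, d=0.5))
```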
Weizhen Wang, 2021
We introduce a general method, named the h-function method, to unify the constructions of the level-$\alpha$ exact test and the $1-\alpha$ exact confidence interval. Using this method, any confidence interval can be improved as follows: i) an approximate interval, including a point estimator, is modified to an exact interval; ii) an exact interval is refined to an interval that is a subset of the previous one. Two real datasets are used to illustrate the method.
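
The abstract does not spell out the h-function itself; the exact-test/exact-interval duality it unifies can be illustrated with the textbook binomial case, where inverting a family of equal-tailed exact tests over a grid of null values recovers the Clopper-Pearson $1-\alpha$ interval. This is a standard construction shown only as context, not the paper's method; the function name and grid size are arbitrary.

```python
import numpy as np
from scipy.stats import binom

def exact_binomial_ci(x, n, alpha=0.05, grid=10_000):
    """1 - alpha exact (Clopper-Pearson style) interval for a binomial proportion,
    obtained by inverting equal-tailed exact tests over a grid of null values:
    keep every theta whose acceptance region contains the observed count x."""
    thetas = np.linspace(1e-6, 1 - 1e-6, grid)
    keep = [t for t in thetas
            if binom.cdf(x, n, t) > alpha / 2        # P(X <= x) not too small
            and binom.sf(x - 1, n, t) > alpha / 2]   # P(X >= x) not too small
    return min(keep), max(keep)

print(exact_binomial_ci(x=7, n=20))   # interval containing theta-hat = 0.35
```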