
Flexible results for quadratic forms with applications to variance components estimation

Posted by Murat A. Erdogdu
Publication date: 2015
Research field: Mathematical Statistics
Paper language: English





We derive convenient uniform concentration bounds and finite sample multivariate normal approximation results for quadratic forms, then describe some applications involving variance components estimation in linear random-effects models. Random-effects models and variance components estimation are classical topics in statistics, with a corresponding well-established asymptotic theory. However, our finite sample results for quadratic forms provide additional flexibility for easily analyzing random-effects models in non-standard settings, which are becoming more important in modern applications (e.g. genomics). For instance, in addition to deriving novel non-asymptotic bounds for variance components estimators in classical linear random-effects models, we provide a concentration bound for variance components estimators in linear models with correlated random effects. Our general concentration bound is a uniform version of the Hanson-Wright inequality. The main normal approximation result in the paper is derived using Reinert and Röllin's (2009) embedding technique and multivariate Stein's method with exchangeable pairs.
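For reference, the classical (non-uniform) Hanson-Wright inequality that the paper's uniform bound extends can be stated as follows: for a random vector $x = (x_1, \dots, x_n)$ with independent, mean-zero coordinates whose sub-Gaussian norms are bounded by $K$, and any fixed matrix $A$,

$$ \Pr\left( \left| x^\top A x - \mathbb{E}\, x^\top A x \right| > t \right) \le 2 \exp\left( -c \, \min\left( \frac{t^2}{K^4 \lVert A\rVert_F^2}, \; \frac{t}{K^2 \lVert A\rVert} \right) \right), $$

where $\lVert A\rVert_F$ and $\lVert A\rVert$ are the Frobenius and operator norms and $c > 0$ is an absolute constant.

As a concrete (textbook, not paper-specific) instance of estimating variance components through quadratic forms, the sketch below simulates a balanced one-way random-effects model and computes the classical ANOVA estimators; the parameter values and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 10                  # m groups, n observations per group (assumed sizes)
sigma_a, sigma_e = 1.5, 1.0    # true random-effect and error standard deviations (assumed)

a = rng.normal(0.0, sigma_a, size=m)                    # group-level random effects
y = a[:, None] + rng.normal(0.0, sigma_e, size=(m, n))  # y_ij = a_i + e_ij

# ANOVA estimators are built from two quadratic forms in y:
# E[MSE] = sigma_e^2 and E[MSA] = sigma_e^2 + n * sigma_a^2
group_means = y.mean(axis=1)
grand_mean = y.mean()
msa = n * np.sum((group_means - grand_mean) ** 2) / (m - 1)    # between-group mean square
mse = np.sum((y - group_means[:, None]) ** 2) / (m * (n - 1))  # within-group mean square

sigma_e2_hat = mse
sigma_a2_hat = (msa - mse) / n
print(sigma_a2_hat, sigma_e2_hat)  # should land near 2.25 and 1.0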




Read also

In this work we study the estimation of the density of a totally positive random vector. Total positivity of the distribution of a random vector implies a strong form of positive dependence between its coordinates and, in particular, it implies positive association. Since estimating a totally positive density is a non-parametric problem, we take on a (modified) kernel density estimation approach. Our main result is that the sum of scaled standard Gaussian bumps centered at a min-max closed set provably yields a totally positive distribution. Hence, our strategy for producing a totally positive estimator is to form the min-max closure of the set of samples, and output a sum of Gaussian bumps centered at the points in this set. We can frame this sum as a convolution between the uniform distribution on a min-max closed set and a scaled standard Gaussian. We further conjecture that convolving any totally positive density with a standard Gaussian remains totally positive.
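A minimal sketch of the strategy described above, under assumed choices of kernel bandwidth and sample size (function names and parameters are illustrative, not taken from the paper): close the sample under coordinatewise minima and maxima, then evaluate a uniform mixture of isotropic Gaussian bumps centered at the closed set.

import itertools
import numpy as np

def min_max_closure(points, max_rounds=50):
    # Close a finite point set under coordinatewise min and max.
    closed = {tuple(p) for p in points}
    for _ in range(max_rounds):
        new = set()
        for u, v in itertools.combinations(closed, 2):
            u, v = np.array(u), np.array(v)
            new.add(tuple(np.minimum(u, v)))
            new.add(tuple(np.maximum(u, v)))
        if new <= closed:          # no new points: the set is min-max closed
            break
        closed |= new
    return np.array(sorted(closed))

def bump_density(x, centers, bandwidth=0.5):
    # Uniform mixture of scaled standard Gaussian bumps centered at `centers`.
    d = centers.shape[1]
    sq = np.sum((centers - x) ** 2, axis=1)
    norm_const = (2 * np.pi) ** (d / 2) * bandwidth ** d
    return np.mean(np.exp(-sq / (2 * bandwidth ** 2)) / norm_const)

rng = np.random.default_rng(0)
sample = rng.normal(size=(20, 2))          # toy 2-d sample
centers = min_max_closure(sample)
print(len(centers), bump_density(np.zeros(2), centers))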
T. Royen (2007)
From a suitable integral representation of the Laplace transform of a positive semi-definite quadratic form of independent real random variables with not necessarily identical densities, a univariate integral representation is derived for the cumulative distribution function of the sample variance of i.i.d. random variables with a gamma density, supplementing former formulas of the author. Furthermore, from the above Laplace transform, Fourier series are obtained for the density and the distribution function of the sample variance of i.i.d. random variables with a uniform distribution. This distribution can be applied e.g. to a statistical test for a scale parameter.
We present new results for consistency of maximum likelihood estimators with a focus on multivariate mixed models. Our theory builds on the idea of using subsets of the full data to establish consistency of estimators based on the full data. It requires neither that the data consist of independent observations, nor that the observations can be modeled as a stationary stochastic process. Compared to existing asymptotic theory using the idea of subsets we substantially weaken the assumptions, bringing them closer to what suffices in classical settings. We apply our theory in two multivariate mixed models for which it was unknown whether maximum likelihood estimators are consistent. The models we consider have non-stochastic predictors and multivariate responses which are possibly mixed-type (some discrete and some continuous).
The log-concave projection is an operator that maps a d-dimensional distribution P to an approximating log-concave density. Prior work by Dümbgen et al. (2011) establishes that, with suitable metrics on the underlying spaces, this projection is continuous, but not uniformly continuous. In this work we prove a local uniform continuity result for log-concave projection -- in particular, establishing that this map is locally Hölder-(1/4) continuous. A matching lower bound verifies that this exponent cannot be improved. We also examine the implications of this continuity result for the empirical setting -- given a sample drawn from a distribution P, we bound the squared Hellinger distance between the log-concave projection of the empirical distribution of the sample, and the log-concave projection of P. In particular, this yields interesting statistical results for the misspecified setting, where P is not itself log-concave.
Principal component analysis is an important pattern recognition and dimensionality reduction tool in many applications. Principal components are computed as eigenvectors of a maximum likelihood covariance $\widehat{\Sigma}$ that approximates a population covariance $\Sigma$, and these eigenvectors are often used to extract structural information about the variables (or attributes) of the studied population. Since PCA is based on the eigendecomposition of the proxy covariance $\widehat{\Sigma}$ rather than the ground-truth $\Sigma$, it is important to understand the approximation error in each individual eigenvector as a function of the number of available samples. The recent results of Koltchinskii and Lounici yield such bounds. In the present paper we sharpen these bounds and show that eigenvectors can often be reconstructed to a required accuracy from a sample of strictly smaller size order.
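As a quick numerical illustration of the point above (a sketch under an assumed spiked covariance model, not the paper's bounds), one can track how the leading eigenvector of the maximum likelihood covariance $\widehat{\Sigma}$ approaches the leading eigenvector of $\Sigma$ as the sample size grows:

import numpy as np

rng = np.random.default_rng(0)
d = 50
# assumed population covariance with one well-separated top eigenvalue
eigvals = np.array([10.0] + [1.0] * (d - 1))
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
Sigma = Q @ np.diag(eigvals) @ Q.T
top_true = Q[:, 0]

for n in (25, 100, 400, 1600):
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    Sigma_hat = X.T @ X / n              # ML covariance (mean known to be zero)
    _, V = np.linalg.eigh(Sigma_hat)
    top_hat = V[:, -1]                   # eigenvector of the largest sample eigenvalue
    # sign-aligned l2 error between sample and population leading eigenvectors
    err = min(np.linalg.norm(top_hat - top_true), np.linalg.norm(top_hat + top_true))
    print(f"n={n:5d}  leading eigenvector error = {err:.3f}")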