We derive convenient uniform concentration bounds and finite sample multivariate normal approximation results for quadratic forms, then describe some applications involving variance components estimation in linear random-effects models. Random-effects models and variance components estimation are classical topics in statistics, with a corresponding well-established asymptotic theory. However, our finite sample results for quadratic forms provide additional flexibility for easily analyzing random-effects models in non-standard settings, which are becoming more important in modern applications (e.g. genomics). For instance, in addition to deriving novel non-asymptotic bounds for variance components estimators in classical linear random-effects models, we provide a concentration bound for variance components estimators in linear models with correlated random effects. Our general concentration bound is a uniform version of the Hanson-Wright inequality. The main normal approximation result in the paper is derived using Reinert and Röllin's (2009) embedding technique and multivariate Stein's method with exchangeable pairs.
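For context (not quoted from the paper, and only the classical single-matrix baseline that the abstract's uniform bound extends), the standard Hanson-Wright inequality for a vector $X$ with independent, mean-zero, sub-Gaussian coordinates whose sub-Gaussian norms are bounded by $K$, and a fixed matrix $A$, can be stated as
\[
\mathbb{P}\left( \left| X^{\top} A X - \mathbb{E}\, X^{\top} A X \right| > t \right) \le 2 \exp\left( -c \min\left( \frac{t^2}{K^4 \|A\|_F^2},\; \frac{t}{K^2 \|A\|_{\mathrm{op}}} \right) \right),
\]
where $\|A\|_F$ and $\|A\|_{\mathrm{op}}$ are the Frobenius and operator norms and $c > 0$ is an absolute constant. The uniform version described above strengthens this by making the deviation bound hold simultaneously, e.g. over a family of matrices, which is what permits the random-effects applications.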
In this work we study the estimation of the density of a totally positive random vector. Total positivity of the distribution of a random vector implies a strong form of positive dependence between its coordinates and, in particular, it implies positive association.
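As background (assuming the notion intended is the standard MTP2 condition, which the abstract does not spell out), a density $p$ on $\mathbb{R}^d$ is multivariate totally positive of order 2 if
\[
p(x)\, p(y) \le p(x \wedge y)\, p(x \vee y) \quad \text{for all } x, y \in \mathbb{R}^d,
\]
where $\wedge$ and $\vee$ denote the coordinatewise minimum and maximum; this condition implies positive association of the coordinates.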
From a suitable integral representation of the Laplace transform of a positive semi-definite quadratic form of independent real random variables with not necessarily identical densities, a univariate integral representation is derived for the cumulative distribution function of the quadratic form.
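The paper's specific representation is not reproduced here; as a hedged illustration of the general idea of recovering a distribution function from a transform by a single univariate integral, the classical Gil-Pelaez inversion formula for a random variable $Q$ with characteristic function $\varphi_Q$ reads
\[
F_Q(x) = \frac{1}{2} - \frac{1}{\pi} \int_0^{\infty} \frac{\operatorname{Im}\!\left[ e^{-itx}\, \varphi_Q(t) \right]}{t}\, dt
\]
at continuity points $x$ of $F_Q$; the paper's representation instead starts from the Laplace transform of the quadratic form.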
We present new results for consistency of maximum likelihood estimators with a focus on multivariate mixed models. Our theory builds on the idea of using subsets of the full data to establish consistency of estimators based on the full data. It requi
The log-concave projection is an operator that maps a d-dimensional distribution P to an approximating log-concave density. Prior work by Dümbgen et al. (2011) establishes that, with suitable metrics on the underlying spaces, this projection is continuous.
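For reference (a standard formulation stated here as context, not quoted from the paper), for a distribution $P$ on $\mathbb{R}^d$ with a finite first moment that is not concentrated on any lower-dimensional affine subspace, the log-concave projection can be written as
\[
\psi^{*}(P) = \operatorname*{arg\,max}_{f \in \mathcal{F}_d} \int \log f \, dP,
\]
where $\mathcal{F}_d$ is the class of upper semi-continuous log-concave densities on $\mathbb{R}^d$; the cited continuity result concerns how $\psi^{*}(P)$ varies as $P$ varies with respect to suitable metrics.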
Principal component analysis is an important pattern recognition and dimensionality reduction tool in many applications. Principal components are computed as eigenvectors of a maximum likelihood covariance $\widehat{\Sigma}$ that approximates a population covariance matrix $\Sigma$.
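Concretely (a standard statement of what "computed as eigenvectors" means, given as context rather than quoting the paper), the principal components are the eigenvectors $v_1, \dots, v_d$ of $\widehat{\Sigma}$ in
\[
\widehat{\Sigma}\, v_j = \lambda_j v_j, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d \ge 0,
\]
and how well the leading $v_j$ estimate the corresponding eigenvectors of the population covariance $\Sigma$ is governed by the size of the perturbation $\widehat{\Sigma} - \Sigma$.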