
Admissibility of the usual confidence interval in linear regression

Added by Paul Kabaila
Publication date: 2010
Language: English





Consider a linear regression model with independent and identically normally distributed random errors. Suppose that the parameter of interest is a specified linear combination of the regression parameters. We prove that the usual confidence interval for this parameter is admissible within a broad class of confidence intervals.
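The "usual" interval in question is the classical $t$-based interval $c^\top\hat\beta \pm t_{n-p,\,1-\alpha/2}\, s\,\sqrt{c^\top(X^\top X)^{-1}c}$. As a minimal sketch (not the paper's method, just the standard construction it refers to), it can be computed as follows; the function name `usual_ci` and the simulated design are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def usual_ci(X, y, c, alpha=0.05):
    """Usual 1 - alpha confidence interval for c'beta in the model
    y = X beta + eps, with eps i.i.d. N(0, sigma^2)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y           # least-squares estimate
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)           # unbiased estimate of sigma^2
    se = np.sqrt(s2 * (c @ XtX_inv @ c))   # standard error of c'beta_hat
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - p)
    est = c @ beta_hat
    return est - t_crit * se, est + t_crit * se

# Illustrative use on simulated data: interval for the slope coefficient.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
lo, hi = usual_ci(X, y, c=np.array([0.0, 1.0]))
```

The admissibility result says that, within the class of intervals considered, no competing interval can be shorter than this one everywhere while maintaining the same coverage.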



Related research

Hannes Leeb, Paul Kabaila (2018)
In the Gaussian linear regression model (with unknown mean and variance), we show that the standard confidence set for one or two regression coefficients is admissible in the sense of Joshi (1969). This solves a long-standing open problem in mathematical statistics, and it has important implications for the performance of modern inference procedures post-model-selection or post-shrinkage, particularly in situations where the number of parameters is larger than the sample size. As a technical contribution of independent interest, we introduce a new class of conjugate priors for the Gaussian location-scale model.
Eric Gautier (2018)
This is a revision of arXiv:1105.2454v1 from 2012. It considers a variation on the STIV estimator where, instead of one conic constraint, there are as many conic constraints as moments (instruments), allowing more direct use of moderate deviations for self-normalized sums. The idea first appeared in formula (6.5) of arXiv:1105.2454v1, where some instruments can be endogenous. For reference, and to avoid confusion with the STIV estimator, this estimator should be called C-STIV.
We reexamine the classical linear regression model when the model is subject to two types of uncertainty: (i) some of the covariates are either missing or completely inaccessible, and (ii) the variance of the measurement error is undetermined and changes according to a mechanism unknown to the statistician. Following the recent theory of sublinear expectation, we propose to characterize such mean and variance uncertainty in the response variable by two specific nonlinear random variables, which encompass an infinite family of probability distributions for the response variable in the sense of (linear) classical probability theory. The approach yields a family of estimators, under various loss functions, for the regression parameter and the parameters related to model uncertainty. The consistency of the estimators is established under mild conditions on the data-generating process. Three applications are introduced to assess the quality of the approach, including a forecasting model for the S&P Index.
In this paper we consider the linear regression model $Y = SX + \varepsilon$ with functional regressors and responses. We develop new inference tools to quantify deviations of the true slope $S$ from a hypothesized operator $S_0$ with respect to the Hilbert--Schmidt norm $\|S - S_0\|^2$, as well as the prediction error $\mathbb{E}\|SX - S_0X\|^2$. Our analysis is applicable to functional time series and based on asymptotically pivotal statistics. This makes it particularly user friendly, because it avoids the choice of tuning parameters inherent in long-run variance estimation or bootstrap of dependent data. We also discuss two-sample problems as well as change point detection. Finite sample properties are investigated by means of a simulation study. Mathematically our approach is based on a sequential version of the popular spectral cut-off estimator $\hat S_N$ for $S$. It is well known that the $L^2$-minimax rates in the functional regression model, both in estimation and prediction, are substantially slower than $1/\sqrt{N}$ (where $N$ denotes the sample size) and that standard estimators for $S$ do not converge weakly to non-degenerate limits. However, we demonstrate that simple plug-in estimators - such as $\|\hat S_N - S_0\|^2$ for $\|S - S_0\|^2$ - are $\sqrt{N}$-consistent and its sequenti
Botond Szabo (2014)
We consider the problem of constructing Bayesian confidence sets for linear functionals in the inverse Gaussian white noise model. We work with a scale of Gaussian priors indexed by a regularity hyper-parameter and apply the data-driven (slightly modified) marginal likelihood empirical Bayes method for the choice of this hyper-parameter. We show by theory and simulations that the credible sets constructed by this method have sub-optimal behaviour in general. However, by assuming self-similarity the credible sets have rate-adaptive size and optimal coverage. As an application of these results, we construct $L_{\infty}$-credible bands for the true functional parameter with adaptive size and optimal coverage under the self-similarity constraint.