
Linear regression under model uncertainty

Posted by Shuzhen Yang
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





We reexamine the classical linear regression model when the model is subject to two types of uncertainty: (i) some of the covariates are either missing or completely inaccessible, and (ii) the variance of the measurement error is undetermined and changes according to a mechanism unknown to the statistician. Following the recent theory of sublinear expectation, we propose to characterize such mean and variance uncertainty in the response variable by two specific nonlinear random variables, which encompass an infinite family of probability distributions for the response variable in the sense of (linear) classical probability theory. The approach enables a family of estimators under various loss functions for the regression parameter and the parameters related to model uncertainty. The consistency of the estimators is established under mild conditions on the data generation process. Three applications are introduced to assess the quality of the approach, including a forecasting model for the S&P Index.
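
As a rough illustration of the setting only (not the paper's sublinear-expectation estimator), the sketch below fits the regression coefficient by ordinary least squares and then summarizes variance uncertainty in the residuals by a lower and an upper variance computed over rolling windows; the simulated data, the two-regime noise scale, and the window length are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# Simulated response whose noise scale changes by a mechanism the
# statistician does not observe (assumed two-regime scale for illustration).
sigma_t = np.where(np.arange(n) < n // 2, 0.5, 1.5)
y = 2.0 + 1.3 * x + sigma_t * rng.normal(size=n)

# Ordinary least squares fit of the regression parameter.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Lower and upper residual variances over rolling windows, as a crude
# stand-in for the variance-uncertainty interval of the framework.
window = 50
rolling_var = np.array([resid[i:i + window].var()
                        for i in range(n - window + 1)])
print("beta_hat        :", beta_hat)
print("variance bounds :", (rolling_var.min(), rolling_var.max()))
```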


Read also

We introduce and study a local linear nonparametric regression estimator for the censorship model. The main goal of this paper is to establish a uniform almost sure consistency result, with rate, over a compact set for the new estimate. To support our theoretical result, a simulation study has been carried out to compare the new estimate with the classical regression estimator.
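
For context, here is a minimal sketch of the classical (uncensored) local linear regression estimator that this kind of censorship-model estimator builds on; the Gaussian kernel, the bandwidth, and the simulated data are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[Y | X = x0] with a Gaussian kernel
    and bandwidth h (classical, uncensored version)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])   # local design matrix
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y) # weighted least squares
    return beta[0]                                   # fitted value at x0

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 300)
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=300)
grid = np.linspace(0.05, 0.95, 5)
print([round(local_linear(g, x, y, h=0.08), 3) for g in grid])
```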
In a regression setting with response vector $\mathbf{y} \in \mathbb{R}^n$ and given regressor vectors $\mathbf{x}_1,\ldots,\mathbf{x}_p \in \mathbb{R}^n$, a typical question is to what extent $\mathbf{y}$ is related to these regressor vectors, specifically, how well $\mathbf{y}$ can be approximated by a linear combination of them. Classical methods for this question are based on statistical models for the conditional distribution of $\mathbf{y}$, given the regressor vectors $\mathbf{x}_j$. Davies and Duembgen (2020) proposed a model-free approach in which all observation vectors $\mathbf{y}$ and $\mathbf{x}_j$ are viewed as fixed, and the quality of the least squares fit of $\mathbf{y}$ is quantified by comparing it with the least squares fit resulting from $p$ independent white noise regressor vectors. The purpose of the present note is to explain in a general context why the model-based and model-free approaches yield the same p-values, although the interpretation of these p-values differs under the two paradigms.
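
The white-noise comparison can be sketched by simulation (the note's actual procedure is analytical and yields exact p-values; the Monte Carlo version below, the inclusion of an intercept, and the simulated data are assumptions made only to illustrate the comparison idea).

```python
import numpy as np

def r_squared(y, X):
    """R^2 of the least squares fit of y on the columns of X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    fitted = X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    return 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(2)
n, p = 100, 3
X_obs = rng.normal(size=(n, p))
y = X_obs @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

# How often do p independent white-noise regressor vectors fit y at
# least as well as the observed regressors?  (Monte Carlo p-value.)
r2_obs = r_squared(y, X_obs)
r2_noise = np.array([r_squared(y, rng.normal(size=(n, p))) for _ in range(2000)])
print("observed R^2:", round(r2_obs, 3), " Monte Carlo p-value:", np.mean(r2_noise >= r2_obs))
```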
In this paper, we build a new nonparametric regression estimator with the local linear method, using the mean squared relative error as a loss function, when the data are subject to random right censoring. We establish the uniform almost sure consistency, with rate, of the proposed estimator over a compact set. Some simulations are given to show the asymptotic behavior of the estimate in different cases.
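
The relative-error loss has a convenient population solution: for a positive response, the minimizer of E[((Y - theta)/Y)^2 | X = x] is E[1/Y | X = x] / E[1/Y^2 | X = x]. The sketch below estimates this ratio with simple Nadaraya-Watson kernel weights; it is not the paper's local linear, censored-data estimator, and the data, kernel, and bandwidth are illustrative assumptions.

```python
import numpy as np

def msre_estimate(x0, x, y, h):
    """Kernel estimate of argmin_theta E[((Y - theta)/Y)^2 | X = x0],
    i.e. E[1/Y | x0] / E[1/Y^2 | x0], for a positive response.
    Nadaraya-Watson weights with a Gaussian kernel (illustrative choice)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w / y) / np.sum(w / y ** 2)

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 400)
y = (2.0 + x) * rng.lognormal(sigma=0.3, size=400)   # positive response
print([round(msre_estimate(x0, x, y, h=0.1), 3) for x0 in (0.25, 0.5, 0.75)])
```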
Recently, the well-known Liu estimator (Liu, 1993) has attracted researchers' attention for regression parameter estimation in ill-conditioned linear models. It has also been argued that imposing a sub-space hypothesis restriction on the parameters improves estimation by shrinking toward non-sample information. Chang (2015) proposed the almost unbiased Liu estimator (AULE) for binary logistic regression. In this article, some improved Liu-type estimators, namely the restricted AULE, preliminary test AULE, Stein-type shrinkage AULE and its positive part, are proposed for estimating the regression parameters of the binary logistic regression model, building on the work of Chang (2015). The performance of the newly defined estimators is analysed through some numerical results. A real data example is also provided to support the findings.
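
For reference, the classical Liu (1993) estimator for the linear model, which the logistic-regression AULE variants build on, is beta_d = (X'X + I)^{-1}(X'X + dI) beta_OLS with 0 < d < 1. The sketch below applies it to simulated ill-conditioned data; it is not an implementation of the proposed shrinkage AULE estimators, and the data and choice of d are assumptions.

```python
import numpy as np

def liu_estimator(X, y, d):
    """Classical Liu (1993) estimator for the linear model:
    beta_d = (X'X + I)^{-1} (X'X + d*I) beta_OLS, with 0 < d < 1."""
    XtX = X.T @ X
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    I = np.eye(X.shape[1])
    return np.linalg.solve(XtX + I, (XtX + d * I) @ beta_ols)

rng = np.random.default_rng(4)
n = 200
z = rng.normal(size=n)
# Two nearly collinear columns -> ill-conditioned X'X.
X = np.column_stack([z, z + 0.01 * rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)
print("Liu estimate (d = 0.5):", liu_estimator(X, y, d=0.5))
```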
In functional linear regression, the slope ``parameter is a function. Therefore, in a nonparametric context, it is determined by an infinite number of unknowns. Its estimation involves solving an ill-posed problem and has points of contact with a ran ge of methodologies, including statistical smoothing and deconvolution. The standard approach to estimating the slope function is based explicitly on functional principal components analysis and, consequently, on spectral decomposition in terms of eigenvalues and eigenfunctions. We discuss this approach in detail and show that in certain circumstances, optimal convergence rates are achieved by the PCA technique. An alternative approach based on quadratic regularisation is suggested and shown to have advantages from some points of view.
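
A minimal sketch of the PCA-based (spectral truncation) estimator for the functional linear model Y = a + \int b(t) X(t) dt + eps, with curves observed on a grid: the slope is expanded in the empirical eigenfunctions and each coefficient is cov(Y, score_j)/lambda_j. The simulated Fourier curves, grid size, and truncation level J are assumptions for illustration, not the tuning analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 300, 100
t = np.linspace(0, 1, m)
dt = t[1] - t[0]

# Simulated curves built from a few Fourier modes; the true slope lies in their span.
basis = np.array([np.sin((k + 1) * np.pi * t) for k in range(4)])   # 4 x m
scores = rng.normal(size=(n, 4)) * np.array([1.0, 0.7, 0.4, 0.2])
X = scores @ basis                                  # n x m observed curves
b_true = np.sin(np.pi * t)
y = X @ b_true * dt + 0.1 * rng.normal(size=n)

Xc, yc = X - X.mean(0), y - y.mean()
C = (Xc.T @ Xc) / n * dt                            # discretized covariance operator
eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]      # descending order
phi = eigvec / np.sqrt(dt)                          # L2-normalized eigenfunctions
J = 4                                               # truncation level (assumed)
xi = Xc @ phi[:, :J] * dt                           # principal component scores
b_coef = (xi * yc[:, None]).mean(0) / eigval[:J]    # cov(Y, xi_j) / lambda_j
b_hat = phi[:, :J] @ b_coef
print("max |b_hat - b_true| =", np.max(np.abs(b_hat - b_true)))
```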