
Asymptotic inference in some heteroscedastic regression models with long memory design and errors

Submitted by Hira L. Koul
Publication date: 2008
Research field: Mathematical Statistics
Paper language: English





This paper discusses asymptotic distributions of various estimators of the underlying parameters in some regression models with long memory (LM) Gaussian design and nonparametric heteroscedastic LM moving-average errors. In the simple linear regression model, the first-order asymptotic distribution of the least-squares estimator of the slope parameter is observed to be degenerate. However, in the second order, this estimator is $n^{1/2}$-consistent and asymptotically normal for $h+H<3/2$, and non-normal otherwise, where $h$ and $H$ are the LM parameters of the design and error processes, respectively. The finite-dimensional asymptotic distributions of a class of kernel-type estimators of the conditional variance function $\sigma^2(x)$ in a more general heteroscedastic regression model are found to be normal whenever $H<(1+h)/2$, and non-normal otherwise. In addition, in this general model, $\log(n)$-consistency of the local Whittle estimator of $H$ based on pseudo-residuals and consistency of a cross-validation type estimator of $\sigma^2(x)$ are established. All of these findings are then used to propose a lack-of-fit test of a parametric regression model, with an application to some currency exchange rate data that exhibit LM.
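As a concrete illustration of a kernel-type estimator of the conditional variance function, the following minimal sketch smooths squared pseudo-residuals from a pilot least-squares fit with a Nadaraya-Watson-type weighting. It is not the paper's estimator: the Gaussian kernel, the fixed bandwidth `bw`, and the simulated i.i.d. errors (the paper's setting has long-memory design and errors) are assumptions made for the example.

```python
import numpy as np

def kernel_variance_estimate(x_grid, x, y, m_hat, bw):
    """Nadaraya-Watson-type smoother of squared pseudo-residuals.

    x, y   : design points and responses
    m_hat  : pilot estimate of the regression function at the design points
    bw     : kernel bandwidth (fixed here; a data-driven choice is also possible)
    """
    resid2 = (y - m_hat) ** 2                    # squared pseudo-residuals
    u = (x_grid[:, None] - x[None, :]) / bw      # scaled distances to the grid
    w = np.exp(-0.5 * u ** 2)                    # Gaussian kernel weights
    return (w * resid2).sum(axis=1) / w.sum(axis=1)

# toy data from a simple linear model whose error variance depends on x
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
true_sd = 0.5 + 0.3 * x ** 2
y = 2.0 * x + true_sd * rng.normal(size=n)
m_hat = np.polyval(np.polyfit(x, y, 1), x)       # pilot least-squares fit
grid = np.linspace(-2.0, 2.0, 9)
print(kernel_variance_estimate(grid, x, y, m_hat, bw=0.3))
print((0.5 + 0.3 * grid ** 2) ** 2)              # true sigma^2 on the grid
```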




Read also

For the class of Gauss-Markov processes we study the problem of asymptotic equivalence of the nonparametric regression model with errors given by the increments of the process and the continuous time model, where a whole path of a sum of a deterministic signal and the Gauss-Markov process can be observed. In particular we provide sufficient conditions such that asymptotic equivalence of the two models holds for functions from a given class, and we verify these for the special cases of Sobolev ellipsoids and Hölder classes with smoothness index $> 1/2$ under mild assumptions on the Gauss-Markov process at hand. To derive these results, we develop an explicit characterization of the reproducing kernel Hilbert space associated with the Gauss-Markov process, which hinges on a characterization of such processes by a property of the corresponding covariance kernel introduced by Doob. In order to demonstrate that the given assumptions on the Gauss-Markov process are in some sense sharp, we also show that asymptotic equivalence fails to hold for the special case of the Brownian bridge. Our results demonstrate that the well-known asymptotic equivalence of the Gaussian white noise model and the nonparametric regression model with independent standard normal distributed errors can be extended to a broad class of models with dependent data.
Sylvain Arlot, 2010
We consider the problem of choosing between several models in least-squares regression with heteroscedastic data. We prove that any penalization procedure is suboptimal when the penalty is a function of the dimension of the model, at least for some typical heteroscedastic model selection problems. In particular, Mallows' $C_p$ is suboptimal in this framework. On the contrary, optimal model selection is possible with data-driven penalties such as resampling or $V$-fold penalties. Therefore, it is worth estimating the shape of the penalty from data, even at the price of a higher computational cost. Simulation experiments illustrate the existence of a trade-off between statistical accuracy and computational complexity. As a conclusion, we sketch some rules for choosing a penalty in least-squares regression, depending on what is known about possible variations of the noise level.
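To make the contrast with dimension-based penalties concrete, the short sketch below selects the dimension of a polynomial model by plain V-fold cross-validation on heteroscedastic data. This is only a stand-in for the resampling and V-fold penalties studied by Arlot, not his construction: the polynomial model collection, the noise pattern, and $V=5$ are assumptions chosen for illustration.

```python
import numpy as np

def vfold_cv_risk(x, y, dim, V=5, rng=None):
    """V-fold cross-validation risk of a polynomial model with `dim` coefficients."""
    rng = rng or np.random.default_rng(0)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, V)
    err = 0.0
    for k in range(V):
        test = folds[k]
        train = np.setdiff1d(idx, test)
        coef = np.polyfit(x[train], y[train], dim - 1)       # fit on training folds
        err += np.mean((y[test] - np.polyval(coef, x[test])) ** 2)
    return err / V

# heteroscedastic toy data: the noise level jumps at x = 0
rng = np.random.default_rng(1)
n = 400
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + (0.1 + 0.9 * (x > 0)) * rng.normal(size=n)

dims = range(1, 11)
risks = [vfold_cv_risk(x, y, d, V=5, rng=np.random.default_rng(2)) for d in dims]
print("selected dimension:", dims[int(np.argmin(risks))])
```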
We deal with a general class of extreme-value regression models introduced by Barreto-Souza and Vasconcellos (2011). Our goal is to derive an adjusted likelihood ratio statistic that is approximately distributed as $\chi^2$ with a high degree of accuracy. Although the adjusted statistic requires more computational effort than its unadjusted counterpart, it is shown that the adjustment term has a simple compact form that can be easily implemented in standard statistical software. Further, we compare the finite sample performance of the three classical tests (likelihood ratio, Wald, and score), the gradient test that has been recently proposed by Terrell (2002), and the adjusted likelihood ratio test obtained in this paper. Our simulations favor the latter. Applications of our results are presented. Key words: Extreme-value regression; Gradient test; Gumbel distribution; Likelihood ratio test; Nonlinear models; Score test; Small-sample adjustments; Wald test.
We study the asymptotic properties of bridge estimators in sparse, high-dimensional, linear regression models when the number of covariates may increase to infinity with the sample size. We are particularly interested in the use of bridge estimators to distinguish between covariates whose coefficients are zero and covariates whose coefficients are nonzero. We show that under appropriate conditions, bridge estimators correctly select covariates with nonzero coefficients with probability converging to one and that the estimators of nonzero coefficients have the same asymptotic distribution that they would have if the zero coefficients were known in advance. Thus, bridge estimators have an oracle property in the sense of Fan and Li [J. Amer. Statist. Assoc. 96 (2001) 1348--1360] and Fan and Peng [Ann. Statist. 32 (2004) 928--961]. In general, the oracle property holds only if the number of covariates is smaller than the sample size. However, under a partial orthogonality condition in which the covariates of the zero coefficients are uncorrelated or weakly correlated with the covariates of nonzero coefficients, we show that marginal bridge estimators can correctly distinguish between covariates with nonzero and zero coefficients with probability converging to one even when the number of covariates is greater than the sample size.
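A rough numerical sketch of a bridge-type fit is given below. It only minimizes a penalized least-squares criterion with an $L_\gamma$ penalty ($\gamma = 0.5$) using a generic derivative-free optimizer and then thresholds small coefficients; the penalty level, the threshold, and the low-dimensional design are arbitrary assumptions, and the nonconvex objective may have local minima, so this is not the (marginal) bridge estimator analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def bridge_objective(beta, X, y, lam, gamma):
    """Least squares plus bridge penalty lam * sum |beta_j|^gamma (0 < gamma < 1)."""
    resid = y - X @ beta
    return 0.5 * resid @ resid + lam * np.sum(np.abs(beta) ** gamma)

# toy sparse setting: only the first two coefficients are nonzero
rng = np.random.default_rng(3)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5] + [0.0] * (p - 2))
y = X @ beta_true + rng.normal(size=n)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]          # starting point
res = minimize(bridge_objective, beta_ols,
               args=(X, y, 5.0, 0.5), method="Powell")    # derivative-free search
beta_hat = np.where(np.abs(res.x) < 1e-2, 0.0, res.x)     # crude thresholding to zero
print(beta_hat)
```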
This paper deals with the estimation of hidden periodicities in a non-linear regression model with stationary noise displaying cyclical dependence. Consistency and asymptotic normality are established for the least-squares estimates.
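For intuition, the sketch below recovers a single hidden periodicity by locating the dominant periodogram frequency and then fitting amplitude and phase by ordinary least squares. The single sinusoid, the AR(1) noise, and the two-step periodogram/least-squares scheme are assumptions made for the example; the paper's setting (general non-linear regression with cyclically dependent stationary noise) is broader.

```python
import numpy as np

def estimate_periodicity(y):
    """Locate the dominant periodogram frequency, then fit amplitude/phase by least squares."""
    n = len(y)
    t = np.arange(n)
    freqs = np.fft.rfftfreq(n)                       # frequencies in cycles per sample
    power = np.abs(np.fft.rfft(y - y.mean())) ** 2   # periodogram (up to scaling)
    f_hat = freqs[np.argmax(power[1:]) + 1]          # skip the zero frequency
    # linear least squares for a*cos + b*sin at the estimated frequency
    Z = np.column_stack([np.cos(2 * np.pi * f_hat * t),
                         np.sin(2 * np.pi * f_hat * t)])
    a, b = np.linalg.lstsq(Z, y - y.mean(), rcond=None)[0]
    return f_hat, np.hypot(a, b)                     # frequency and amplitude estimates

# toy signal: one hidden periodicity plus autocorrelated AR(1) noise
rng = np.random.default_rng(4)
n = 1024
t = np.arange(n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal()
y = 1.5 * np.cos(2 * np.pi * 0.0625 * t + 0.8) + noise
print(estimate_periodicity(y))                       # roughly (0.0625, 1.5)
```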