
Second Moment Estimator for An AR(1) Model Driven by A Long Memory Gaussian Noise

Posted by Li Tian
Publication date: 2020
Research field: Mathematical statistics
Paper language: English





In this paper, we consider an inference problem for the first order autoregressive process driven by a long memory stationary Gaussian process. Suppose that the covariance function of the noise can be expressed as $|k|^{2H-2}$ times a function slowly varying at infinity. The fractional Gaussian noise, the fractional ARIMA model, and some other Gaussian noises are special examples that satisfy this assumption. We propose a second moment estimator, prove its strong consistency, and give its asymptotic distribution. Moreover, when the limit distribution is Gaussian, we give an upper Berry-Esseen bound by means of the fourth moment theorem.
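To make the setting concrete, here is a minimal Python sketch of a second-moment approach in the fractional Gaussian noise special case, assuming the memory parameter $H$ is known: the empirical mean square of the observations is matched to the stationary second moment implied by a candidate autoregressive coefficient. The moment equation, the truncation, and the fGn specialization are illustrative assumptions; the paper's exact estimator may differ.

```python
import numpy as np
from scipy.optimize import brentq

def fgn_cov(k, H):
    """Autocovariance of fractional Gaussian noise at (integer) lag k."""
    k = np.abs(k)
    return 0.5 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H) + np.abs(k - 1.0) ** (2 * H))

def simulate_ar1_fgn(theta, H, n, rng):
    """AR(1) driven by fGn, using a Cholesky factor of the noise covariance."""
    lags = np.arange(n)
    noise = np.linalg.cholesky(fgn_cov(lags[:, None] - lags[None, :], H)) @ rng.standard_normal(n)
    x = np.empty(n)
    x[0] = noise[0]
    for t in range(1, n):
        x[t] = theta * x[t - 1] + noise[t]
    return x

def stationary_second_moment(theta, H, trunc=1000):
    """E[X_t^2] for the stationary solution X_t = sum_j theta^j xi_{t-j} (truncated)."""
    w = theta ** np.arange(trunc)
    j = np.arange(trunc)
    return w @ fgn_cov(j[:, None] - j[None, :], H) @ w

# Second moment estimator: match the empirical mean square to its stationary value.
rng = np.random.default_rng(0)
x = simulate_ar1_fgn(theta=0.5, H=0.7, n=2000, rng=rng)
m_hat = np.mean(x ** 2)
theta_hat = brentq(lambda th: stationary_second_moment(th, H=0.7) - m_hat, 0.0, 0.95)
print(theta_hat)
```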




Read also

Chunhao Cai (2017)
This paper deals with the maximum likelihood estimator for the mean-reverting parameter of a first order autoregressive model with exogenous variables, which are stationary Gaussian noises (colored noise). Using the method of the Laplace transform, both the asymptotic properties and the asymptotic design problem of the maximum likelihood estimator are investigated. The numerical simulation results confirm the theoretical analysis and show that the proposed maximum likelihood estimator performs well in finite samples.
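For intuition, a compact sketch of Gaussian maximum likelihood for the AR(1) coefficient when the driving stationary noise has a known covariance matrix; the exogenous variables of the paper are omitted, and `Sigma_noise` and the initialization are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ar1_mle(x, Sigma_noise):
    """Gaussian MLE of the AR(1) coefficient theta when the noise covariance
    Sigma_noise is known. Since x = A(theta)^{-1} xi with det A = 1, the
    likelihood of x is the noise density evaluated at xi = A x; the log-det
    term is constant in theta and dropped."""
    x = np.asarray(x, dtype=float)
    n = x.size

    def neg_loglik(theta):
        A = np.eye(n) - theta * np.eye(n, k=-1)   # x_t - theta * x_{t-1} = xi_t
        xi = A @ x
        return 0.5 * xi @ np.linalg.solve(Sigma_noise, xi)

    res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
    return res.x
```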
We study the parameter estimation problem of the Vasicek model driven by sub-fractional Brownian processes from discrete observations. Let $\{S_t^H, t \ge 0\}$ denote a sub-fractional Brownian motion with Hurst parameter $1/2 < H < 1$. The study proceeds as follows: first, the two unknown parameters in the model are estimated by the least squares method. Second, the strong consistency and the asymptotic distribution of the estimators are studied, respectively. Finally, our estimators are validated by numerical simulation.
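As an illustration, a least squares sketch under one common parameterization, $dX_t = (\mu - \theta X_t)\,dt + dS_t^H$, regressing the increments on the current state over the discrete grid; the paper's exact parameterization and estimators may differ.

```python
import numpy as np

def vasicek_lse(x, dt):
    """Least squares estimators for the assumed form
    dX_t = (mu - theta * X_t) dt + dS_t^H, from observations x on a grid of
    step dt: regress the increments X_{i+1} - X_i on dt * (1, X_i)."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    A = dt * np.column_stack([np.ones(x.size - 1), x[:-1]])
    (mu_hat, neg_theta), *_ = np.linalg.lstsq(A, dx, rcond=None)
    return mu_hat, -neg_theta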
We consider stationary processes with long memory which are non-Gaussian and represented as Hermite polynomials of a Gaussian process. We focus on the corresponding wavelet coefficients and study the asymptotic behavior of the sum of their squares, since this sum is often used for estimating the long-memory parameter. We show that the limit is not Gaussian but can be expressed using the non-Gaussian Rosenblatt process, defined as a Wiener-Itô integral of order 2. This happens even if the original process is defined through a Hermite polynomial of order higher than 2.
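A short sketch of the statistic in question, assuming PyWavelets: the per-level average of squared detail coefficients, whose log-scale slope is the usual input to wavelet-based memory-parameter estimation. The wavelet choice and the slope-to-parameter mapping are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energies(x, wavelet="db4", levels=6):
    """Mean of squared detail coefficients at each level; the slope of
    log2(energy) against scale is the standard wavelet regression statistic
    for the long-memory parameter."""
    coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet, level=levels)
    # coeffs = [approximation, detail at level `levels`, ..., detail at level 1]
    details = coeffs[1:]
    scales = np.arange(levels, 0, -1)
    energies = np.array([np.mean(d ** 2) for d in details])
    slope = np.polyfit(scales, np.log2(energies), 1)[0]
    return energies, slope
```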
The goal of this paper is to show that a single robust estimator of the mean of a multivariate Gaussian distribution can enjoy five desirable properties. First, it is computationally tractable in the sense that it can be computed in a time which is at most polynomial in dimension, sample size and the logarithm of the inverse of the contamination rate. Second, it is equivariant by translations, uniform scaling and orthogonal transformations. Third, it has a high breakdown point equal to $0.5$, and a nearly-minimax-rate-breakdown point approximately equal to $0.28$. Fourth, it is minimax rate optimal, up to a logarithmic factor, when data consists of independent observations corrupted by adversarially chosen outliers. Fifth, it is asymptotically efficient when the rate of contamination tends to zero. The estimator is obtained by an iterative reweighting approach. Each sample point is assigned a weight that is iteratively updated by solving a convex optimization problem. We also establish a dimension-free non-asymptotic risk bound for the expected error of the proposed estimator. It is the first result of this kind in the literature and involves only the effective rank of the covariance matrix. Finally, we show that the obtained results can be extended to sub-Gaussian distributions, as well as to the cases of unknown rate of contamination or unknown covariance matrix.
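The paper's weights come from solving a convex program at each step; the following simplified Python sketch only mirrors the iterative reweighting structure, with Huber-style weights. The constant `c` and the median-based scale are illustrative choices, not the paper's update rule.

```python
import numpy as np

def reweighted_mean(X, n_iter=20, c=2.0):
    """Iteratively reweighted mean: points far from the current estimate are
    down-weighted, and the estimate is recomputed as a weighted average."""
    X = np.asarray(X, dtype=float)
    mu = np.median(X, axis=0)                 # robust initialization
    for _ in range(n_iter):
        d = np.linalg.norm(X - mu, axis=1)    # distances to current center
        scale = np.median(d) + 1e-12
        w = np.minimum(1.0, c * scale / np.maximum(d, 1e-12))  # Huber-type weights
        mu = (w / w.sum()) @ X                # weighted mean update
    return mu
```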
Yufei Yi, Matey Neykov (2021)
The Chebyshev or $\ell_\infty$ estimator is an unconventional alternative to ordinary least squares for solving linear regressions. It is defined as the minimizer of the $\ell_\infty$ objective function
\begin{align*}
\hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}} \|\boldsymbol{Y} - \mathbf{X}\boldsymbol{\beta}\|_\infty.
\end{align*}
The asymptotic distribution of the Chebyshev estimator under a fixed number of covariates was recently studied (Knight, 2020), yet finite sample guarantees and generalizations to high-dimensional settings remain open. In this paper, we develop non-asymptotic upper bounds on the estimation error $\|\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}^*\|_2$ for a Chebyshev estimator $\hat{\boldsymbol{\beta}}$, in a regression setting with uniformly distributed noise $\varepsilon_i \sim U([-a,a])$ where $a$ is either known or unknown. With relatively mild assumptions on the (random) design matrix $\mathbf{X}$, we can bound the error rate by $\frac{C_p}{n}$ with high probability, for some constant $C_p$ depending on the dimension $p$ and the law of the design. Furthermore, we illustrate that there exist designs for which the Chebyshev estimator is (nearly) minimax optimal. In addition, we show that Chebyshev's LASSO has advantages over the regular LASSO in high dimensional situations, provided that the noise is uniform. Specifically, we argue that it achieves a much faster rate of estimation under certain assumptions on the growth rate of the sparsity level and the ambient dimension with respect to the sample size.
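The $\ell_\infty$ minimization itself is a linear program over $(\boldsymbol{\beta}, t)$; a minimal sketch with `scipy.optimize.linprog` follows. It reproduces the definition displayed above, not the paper's analysis.

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_regression(X, y):
    """ell_infinity regression as a linear program over (beta, t):
    minimize t subject to -t <= y - X @ beta <= t."""
    n, p = X.shape
    c = np.r_[np.zeros(p), 1.0]                      # objective: the bound t
    A_ub = np.block([[ X, -np.ones((n, 1))],         #  X beta - t <= y
                     [-X, -np.ones((n, 1))]])        # -X beta - t <= -y
    b_ub = np.r_[y, -y]
    bounds = [(None, None)] * p + [(0, None)]        # beta free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p]
```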