
Sufficient and insufficient conditions for the stochastic convergence of Cesàro means

Added by Aurélien Bibaut
Publication date: 2020
Research language: English





We study the stochastic convergence of the Cesàro mean of a sequence of random variables. These arise naturally in statistical problems that have a sequential component, where the sequence of random variables is typically derived from a sequence of estimators computed on data. We show that establishing a rate of convergence in probability for a sequence is not sufficient in general to establish a rate in probability for its Cesàro mean. We also present several sets of conditions on the sequence of random variables that are sufficient to guarantee a rate of convergence for its Cesàro mean. We identify common settings in which these sets of conditions hold.
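For reference, the Cesàro mean of a sequence of random variables $(X_i)_{i \ge 1}$ and a standard notion of a rate of convergence in probability can be written as follows (these are textbook definitions, not notation taken from the paper itself):

$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad X_n = O_P(r_n) \iff \forall\, \varepsilon > 0\ \exists\, M < \infty:\ \limsup_{n \to \infty} \Pr\bigl(|X_n| > M\, r_n\bigr) \le \varepsilon.$$

In the sequential settings mentioned above, $X_i$ would typically be the error of an estimator computed from the data available at step $i$, and the question is what rate $\bar{X}_n$ inherits from the rate of $X_n$.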

Related research

In this contribution we are interested in proving that a given observation-driven model is identifiable. In the case of a GARCH(p, q) model, a simple sufficient condition has been established in [1] for showing the consistency of the quasi-maximum likelihood estimator. It turns out that this condition applies to a much larger class of observation-driven models, which we call the class of linearly observation-driven models. This class includes standard integer-valued observation-driven time series, such as the log-linear Poisson GARCH or the NBIN-GARCH models.
Hisayuki Hara, 2007
In this article we provide some nonnegative and positive estimators of the mean squared errors (MSEs) for shrinkage estimators of multivariate normal means. The proposed estimators are shown to improve on the uniformly minimum variance unbiased estimator (UMVUE) under a quadratic loss criterion. A similar improvement is also obtained for the estimators of the MSE matrices of shrinkage estimators. We also apply the proposed estimators of the MSE matrix to form confidence sets centered at shrinkage estimators and show their usefulness through numerical experiments.
For estimating a lower-bounded location or mean parameter for a symmetric and log-concave density, we investigate the frequentist performance of the $100(1-\alpha)\%$ Bayesian HPD credible set associated with priors which are truncations of flat priors onto the restricted parameter space. Various new properties are obtained. Namely, we identify precisely where the minimum coverage is obtained and we show that this minimum coverage is bounded between $1-\frac{3\alpha}{2}$ and $1-\frac{3\alpha}{2}+\frac{\alpha^2}{1+\alpha}$, with the lower bound $1-\frac{3\alpha}{2}$ improving (for $\alpha \leq 1/3$) on the previously established ([9]; [8]) lower bound $\frac{1-\alpha}{1+\alpha}$. Several illustrative examples are given.
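As a quick numerical illustration of these bounds (our own arithmetic, not an example taken from that paper): with $\alpha = 0.05$, the minimum coverage is guaranteed to lie between $1-\frac{3\alpha}{2} = 0.925$ and $1-\frac{3\alpha}{2}+\frac{\alpha^2}{1+\alpha} \approx 0.9274$, whereas the earlier lower bound gives only $\frac{1-\alpha}{1+\alpha} \approx 0.9048$.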
We study the convergence properties of a collapsed Gibbs sampler for Bayesian vector autoregressions with predictors, or exogenous variables. The Markov chain generated by our algorithm is shown to be geometrically ergodic regardless of whether the number of observations in the underlying vector autoregression is small or large in comparison to its order and dimension. In a convergence complexity analysis, we also give conditions under which the geometric ergodicity is asymptotically stable as the number of observations tends to infinity. Specifically, the geometric convergence rate is shown to be bounded away from unity asymptotically, either almost surely or with probability tending to one, depending on what is assumed about the data-generating process. This result is one of the first of its kind for practically relevant Markov chain Monte Carlo algorithms. Our convergence results hold under close to arbitrary model misspecification.
In functional linear regression, the slope parameter is a function. Therefore, in a nonparametric context, it is determined by an infinite number of unknowns. Its estimation involves solving an ill-posed problem and has points of contact with a range of methodologies, including statistical smoothing and deconvolution. The standard approach to estimating the slope function is based explicitly on functional principal components analysis and, consequently, on spectral decomposition in terms of eigenvalues and eigenfunctions. We discuss this approach in detail and show that in certain circumstances, optimal convergence rates are achieved by the PCA technique. An alternative approach based on quadratic regularisation is suggested and shown to have advantages from some points of view.
