
On the Almost Sure Central Limit Theorem for Vector Martingales: Convergence of Moments and Statistical Applications

Authors: Bernard Bercu
Publication date: 2008
Language: English
Added by: Guy Fayolle





We investigate the almost sure asymptotic properties of vector martingale transforms. Assuming appropriate regularity conditions on both the increasing process and the moments of the martingale, we prove that the normalized moments of any even order converge in the almost sure central limit theorem for martingales. A conjecture on almost sure upper bounds under broader hypotheses is formulated. The theoretical results are supported by examples borrowed from statistical applications, including linear autoregressive models and branching processes with immigration, for which new asymptotic properties are established for estimation and prediction errors.
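To give a concrete feel for the moment statement in the scalar case, here is a minimal simulation sketch (an illustration under assumed conditions, not the paper's construction): for a stable AR(1) model driven by Gaussian noise, the least-squares error martingale is $M_n=\sum_{k=1}^{n} X_{k-1}\varepsilon_k$ with increasing process $\langle M\rangle_n=\sigma^2\sum_{k=1}^{n} X_{k-1}^2$, and the logarithmically weighted averages of the even powers of $M_n/\sqrt{\langle M\rangle_n}$ are expected to approach the corresponding Gaussian moments $(2p-1)!!$. The parameter values and the averaging implementation below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n = 0.5, 1.0, 200_000   # illustrative parameters

# Stable AR(1): X_k = theta * X_{k-1} + eps_k, started at stationarity.
eps = rng.normal(0.0, sigma, n)
X = np.empty(n)
X[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - theta**2))
for k in range(1, n):
    X[k] = theta * X[k - 1] + eps[k]

# Martingale transform M_n = sum_{k<=n} X_{k-1} eps_k and its increasing process.
M = np.cumsum(X[:-1] * eps[1:])
bracket = sigma**2 * np.cumsum(X[:-1] ** 2)
Z = M / np.sqrt(bracket)              # normalized martingale at each time

# Logarithmic averages of even powers versus the Gaussian moments (2p-1)!!.
w = 1.0 / np.arange(1, Z.size + 1)
for p in (1, 2, 3):
    log_avg = np.sum(w * Z ** (2 * p)) / w.sum()
    gauss = int(np.prod(np.arange(1, 2 * p, 2)))   # 1, 3, 15
    print(f"order {2*p}: log-average {log_avg:.3f}  (Gaussian moment {gauss})")
```

Because the averaging is logarithmic, the agreement is only rough at moderate sample sizes.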



Related research

E. Carlen, A. Soffer (2011)
We prove, for the rescaled convolution map $f \to f \circledast f$, propagation of polynomial, exponential and Gaussian localization. The Gaussian localization is then used to prove an optimal bound on the rate of entropy production by this map. As an application we prove that the convergence in the CLT occurs at the optimal rate $1/\sqrt{n}$ in the entropy (and $L^1$) sense, for distributions with finite fourth moment.
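For orientation, a standard convention not quoted from the abstract: when $X$ and $Y$ are i.i.d. with density $f$, the rescaled convolution is usually taken to be the density of $(X+Y)/\sqrt{2}$, that is, $(f \circledast f)(x) = \sqrt{2}\int_{\mathbb{R}} f(\sqrt{2}\,x - y)\, f(y)\, \mathrm{d}y$, so that iterating the map tracks the normalized partial sums appearing in the classical CLT.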
We give a new proof of the classical Central Limit Theorem, in the Mallows ($L^r$-Wasserstein) distance. Our proof is elementary in the sense that it does not require complex analysis, but rather makes use of a simple subadditive inequality related to this metric. The key is to analyse the case where equality holds. We provide some results concerning rates of convergence. We also consider convergence to stable distributions, and obtain a bound on the rate of such convergence.
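For reference, a standard definition not quoted from the paper: for probability measures $\mu$ and $\nu$ on $\mathbb{R}$ with finite $r$-th moments, the Mallows ($L^r$-Wasserstein) distance is $d_r(\mu,\nu) = \inf\{(\mathbb{E}|X-Y|^r)^{1/r} : X\sim\mu,\ Y\sim\nu\}$, the infimum running over all couplings $(X,Y)$ with the prescribed marginals; convergence in $d_r$ is equivalent to convergence in distribution together with convergence of the $r$-th absolute moments.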
We establish a central limit theorem for (a sequence of) multivariate martingales whose dimension potentially grows with the length $n$ of the martingale. Consequences of the results are Gaussian couplings and a multiplier bootstrap for the maximum of a multivariate martingale whose dimensionality $d$ can be as large as $e^{n^c}$ for some $c>0$. We also develop new anti-concentration bounds for the maximum component of a high-dimensional Gaussian vector, which we believe are of independent interest. The results are applicable to a variety of settings. We fully develop their use for the estimation of context tree models (or variable length Markov chains) for discrete stationary time series. Specifically, we provide a bootstrap-based rule to tune several regularization parameters in a theoretically valid Lepski-type method. Such a bootstrap-based approach accounts for the correlation structure and leads to potentially smaller penalty choices, which in turn improve the estimation of the transition probabilities.
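As a rough illustration of the multiplier-bootstrap idea for the maximum of a high-dimensional sum (a generic sketch, not the paper's procedure; the data-generating choices and parameter values below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 2000                      # sample size and (large) dimension

# Illustrative increments (i.i.d. rows with heterogeneous scales here).
xi = rng.standard_normal((n, d)) * rng.uniform(0.5, 2.0, size=d)

# Statistic of interest: max_j |n^{-1/2} sum_i xi_{ij}|.
stat = np.abs(xi.sum(axis=0) / np.sqrt(n)).max()

# Gaussian multiplier bootstrap: reweight increments by i.i.d. N(0,1) multipliers.
B = 1000
boot = np.empty(B)
for b in range(B):
    e = rng.standard_normal(n)
    boot[b] = np.abs((e[:, None] * xi).sum(axis=0) / np.sqrt(n)).max()

# Bootstrap critical value at the 5% level.
print("observed max:", round(stat, 3))
print("bootstrap 95% quantile:", round(np.quantile(boot, 0.95), 3))
```

The multiplier resampling preserves the cross-sectional correlation of the increments, which is what allows data-driven (and potentially smaller) critical values than worst-case bounds.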
A Marcinkiewicz strong law of large numbers, $n^{-\frac{1}{p}}\sum_{k=1}^{n} (d_{k}- d)\rightarrow 0$ almost surely with $p\in(1,2)$, is developed for products $d_k=\prod_{r=1}^{s} x_k^{(r)}$, where the $x_k^{(r)} = \sum_{l=-\infty}^{\infty}c_{k-l}^{(r)}\xi_l^{(r)}$ are two-sided linear processes with coefficients $\{c_l^{(r)}\}_{l\in \mathbb{Z}}$ and i.i.d. zero-mean innovations $\{\xi_l^{(r)}\}_{l\in \mathbb{Z}}$. The decay of the coefficients $c_l^{(r)}$ as $|l|\to\infty$ can be slow enough for $\{x_k^{(r)}\}$ to have long memory, while $\{d_k\}$ can have heavy tails. The long-range dependence and heavy tails of $\{d_k\}$ are handled simultaneously, and a decoupling property shows that the convergence rate is dictated by the worse of long-range dependence and heavy tails, but not their combination. The results provide a means to estimate how much (if any) long-range dependence and heavy tails a sequential data set possesses, which is done for real financial data. All of the stocks we considered had some degree of heavy tails, and the majority also had long-range dependence. The Marcinkiewicz strong law of large numbers is also extended to the multivariate linear process case.
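A toy numerical check of the Marcinkiewicz normalization in the much simpler i.i.d. heavy-tailed case (not the linear-process setting of the paper; the distribution and parameter values are illustrative): with Pareto($\alpha$) observations and $\alpha\in(1,2)$, the scaled centered sums $n^{-1/p}\sum_{k\le n}(d_k-\mathbb{E}\,d_1)$ should shrink for $p<\alpha$ and fail to do so for $p>\alpha$.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.5                         # tail index: mean exists, variance does not
mean = alpha / (alpha - 1.0)        # E[d_k] for Pareto(alpha) on [1, inf)

# i.i.d. Pareto(alpha) draws via inverse transform (1-U in (0,1] avoids zero).
u = rng.random(10**6)
d = (1.0 - u) ** (-1.0 / alpha)

csum = np.cumsum(d - mean)
ns = np.array([10**3, 10**4, 10**5, 10**6])

# Classical Marcinkiewicz-Zygmund: n^{-1/p} sum_{k<=n}(d_k - mean) -> 0 a.s. iff p < alpha.
for p in (1.2, 1.4, 1.8):
    vals = csum[ns - 1] / ns ** (1.0 / p)
    print(f"p = {p}:", np.round(vals, 3))
```

This is a single-path illustration, so the trend is only approximate; the decoupling result in the paper concerns the harder case where heavy tails and long memory are present at the same time.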
Under the Kolmogorov--Smirnov metric, an upper bound on the rate of convergence to the Gaussian distribution is obtained for linear statistics of random matrix ensembles with Gaussian, Laguerre, and Jacobi weights. The main lemma gives an estimate for the characteristic functions of the linear statistics; this estimate is uniform over a growing interval. The proof of the lemma relies on the Riemann--Hilbert approach.