
Empirical and multiplier bootstraps for suprema of empirical processes of increasing complexity, and related Gaussian couplings

Posted by: Denis Chetverikov
Publication date: 2015
Research field: Mathematical Statistics
Language: English

We derive strong approximations to the supremum of the non-centered empirical process indexed by a possibly unbounded VC-type class of functions by the suprema of the Gaussian and bootstrap processes. The bounds of these approximations are non-asymptotic, which allows us to work with classes of functions whose complexity increases with the sample size. The construction of couplings is not of the Hungarian type and is instead based on the Slepian-Stein methods and Gaussian comparison inequalities. The increasing complexity of classes of functions and non-centrality of the processes make the results useful for applications in modern nonparametric statistics (Giné and Nickl, 2015), in particular allowing us to study the power properties of nonparametric tests using Gaussian and bootstrap approximations.
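To make the objects concrete, here is a minimal sketch of the Gaussian-multiplier bootstrap for the supremum of a centered empirical process, assuming simulated N(0,1) data and a finite grid of indicator functions as a simple stand-in for a VC-type class; the sample size, grid, and variable names below are illustrative choices, not quantities from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# --- Illustrative setup (assumptions, not from the paper) ---
# n i.i.d. N(0,1) observations; the class is {f_t(x) = 1{x <= t}} over a
# finite grid of t's, a simple stand-in for a VC-type class whose size
# could grow with n.
n, m = 500, 50
X = rng.standard_normal(n)
ts = np.linspace(-2.0, 2.0, m)
F = (X[:, None] <= ts[None, :]).astype(float)      # F[i, j] = f_{t_j}(X_i)

# Supremum of the empirical process sup_t sqrt(n) |P_n f_t - P f_t|
# (P f_t = Phi(t) is known here only because the data are simulated).
Z = np.sqrt(n) * np.abs(F.mean(axis=0) - norm.cdf(ts)).max()

# --- Gaussian-multiplier bootstrap ---
# Conditionally on the data, reweight the centered summands with i.i.d.
# N(0,1) multipliers and take the supremum over the class.
B = 2000
Fc = F - F.mean(axis=0)                            # center each f_t at its sample mean
e = rng.standard_normal((B, n))                    # multipliers
W = np.abs(e @ Fc / np.sqrt(n)).max(axis=1)        # B bootstrap suprema

crit = np.quantile(W, 0.95)                        # bootstrap 95% critical value
print(f"observed sup = {Z:.3f}, bootstrap 95% critical value = {crit:.3f}")
```

Conditionally on the data, the quantiles of the multiplier statistic approximate those of the supremum; the paper's non-asymptotic coupling bounds quantify how accurate this approximation is when the class grows with the sample size.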



Read also

Salim Bouzebda (2011)
We provide a strong approximation of empirical copula processes by a Gaussian process. In addition, we establish a strong approximation of the smoothed empirical copula processes and a law of the iterated logarithm.
Sheng Wu, Yi Zhang, Jun Zhao (2018)
In this article, using composite asymmetric least squares (CALS) and empirical likelihood, we propose a two-step procedure to estimate the conditional value at risk (VaR) and conditional expected shortfall (ES) of a GARCH series. First, we perform asymmetric least squares regressions at several significance levels to model the volatility structure and separate it from the innovation process in the GARCH model. The expectile serves as a bridge from VaR estimation to ES estimation, because there is a bijective mapping between expectiles and specific quantiles, and ES can be induced from an expectile through a simple formula. Then, we introduce the empirical likelihood method to determine this relation; the method is data-driven and distribution-free. Theoretical results guarantee the asymptotic properties of the estimator obtained by the proposed method, such as consistency and asymptotic normality. A Monte Carlo experiment and an empirical application are conducted to evaluate the performance of the proposed method; the results indicate that it is competitive with existing tail-related risk estimation methods.
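For intuition about the asymmetric least squares step, the sketch below computes sample expectiles at a few asymmetry levels for simulated i.i.d. innovations; the `expectile` helper, the fixed-point iteration, and all parameter values are illustrative and stand in for the full CALS regression applied to a GARCH series.

```python
import numpy as np

def expectile(x, tau, n_iter=100, tol=1e-10):
    """Sample tau-expectile via the asymmetric-least-squares first-order
    condition, solved by fixed-point iteration (illustrative helper,
    not the estimator defined in the paper)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    for _ in range(n_iter):
        w = np.where(x > mu, tau, 1.0 - tau)       # asymmetric squared-error weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# Usage: low-tail expectiles of simulated heavy-tailed "innovations"; in the
# two-step procedure these quantities would come from the fitted GARCH model.
rng = np.random.default_rng(0)
z = rng.standard_t(df=5, size=5000)
for tau in (0.01, 0.05, 0.10):
    print(f"tau = {tau:>4}: expectile = {expectile(z, tau):.3f}")
```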
Salim Bouzebda (2009)
The purpose of this note is to provide an approximation for the generalized bootstrapped empirical process achieving the rate in Komlós et al. (1975). The proof is based on much the same arguments as in Horváth et al. (2000). As a consequence, we establish an approximation of the bootstrapped kernel-type density estimator.
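As a concrete illustration of the object in the last sentence, the following sketch computes a bootstrapped kernel-type density estimator by plain Efron resampling, assuming a Gaussian kernel and a rule-of-thumb bandwidth; it does not implement the generalized bootstrap weights or attain the coupling rate discussed in the note.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde(x_grid, data, h):
    """Gaussian kernel density estimate on a grid (illustrative)."""
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

# Original sample and its kernel estimate
n = 400
X = rng.standard_normal(n)
grid = np.linspace(-3, 3, 121)
h = 1.06 * X.std() * n ** (-1 / 5)                 # rule-of-thumb bandwidth
f_hat = kde(grid, X, h)

# Efron bootstrap: resample the data with replacement and recompute the
# estimator; the spread of f_boot around f_hat mimics sampling variability.
B = 500
f_boot = np.empty((B, grid.size))
for b in range(B):
    Xb = rng.choice(X, size=n, replace=True)
    f_boot[b] = kde(grid, Xb, h)

lo, hi = np.quantile(f_boot, [0.025, 0.975], axis=0)   # pointwise bootstrap band
print(f"pointwise band width at x=0: {(hi - lo)[grid.size // 2]:.4f}")
```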
This paper explores a class of empirical Bayes methods for level-dependent threshold selection in wavelet shrinkage. The prior considered for each wavelet coefficient is a mixture of an atom of probability at zero and a heavy-tailed density. The mixing weight, or sparsity parameter, for each level of the transform is chosen by marginal maximum likelihood. If estimation is carried out using the posterior median, this is a random thresholding procedure; the estimation can also be carried out using other thresholding rules with the same threshold. Details of the calculations needed for implementing the procedure are included. In practice, the estimates are quick to compute and there is software available. Simulations on the standard model functions show excellent performance, and applications to data drawn from various fields of application are used to explore the practical performance of the approach. By using a general result on the risk of the corresponding marginal maximum likelihood approach for a single sequence, overall bounds on the risk of the method are found subject to membership of the unknown function in one of a wide range of Besov classes, covering also the case of $f$ of bounded variation. The rates obtained are optimal for any value of the parameter $p \in (0,\infty]$, simultaneously for a wide range of loss functions, each dominating the $L_q$ norm of the $\sigma$th derivative, with $\sigma \ge 0$ and $0 < q \le 2$.
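A minimal sketch of the level-by-level empirical Bayes idea is given below, assuming for simplicity a Gaussian slab and posterior-mean shrinkage rather than the heavy-tailed slab and posterior-median threshold used in the paper; the function `eb_shrink_level` and its parameters are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def eb_shrink_level(d, sigma=1.0, tau=2.0):
    """Empirical Bayes shrinkage of one level of wavelet coefficients d,
    modeled as d ~ N(theta, sigma^2) with a spike-and-slab prior on theta:
    point mass at 0 with prob 1-w, N(0, tau^2) slab with prob w.
    The sparsity weight w is chosen by marginal maximum likelihood.
    (Simplified sketch: Gaussian slab and posterior mean instead of the
    heavy-tailed slab and posterior median of the paper.)"""
    s = np.sqrt(sigma**2 + tau**2)

    def neg_marginal_loglik(w):
        dens = (1 - w) * norm.pdf(d, scale=sigma) + w * norm.pdf(d, scale=s)
        return -np.sum(np.log(dens))

    w_hat = minimize_scalar(neg_marginal_loglik, bounds=(1e-6, 1 - 1e-6),
                            method="bounded").x

    # Posterior probability that each coefficient is nonzero, then shrink
    # by the posterior mean under the Gaussian slab.
    slab = w_hat * norm.pdf(d, scale=s)
    spike = (1 - w_hat) * norm.pdf(d, scale=sigma)
    p_nonzero = slab / (slab + spike)
    return p_nonzero * (tau**2 / (tau**2 + sigma**2)) * d, w_hat

# Usage on one sparse level: a few large coefficients buried in noise.
rng = np.random.default_rng(0)
theta = np.zeros(256)
theta[:10] = 5.0
d = theta + rng.standard_normal(256)
shrunk, w_hat = eb_shrink_level(d)
print(f"estimated sparsity weight w = {w_hat:.3f}")
```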
In the sparse normal means model, coverage of adaptive Bayesian posterior credible sets associated to spike and slab prior distributions is considered. The key sparsity hyperparameter is calibrated via marginal maximum likelihood empirical Bayes. First, adaptive posterior contraction rates are derived with respect to $d_q$-type distances for $q \leq 2$. Next, under a type of so-called excessive-bias conditions, credible sets are constructed that have coverage of the true parameter at prescribed $1-\alpha$ confidence level and at the same time are of optimal diameter. We also prove that the previous conditions cannot be significantly weakened from the minimax perspective.