
Conditional quantile sequential estimation for stochastic codes

Posted by Aurelien Garivier
Publication date: 2015
Research field: Mathematical Statistics
Paper language: English





We propose and analyze an algorithm for the sequential estimation of a conditional quantile in the context of real stochastic codes with vector-valued inputs. Our algorithm is based on k-nearest neighbors smoothing within a Robbins-Monro estimator. We discuss the convergence of the algorithm under some conditions on the stochastic code. We provide non-asymptotic rates of convergence of the mean squared error and we discuss the tuning of the algorithm's parameters.
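As a concrete illustration of the idea, here is a minimal Python sketch: a Robbins-Monro update on the pinball-loss gradient, applied only when the newly observed input falls among the k nearest neighbors of the query point. The update rule is standard, but the neighborhood test, step-size schedule, and the toy stochastic code below are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def knn_rm_quantile(X, Y, x0, alpha, k=20, gamma0=1.0):
    """Sequential estimate of the alpha-quantile of Y given X = x0.

    Sketch: a Robbins-Monro step on the pinball-loss gradient,
    triggered only when the new input is among the k nearest
    neighbors of x0 observed so far. The neighborhood rule and
    step sizes are illustrative choices, not the paper's.
    """
    theta, updates = 0.0, 0
    for n in range(len(X)):
        dists = np.linalg.norm(X[: n + 1] - x0, axis=1)
        # is X_n among the k nearest neighbors of x0 seen so far?
        if (dists[:n] < dists[n]).sum() < k:
            updates += 1
            step = gamma0 / updates            # Robbins-Monro step size
            # stochastic gradient of the pinball loss at theta
            theta -= step * (float(Y[n] <= theta) - alpha)
    return theta

# Toy stochastic code: Y = sin(|X|) + noise, queried at the origin
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(5000, 2))
Y = np.sin(np.linalg.norm(X, axis=1)) + 0.3 * rng.standard_normal(5000)
print(knn_rm_quantile(X, Y, x0=np.zeros(2), alpha=0.9))
```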




Read also

Karine Bertin (2013)
In this paper we consider the problem of estimating $f$, the conditional density of $Y$ given $X$, by using an independent sample distributed as $(X,Y)$ in the multivariate setting. We consider the estimation of $f(x,\cdot)$ where $x$ is a fixed point. We define two different procedures of estimation, the first one using kernel rules, the second one inspired by projection methods. Both adaptive estimators are tuned by using the Goldenshluger and Lepski methodology. After deriving lower bounds, we show that these procedures satisfy oracle inequalities and are optimal from the minimax point of view on anisotropic Hölder balls. Furthermore, our results allow us to measure precisely the influence of $f_X(x)$ on the rates of convergence, where $f_X$ is the density of $X$. Finally, some simulations illustrate the good behavior of our tuned estimates in practice.
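For context, a minimal sketch of the kernel-rule procedure at a fixed point $x$ with fixed bandwidths; the adaptive Goldenshluger-Lepski tuning described in the abstract is omitted, and the Gaussian kernels and bandwidth values are illustrative assumptions.

```python
import numpy as np

def kernel_cond_density(Xs, Ys, x, y_grid, hx=0.2, hy=0.2):
    """Fixed-bandwidth kernel-rule estimate of f(x, .) on a grid of
    y values; a sketch only -- the adaptive Goldenshluger-Lepski
    bandwidth selection from the paper is not implemented."""
    def gauss(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

    # spherically symmetric kernel weights around the fixed point x
    wx = gauss(np.linalg.norm(Xs - x, axis=1) / hx)
    # one kernel in y per (grid point, observation) pair
    wy = gauss((y_grid[:, None] - Ys[None, :]) / hy)
    # joint smoother divided by the marginal smoother at x
    return (wy @ wx) / (hy * wx.sum())
```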
Chuyun Ye, Keli Guo, Lixing Zhu (2020)
In this paper, we apply a doubly robust approach to estimate the conditional average treatment effect, given some covariates, under parametric, semiparametric, and nonparametric structures of the nuisance propensity score and outcome regression models. We then conduct a systematic study of the asymptotic distributions of nine estimators with different combinations of estimated propensity scores and outcome regressions. The study covers the asymptotic properties with all models correctly specified; with either the propensity score or the outcome regressions locally/globally misspecified; and with all models locally/globally misspecified. The asymptotic variances are compared and the asymptotic bias correction under model misspecification is discussed. The phenomenon that the asymptotic variance under model misspecification can sometimes be even smaller than that with all models correctly specified is explored. We also conduct a numerical study to examine the theoretical results.
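A hedged sketch of one standard doubly robust construction consistent with this description: the AIPW pseudo-outcome, localized over the conditioning covariates. The nuisance estimates are assumed to be fitted elsewhere, and the Gaussian smoothing and all names are illustrative assumptions, not the authors' nine estimators.

```python
import numpy as np

def dr_cate(Y, T, e_hat, mu0_hat, mu1_hat, V, v0, h=0.3):
    """Doubly robust CATE estimate at V = v0 via AIPW pseudo-outcomes.

    e_hat, mu0_hat, mu1_hat are nuisance estimates (propensity score
    and outcome regressions) fitted elsewhere; the Gaussian smoothing
    over the conditioning covariates V is an illustrative choice.
    """
    # AIPW pseudo-outcome: consistent for the treatment effect if
    # either the propensity score or the outcome models are correct
    psi = (mu1_hat - mu0_hat
           + T * (Y - mu1_hat) / e_hat
           - (1 - T) * (Y - mu0_hat) / (1 - e_hat))
    # kernel weights localizing the average at v0
    w = np.exp(-0.5 * (np.linalg.norm(V - v0, axis=1) / h) ** 2)
    return float((w * psi).sum() / w.sum())
```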
Let $(X,Y)$ be a bivariate random vector. The estimation of a probability of the form $P(Y \leq y \mid X > t)$ is challenging when $t$ is large, and a fruitful approach consists in studying, if it exists, the limiting conditional distribution of the random vector $(X,Y)$, suitably normalized, given that $X$ is large. There already exists a wide literature on bivariate models for which this limiting distribution exists. In this paper, a statistical analysis of this problem is carried out. Estimators of the limiting distribution (which is assumed to exist) and of the normalizing functions are provided, as well as an estimator of the conditional quantile function when the conditioning event is extreme. Consistency of the estimators is proved and a functional central limit theorem for the estimator of the limiting distribution is obtained. The small-sample behavior of the estimator of the conditional quantile function is illustrated through simulations.
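For moderate thresholds the naive empirical counterpart below is available; it degenerates as $t$ grows beyond the sample range, which is precisely the regime the limiting-distribution estimators above are designed for. The function is an illustrative sketch, not the paper's estimator.

```python
import numpy as np

def cond_prob_exceedance(X, Y, y, t):
    """Empirical estimate of P(Y <= y | X > t) from the exceedances.

    Only meaningful while enough observations satisfy X > t; for t
    near or beyond the sample maximum, extrapolation based on the
    limiting conditional distribution is needed instead.
    """
    mask = X > t
    if not mask.any():
        raise ValueError("no observations with X > t")
    return float((Y[mask] <= y).mean())
```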
This paper studies the estimation of the conditional density $f(x,\cdot)$ of $Y_i$ given $X_i = x$, from the observation of an i.i.d. sample $(X_i, Y_i) \in \mathbb{R}^d \times \mathbb{R}$, $i = 1, \ldots, n$. We assume that $f$ depends only on $r$ unknown components, with typically $r \ll d$. We provide an adaptive, fully nonparametric strategy based on kernel rules to estimate $f$. To select the bandwidth of our kernel rule, we propose a new fast iterative algorithm inspired by the Rodeo algorithm (Wasserman and Lafferty (2006)) to detect the sparsity structure of $f$. More precisely, in the minimax setting, our pointwise estimator, which is adaptive to both the regularity and the sparsity, achieves the quasi-optimal rate of convergence. Its computational complexity is only $O(dn \log n)$.
This paper studies the minimax rate of nonparametric conditional density estimation under a weighted absolute value loss function in a multivariate setting. We first demonstrate that conditional density estimation is impossible if one only requires that $p_{X|Z}$ is smooth in $x$ for all values of $z$. This motivates us to consider a sub-class of absolutely continuous distributions, restricting the conditional density $p_{X|Z}(x|z)$ to not only be Hölder smooth in $x$, but also be total variation smooth in $z$. We propose a corresponding kernel-based estimator and prove that it achieves the minimax rate. We give some simple examples of densities satisfying our assumptions, which imply that our results are not vacuous. Finally, we propose an estimator which achieves the minimax optimal rate adaptively, i.e., without the need to know the smoothness parameter values in advance. Crucially, both of our estimators (the adaptive and non-adaptive ones) impose no assumptions on the marginal density $p_Z$, and are not obtained as a ratio between two kernel smoothing estimators, which may sound like a go-to approach to this problem.