
Adaptive and non-adaptive estimation for degenerate diffusion processes

Added by Arnaud Gloter
Publication date: 2020
Language: English





We discuss parametric estimation of a degenerate diffusion system from time-discrete observations. The first component of the system has a parameter $\theta_1$ in a non-degenerate diffusion coefficient and a parameter $\theta_2$ in the drift term. The second component has a drift term parameterized by $\theta_3$ and no diffusion term. Asymptotic normality is proved in three different situations: for an adaptive estimator for $\theta_3$ with initial estimators for $(\theta_1, \theta_2)$; for an adaptive one-step estimator for $(\theta_1, \theta_2, \theta_3)$ with initial estimators for all three; and for a joint quasi-maximum likelihood estimator for $(\theta_1, \theta_2, \theta_3)$ without any initial estimator. Our estimators incorporate information from the increments of both components. Thanks to this construction, the asymptotic variance of the estimators for $\theta_1$ is smaller than the standard one based only on the first component. The estimators for $\theta_3$ converge much faster than those for the other parameters, and their asymptotic variance is smaller than that of an estimator using only the increments of the second component.
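The degenerate structure described above (a diffusive first component driving a noiseless second component) can be illustrated by simulating a toy two-component system with the Euler scheme. The specific drift and diffusion functions below are illustrative assumptions, not the model studied in the paper:

```python
import numpy as np

def simulate_degenerate_diffusion(x0, y0, theta1, theta2, theta3,
                                  T=10.0, n=10_000, seed=0):
    """Euler scheme for a toy degenerate system:
         dX_t = -theta2 * X_t dt + theta1 dW_t   (non-degenerate component)
         dY_t =  theta3 * X_t dt                 (drift only, no diffusion term)
    Returns the two discretely observed paths (hypothetical example model)."""
    rng = np.random.default_rng(seed)
    h = T / n
    x = np.empty(n + 1)
    y = np.empty(n + 1)
    x[0], y[0] = x0, y0
    dW = rng.normal(0.0, np.sqrt(h), size=n)  # Brownian increments
    for i in range(n):
        x[i + 1] = x[i] - theta2 * x[i] * h + theta1 * dW[i]
        y[i + 1] = y[i] + theta3 * x[i] * h   # noiseless given the X path
    return x, y
```

Note that the increments of $Y$ are deterministic given the $X$ path, which is why estimators exploiting both components can achieve a faster rate for $\theta_3$.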



Related research


We study the estimation, in $L_p$-norm, of density functions defined on $[0,1]^d$. We construct a new family of kernel density estimators that do not suffer from the so-called boundary bias problem, and we propose a data-driven procedure based on the Goldenshluger and Lepski approach that jointly selects a kernel and a bandwidth. We derive two estimators that satisfy oracle-type inequalities. They are also proved to be adaptive over a scale of anisotropic or isotropic Sobolev-Slobodetskii classes (which are particular cases of classical Besov or Sobolev classes). The main interest of the isotropic procedure is to obtain adaptive results without any restriction on the smoothness parameter.
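The boundary bias problem mentioned above is easy to see in dimension one: a plain kernel estimator places mass outside $[0,1]$ near the endpoints and so underestimates the density there. A classical remedy, shown below purely for illustration (it is the standard reflection device, not the new kernel family constructed in the paper), is to reflect the sample across both endpoints:

```python
import numpy as np

def reflected_kde(data, x, h):
    """Gaussian kernel density estimate on [0, 1] with reflection at both
    endpoints, which removes the first-order boundary bias of the plain KDE."""
    def plain(points):
        u = (x - points) / h
        return np.exp(-0.5 * u ** 2).mean() / (np.sqrt(2.0 * np.pi) * h)
    # sum the contributions of the sample and its reflections across 0 and 1
    return plain(data) + plain(-data) + plain(2.0 - data)
```

For uniform data on $[0,1]$, the plain estimator returns roughly $1/2$ at $x=0$ (half the kernel mass falls below zero), while the reflected estimator stays close to the true value $1$ both at the boundary and in the interior.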
Karine Bertin, 2013
In this paper we consider the problem of estimating $f$, the conditional density of $Y$ given $X$, by using an independent sample distributed as $(X,Y)$ in the multivariate setting. We consider the estimation of $f(x,\cdot)$ where $x$ is a fixed point. We define two different procedures of estimation, the first one using kernel rules, the second one inspired from projection methods. Both adapted estimators are tuned by using the Goldenshluger and Lepski methodology. After deriving lower bounds, we show that these procedures satisfy oracle inequalities and are optimal from the minimax point of view on anisotropic Hölder balls. Furthermore, our results allow us to measure precisely the influence of $\mathrm{f}_X(x)$ on rates of convergence, where $\mathrm{f}_X$ is the density of $X$. Finally, some simulations illustrate the good behavior of our tuned estimates in practice.
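The kernel-rule procedure above can be sketched as a ratio of two kernel estimates at the fixed point: one for the joint density of $(X, Y)$ and one for $\mathrm{f}_X(x)$. A minimal univariate sketch with hand-picked bandwidths (the paper instead selects them by the Goldenshluger-Lepski methodology):

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def conditional_density_estimate(X, Y, x, y, hx, hy):
    """Kernel estimate of f(y | x) = f_XY(x, y) / f_X(x) at a fixed point (x, y),
    from an i.i.d. sample (X_i, Y_i); bandwidths hx, hy are fixed by hand."""
    Kx = gaussian_kernel((X - x) / hx)
    Ky = gaussian_kernel((Y - y) / hy)
    f_X = Kx.mean() / hx                    # marginal density estimate at x
    f_XY = (Kx * Ky).mean() / (hx * hy)     # joint density estimate at (x, y)
    return f_XY / f_X if f_X > 0 else 0.0
```

The denominator makes the role of $\mathrm{f}_X(x)$ visible: where $X$ has little mass, the ratio is built from few effective observations, which is exactly the influence on the convergence rate quantified in the paper.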
We address the problem of adaptive minimax density estimation on $\mathbb{R}^d$ with $\mathbb{L}_p$-loss on the anisotropic Nikolskii classes. We fully characterize the behavior of the minimax risk for different relationships between the regularity parameters and the norm indexes in the definitions of the functional class and of the risk. In particular, we show that there are four different regimes with respect to the behavior of the minimax risk. We develop a single estimator which is (nearly) optimal in order over the complete scale of the anisotropic Nikolskii classes. Our estimation procedure is based on a data-driven selection of an estimator from a fixed family of kernel estimators.
We aim at estimating the invariant density associated to a stochastic differential equation with jumps in low dimension, that is, for $d=1$ and $d=2$. We consider a class of jump diffusion processes whose invariant density belongs to some Hölder space. Firstly, in dimension one, we show that the kernel density estimator achieves the convergence rate $\frac{1}{T}$, which is the optimal rate in the absence of jumps. This improves the convergence rate obtained in [Amorino, Gloter (2021)], which depends on the Blumenthal-Getoor index for $d=1$ and is equal to $\frac{\log T}{T}$ for $d=2$. Secondly, we show that it is not possible to find an estimator with faster rates of estimation. Indeed, we obtain lower bounds with the same rates $\{\frac{1}{T}, \frac{\log T}{T}\}$ in the one- and two-dimensional cases, respectively. Finally, we obtain the asymptotic normality of the estimator in the one-dimensional case.
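For a diffusion without jumps, the kernel invariant density estimator averages kernels centered at the discrete observations of one long trajectory. A sketch on a one-dimensional Ornstein-Uhlenbeck process, whose invariant law is Gaussian so the estimate can be checked (the process choice and tuning below are illustrative assumptions, not the jump-diffusion setting of the paper):

```python
import numpy as np

def invariant_density_estimator(path, x, h):
    """Kernel estimator pi_hat(x) = (1 / (N h)) * sum_i K((x - X_i) / h)
    built from discrete observations X_1, ..., X_N of one long trajectory."""
    u = (x - path) / h
    return np.exp(-0.5 * u ** 2).mean() / (np.sqrt(2.0 * np.pi) * h)

def simulate_ou(theta=1.0, sigma=np.sqrt(2.0), dt=0.01, n=100_000, seed=0):
    """Euler scheme for dX_t = -theta * X_t dt + sigma dW_t; its invariant
    density is N(0, sigma^2 / (2 theta)), i.e. N(0, 1) with these defaults."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.normal(0.0, sigma * np.sqrt(dt), size=n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] - theta * x[i] * dt + noise[i]
    return x
```

Because the observations along one trajectory are dependent, the effective sample size is governed by the time horizon $T = n \, dt$ and the mixing of the process, which is why the rates above are expressed in $T$ rather than in the number of observations.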
This paper studies the estimation of the conditional density $f(x, \cdot)$ of $Y_i$ given $X_i = x$, from the observation of an i.i.d. sample $(X_i, Y_i) \in \mathbb{R}^d$, $i = 1, \dots, n$. We assume that $f$ depends only on $r$ unknown components, with typically $r \ll d$. We provide an adaptive, fully nonparametric strategy based on kernel rules to estimate $f$. To select the bandwidth of our kernel rule, we propose a new fast iterative algorithm, inspired by the Rodeo algorithm (Wasserman and Lafferty (2006)), to detect the sparsity structure of $f$. More precisely, in the minimax setting, our pointwise estimator, which is adaptive to both the regularity and the sparsity, achieves the quasi-optimal rate of convergence. Its computational complexity is only $O(dn \log n)$.
