The purpose of this paper is to estimate the intensity of a Poisson process $N$ by using thresholding rules. The intensity, defined as the derivative of the mean measure of $N$ with respect to $n\,dx$, where $n$ is a fixed parameter, is assumed to be non-compactly supported. The estimator $\tilde{f}_{n,\gamma}$, based on random thresholds, is proved to achieve the same performance as the oracle estimator up to a possible logarithmic term. Then, minimax properties of $\tilde{f}_{n,\gamma}$ on Besov spaces ${\cal B}^{\alpha}_{p,q}$ are established. Under mild assumptions, we prove that $$\sup_{f\in {\cal B}^{\alpha}_{p,q}\cap \mathbb{L}_{\infty}} \mathbb{E}\bigl(\|\tilde{f}_{n,\gamma}-f\|_2^2\bigr)\leq C\Bigl(\frac{\log n}{n}\Bigr)^{\frac{\alpha}{\alpha+\frac{1}{2}+\left(\frac{1}{2}-\frac{1}{p}\right)_+}}$$ and that the lower bound of the minimax risk for ${\cal B}^{\alpha}_{p,q}\cap \mathbb{L}_{\infty}$ coincides with this upper bound up to the logarithmic term. This result has two consequences. First, it establishes that for $p\leq 2$ the minimax rate over Besov spaces ${\cal B}^{\alpha}_{p,q}$ is, up to a logarithmic term, the same for non-compactly supported functions as for compactly supported ones. When $p>2$, the rate exponent, which depends on $p$, deteriorates as $p$ increases, which means that the support plays a harmful role in this case. Furthermore, $\tilde{f}_{n,\gamma}$ is adaptive minimax up to a logarithmic term.
In this paper, we deal with the problem of calibrating thresholding rules in the setting of Poisson intensity estimation. Using sharp concentration inequalities, we derive oracle inequalities and establish the optimality of our estimator up to a logarithmic term. This result is proved under mild assumptions, and we impose no condition on the support of the signal to be estimated. Our procedure is based on data-driven thresholds. As usual, they depend on a threshold parameter $\gamma$ whose optimal value is hard to estimate from the data. Our main concern is to provide theoretical and numerical results to handle this issue. In particular, we establish the existence of a minimal threshold parameter from the theoretical point of view: taking $\gamma<1$ deteriorates the oracle performance of our procedure. In the same spirit, we establish the existence of a maximal threshold parameter, and our theoretical results point out the optimal range $\gamma\in[1,12]$. We then conduct a numerical study showing that choosing $\gamma$ larger than but close to $1$ is a fairly good choice. Finally, we compare our procedure with classical ones, revealing the harmful role of the support of functions when estimated by classical procedures.
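The data-driven thresholding idea behind the two abstracts above can be sketched in code. This is an illustrative toy version only: it assumes a histogram basis, and the function name, variance proxy, and exact threshold formula below are our own simplifications, not the papers' procedure.

```python
import math

def threshold_intensity_histogram(points, n, gamma, bins, support):
    """Toy hard-thresholding estimator of a Poisson intensity f.

    points  : observed points of the process (floats)
    n       : scaling parameter (mean measure has density n*f)
    gamma   : threshold parameter (the theory above suggests gamma in [1, 12])
    bins    : number of histogram bins over support = (a, b)
    Returns per-bin estimates of f, with small coefficients set to zero.
    """
    a, b = support
    width = (b - a) / bins
    counts = [0] * bins
    for x in points:
        if a <= x < b:
            counts[int((x - a) / width)] += 1
    est = []
    for c in counts:
        # Unbiased estimate of the average of f on this bin.
        beta_hat = c / (n * width)
        # Rough plug-in proxy for the coefficient's variance (illustrative).
        vhat = c / (n * width) ** 2
        # Data-driven threshold of universal-threshold type, driven by gamma.
        eta = (math.sqrt(2 * gamma * math.log(n) * vhat)
               + gamma * math.log(n) / (3 * n * width))
        est.append(beta_hat if beta_hat > eta else 0.0)
    return est
```

With points concentrated on part of the support, bins receiving little mass are zeroed out, which is what makes the estimator insensitive to the (possibly unbounded) support of $f$.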
We consider a doubly stochastic Poisson process with stochastic intensity $\lambda_t = n\,q\left(X_t\right)$, where $X$ is a continuous Itô semimartingale and $n$ is an integer. Both processes are observed continuously over a fixed period $\left[0,T\right]$. An estimation procedure is proposed in a nonparametric setting for the function $q$ on an interval $I$ where $X$ is sufficiently observed, using a local polynomial estimator. A method to select the bandwidth in a non-asymptotic framework is proposed, leading to an oracle inequality. If $m$ is the degree of the chosen polynomial, the accuracy of our estimator over the Hölder class of order $\beta$ is $n^{-\beta/(2\beta+1)}$ if $m \geq \lfloor \beta \rfloor$, and it is then optimal in the minimax sense. A parametric test is also proposed to test whether $q$ belongs to some parametric family. These results are applied to French temperature and electricity spot price data, where we infer the intensity of electricity spot spikes as a function of the temperature.
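A degree-zero version of such an estimator of $q$ can be sketched as follows: since $\lambda_t = n\,q(X_t)$, the number of jumps observed while $X$ is within a bandwidth $h$ of $x_0$, divided by $n$ times the occupation time of that neighbourhood, estimates $q(x_0)$. The names and discretization below are our illustrative assumptions; the actual procedure uses higher-degree local polynomials and the data-driven bandwidth mentioned above.

```python
def occupation_estimator(jump_marks, path, dt, n, x0, h):
    """Degree-zero local estimate of q(x0) when lambda_t = n * q(X_t).

    jump_marks : values of X at the jump times of the point process
    path       : discretized observations X_{k*dt} of the path on [0, T]
    dt         : discretization step used for the occupation time
    n          : scale parameter of the intensity
    x0, h      : evaluation point and bandwidth
    """
    # Number of jumps that occurred while X was within h of x0.
    hits = sum(1 for x in jump_marks if abs(x - x0) <= h)
    # Riemann approximation of the occupation time of [x0 - h, x0 + h].
    occupation = dt * sum(1 for x in path if abs(x - x0) <= h)
    # hits is approximately Poisson(n * q(x0) * occupation) for small h.
    return hits / (n * occupation) if occupation > 0 else 0.0
```

The interval $I$ in the abstract is precisely where the occupation time in the denominator is large enough for this ratio to be stable.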
We are interested in estimating the location of what we call a smooth change-point from $n$ independent observations of an inhomogeneous Poisson process. The smooth change-point is a transition of the intensity function of the process from one level to another which happens smoothly, but over such a small interval that its length $\delta_n$ is considered to decrease to $0$ as $n\to+\infty$. We show that if $\delta_n$ goes to zero slower than $1/n$, our model is locally asymptotically normal (with a rather unusual rate $\sqrt{\delta_n/n}$), and the maximum likelihood and Bayesian estimators are consistent, asymptotically normal and asymptotically efficient. If, on the contrary, $\delta_n$ goes to zero faster than $1/n$, our model is non-regular and behaves like a change-point model. More precisely, in this case we show that the Bayesian estimators are consistent, converge at rate $1/n$, have non-Gaussian limit distributions and are asymptotically efficient. All these results are obtained using the likelihood ratio analysis method of Ibragimov and Khasminskii, which also yields the convergence of polynomial moments of the estimators under consideration. However, in order to study the maximum likelihood estimator in the case where $\delta_n$ goes to zero faster than $1/n$, this method cannot be applied using the usual topologies of convergence in functional spaces. This study therefore requires an alternative topology and will be considered in a future work.
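In the fast-transition regime the model behaves like a sharp change-point model, and likelihood-based estimation can be illustrated on the limiting piecewise-constant case. The sketch below assumes the two intensity levels $a$ and $b$ are known and maximizes the pooled Poisson log-likelihood over a grid of candidate locations; the names and the grid search are our illustration, not the paper's construction.

```python
import math

def change_point_mle(points, n, a, b, grid):
    """Grid MLE of the change-point theta on [0, 1] for an intensity that
    jumps from level a to level b at theta (levels assumed known here).

    points : pooled jump times of the n observed copies of the process
    """
    def loglik(theta):
        n_left = sum(1 for t in points if t < theta)
        n_right = len(points) - n_left
        # Poisson log-likelihood: sum of log-intensities at the points
        # minus n times the integral of the intensity over [0, 1].
        return (n_left * math.log(a) + n_right * math.log(b)
                - n * (a * theta + b * (1.0 - theta)))
    return max(grid, key=loglik)
```

The $1/n$ convergence rate in the abstract corresponds to this estimator locating the jump to within roughly one inter-point spacing around $\theta$.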
In this paper, we use the class of Wasserstein metrics to study asymptotic properties of posterior distributions. Our first goal is to provide sufficient conditions for posterior consistency. In addition to the well-known Schwartz's Kullback--Leibler condition on the prior, the true distribution and most probability measures in the support of the prior are required to possess moments up to an order determined by the order of the Wasserstein metric. We further investigate convergence rates of the posterior distributions, for which we need stronger moment conditions. The required tail conditions are sharp in the sense that, without them, the posterior distribution may be inconsistent or contract slowly to the true distribution. Our study involves techniques that build on recent advances on Wasserstein convergence of empirical measures. We apply the results to density estimation with a Dirichlet process mixture prior and conduct a simulation study for further illustration.
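For intuition, the Wasserstein metric driving these posterior results is easy to compute between one-dimensional empirical measures: the optimal coupling matches order statistics. A minimal sketch (equal sample sizes assumed; the function name is ours):

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two samples of equal size.

    In one dimension the optimal transport plan pairs the i-th smallest
    point of xs with the i-th smallest point of ys, so the distance is
    the average absolute gap between sorted samples.
    """
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

The moment conditions in the abstract control exactly the tail contribution to such sums, which is why they appear alongside the Kullback--Leibler support condition.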
We study the least squares estimator in the context of residual variance estimation. We show that the mean squared differences of paired observations are asymptotically normally distributed. We further establish that, by regressing the mean squared differences of these paired observations on the squared distances between paired covariates via a simple least squares procedure, the resulting variance estimator is not only asymptotically normal and root-$n$ consistent, but also attains the optimal bound in terms of estimation variance. We also demonstrate the advantage of the least squares estimator over existing methods in terms of second-order asymptotic properties.
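The regression step described above can be sketched as follows, assuming a one-dimensional covariate and using all pairs; the paper's estimator involves further choices (e.g. which pairs enter the regression), so this is only a minimal illustration with hypothetical names. Since $\mathbb{E}\bigl[(y_i-y_j)^2\bigr] = (g(x_i)-g(x_j))^2 + 2\sigma^2$ for a regression function $g$, halving the squared differences makes the fitted intercept an estimate of $\sigma^2$.

```python
def residual_variance_ls(x, y):
    """Estimate the residual variance sigma^2 by simple least squares:
    regress half squared differences of paired responses on squared
    covariate distances; the fitted intercept estimates sigma^2."""
    pairs = [(i, j) for i in range(len(x)) for j in range(i + 1, len(x))]
    d = [(x[i] - x[j]) ** 2 for i, j in pairs]          # squared distances
    s = [((y[i] - y[j]) ** 2) / 2.0 for i, j in pairs]  # half sq. differences
    m = len(pairs)
    dbar = sum(d) / m
    sbar = sum(s) / m
    sxx = sum((di - dbar) ** 2 for di in d)
    sxy = sum((di - dbar) * (si - sbar) for di, si in zip(d, s))
    slope = sxy / sxx if sxx > 0 else 0.0
    return sbar - slope * dbar  # intercept of the fitted line
```

On noiseless data the intercept is zero, and added noise of variance $\sigma^2$ shifts every half squared difference up by $\sigma^2$ on average, which is what the estimator picks up.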