
Estimating a class of diffusions from discrete observations via approximate maximum likelihood method

 Added by Miljenko Huzak
 Publication date 2016
Language: English





An approximate maximum likelihood method of estimation of the diffusion parameters $(\vartheta,\sigma)$, based on discrete observations of a diffusion $X$ along a fixed time interval $[0,T]$ and on Euler approximation of the integrals involved, is analyzed. We assume that $X$ satisfies an SDE of the form $dX_t = \mu(X_t,\vartheta)\,dt + \sqrt{\sigma}\,b(X_t)\,dW_t$, with a non-random initial condition; the SDE is generally nonlinear in $\vartheta$. Under the assumption that the maximum likelihood estimator $\hat{\vartheta}_T$ of the drift parameter based on continuous observation of a path over $[0,T]$ exists, we prove that a measurable estimator $(\hat{\vartheta}_{n,T},\hat{\sigma}_{n,T})$ of the parameters, obtained from discrete observations of $X$ along $[0,T]$ by maximization of the approximate log-likelihood function, exists; that $\hat{\sigma}_{n,T}$ is consistent and asymptotically normal; and that $\hat{\vartheta}_{n,T}-\hat{\vartheta}_T$ tends to zero in probability at the rate $\sqrt{\delta_{n,T}}$ when $\delta_{n,T}=\max_{0\leq i<n}(t_{i+1}-t_i)$ tends to zero with $T$ fixed. The same holds in the case of an ergodic diffusion when $T$ goes to infinity in such a way that $T\delta_n$ goes to zero, with equidistant sampling, and we apply these results to show consistency and asymptotic normality of $\hat{\vartheta}_{n,T}$ and $\hat{\sigma}_{n,T}$, and asymptotic efficiency of $\hat{\vartheta}_{n,T}$, in this case.
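For concreteness, the sketch below (Python with NumPy/SciPy; our illustration, not the paper's code) implements the scheme the abstract describes: the Girsanov log-likelihood of the drift parameter is approximated by left-point Euler sums over the observation grid, $\sigma$ is estimated from the quadratic variation of the path, and the approximate log-likelihood is maximized numerically. The scalar parameter, the drift $\mu(x,\vartheta)=-\vartheta x$ in the usage example, and the optimizer bounds are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def approx_loglik(theta, t, x, mu, b):
    """Euler approximation of the continuous-observation (Girsanov)
    log-likelihood of theta, up to the constant positive factor 1/sigma,
    from observations x of the diffusion at times t."""
    dt, dx = np.diff(t), np.diff(x)
    m, b2 = mu(x[:-1], theta), b(x[:-1]) ** 2
    # int mu/b^2 dX  and  int mu^2/b^2 dt, both as left-point Euler sums
    return np.sum(m / b2 * dx) - 0.5 * np.sum(m ** 2 / b2 * dt)

def estimate(t, x, mu, b, bounds):
    """Approximate MLE (theta_hat, sigma_hat) from one discretely observed path."""
    # quadratic-variation estimator of sigma; free of theta
    sigma_hat = np.sum(np.diff(x) ** 2 / b(x[:-1]) ** 2) / (t[-1] - t[0])
    res = minimize_scalar(lambda th: -approx_loglik(th, t, x, mu, b),
                          bounds=bounds, method="bounded")
    return res.x, sigma_hat

# Usage: dX_t = -theta X_t dt + sqrt(sigma) dW_t, i.e. mu(x, theta) = -theta*x, b = 1
rng = np.random.default_rng(0)
theta0, sigma0, n, T = 2.0, 0.5, 5000, 10.0
t = np.linspace(0.0, T, n + 1)
x = np.empty(n + 1)
x[0] = 1.0
for i in range(n):  # Euler-Maruyama simulation of the observed path
    x[i + 1] = x[i] - theta0 * x[i] * (T / n) + np.sqrt(sigma0 * T / n) * rng.standard_normal()

theta_hat, sigma_hat = estimate(t, x, mu=lambda x, th: -th * x,
                                b=lambda x: np.ones_like(x), bounds=(0.01, 10.0))
print(theta_hat, sigma_hat)
```

Because $\sigma$ enters the likelihood only as a positive constant factor, the maximization over $\vartheta$ can be carried out without it and $\sigma$ recovered separately from the quadratic variation, mirroring the separate treatment of $\hat{\vartheta}_{n,T}$ and $\hat{\sigma}_{n,T}$ in the abstract.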



Related research


Estimating the matrix of connection probabilities is one of the key questions when studying sparse networks. In this work, we consider networks generated under the sparse graphon model and the inhomogeneous random graph model with missing observations. Using the Stochastic Block Model as a parametric proxy, we bound the risk of the maximum likelihood estimator of the network connection probabilities and show that it is minimax optimal. When the risk is measured in the Frobenius norm, no estimator running in polynomial time has been shown to attain the minimax optimal rate of convergence for this problem. Thus, maximum likelihood estimation is of particular interest, as computationally efficient approximations to it have been proposed in the literature and are often used in practice.
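As a minimal illustration of the parametric-proxy idea (ours, not the paper's code), the sketch below computes the maximum likelihood estimate of the Stochastic Block Model connection probabilities when the block labels are given and some pairs are unobserved. In the setting of the paper the labels must themselves be maximized over, which is what makes exact maximum likelihood computationally hard and polynomial-time approximations attractive.

```python
import numpy as np

def sbm_mle_probs(A, observed, z, K):
    """MLE of the K x K block connection probabilities Q for a Stochastic
    Block Model, given a symmetric 0/1 adjacency matrix A, a boolean mask
    of observed pairs, and block labels z in {0, ..., K-1}.  The estimated
    matrix of connection probabilities is then Theta[i, j] = Q[z[i], z[j]]."""
    Q = np.zeros((K, K))
    for a in range(K):
        for b in range(K):
            sel = np.outer(z == a, z == b) & observed
            np.fill_diagonal(sel, False)   # ignore self-loops
            # MLE for a Bernoulli block: observed edge frequency
            Q[a, b] = A[sel].mean() if sel.any() else 0.0
    return Q
```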
With a view to statistical inference for discretely observed diffusion models, we propose simple methods of simulating diffusion bridges, approximately and exactly. Diffusion bridge simulation plays a fundamental role in likelihood and Bayesian inference for diffusion processes. First, a simple method of simulating approximate diffusion bridges is proposed and studied. Then these approximate bridges are used as proposals for an easily implemented Metropolis-Hastings algorithm that produces exact diffusion bridges. The new method utilizes the time-reversibility properties of one-dimensional diffusions and is applicable to all one-dimensional diffusion processes with finite speed measure. One advantage of the new approach is that simple simulation methods like the Milstein scheme can be applied to bridge simulation. Another advantage over previous bridge simulation methods is that the proposed method works well for diffusion bridges over long intervals, because the computational complexity of the method is linear in the length of the interval. For $\rho$-mixing diffusions the approximate method is shown to be particularly accurate for long time intervals. In a simulation study, we investigate the accuracy and efficiency of the approximate method and compare it to exact simulation methods. In the study, our method provides a very good approximation to the distribution of a diffusion bridge for bridges that are likely to occur in applications to statistical inference. To illustrate the usefulness of the new method, we present an EM-algorithm for a discretely observed diffusion process.
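The sketch below illustrates the approximate bridge construction as we read it, under simplifying assumptions: a time-reversible one-dimensional diffusion, an Euler rather than Milstein scheme, and illustrative function names. One path is run forward from each endpoint; since time reversal preserves the dynamics of such a diffusion, splicing the two paths at a crossing of $X_t$ with $Y_{T-t}$ gives an approximate bridge. The exact method described in the abstract would additionally use these paths as Metropolis-Hastings proposals.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_path(x0, drift, diffusion, dt, n):
    """Euler-Maruyama sample path of a one-dimensional diffusion."""
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * np.sqrt(dt) * rng.standard_normal()
    return x

def approx_bridge(a, b, T, n, drift, diffusion, max_tries=1000):
    """Approximate (a -> b) diffusion bridge on [0, T]: run X from a and an
    independent Y from b, time-reverse Y, and splice at the first crossing
    of X_t and Y_{T-t}.  Path pairs with no crossing are rejected and redrawn."""
    dt = T / n
    for _ in range(max_tries):
        x = euler_path(a, drift, diffusion, dt, n)
        yrev = euler_path(b, drift, diffusion, dt, n)[::-1]   # Y_{T-t}
        s = (x - yrev)[:-1] * (x - yrev)[1:]
        cross = np.nonzero(s <= 0)[0]   # sign change = crossing
        if cross.size:
            tau = cross[0] + 1
            # splice; on the discrete grid this leaves an O(sqrt(dt)) jump
            return np.concatenate([x[:tau], yrev[tau:]])
    raise RuntimeError("no crossing found; increase max_tries")

# Usage: bridge of dX_t = -X_t dt + dW_t from -1 to 1 over [0, 2]
bridge = approx_bridge(-1.0, 1.0, T=2.0, n=400,
                       drift=lambda x: -x, diffusion=lambda x: 1.0)
```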
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, that is, a density of the form $f_0=\exp\varphi_0$ where $\varphi_0$ is a concave function on $\mathbb{R}$. The pointwise limiting distributions depend on the second and third derivatives at 0 of $H_k$, the lower invelope of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of $\varphi_0=\log f_0$ at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode $M(f_0)$ and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.
We present theoretical properties of the log-concave maximum likelihood estimator of a density based on an independent and identically distributed sample in $\mathbb{R}^d$. Our study covers both the case where the true underlying density is log-concave, and where this model is misspecified. We begin by showing that for a sequence of log-concave densities, convergence in distribution implies much stronger types of convergence -- in particular, it implies convergence in Hellinger distance and even in certain exponentially weighted total variation norms. In our main result, we prove the existence and uniqueness of a log-concave density that minimises the Kullback--Leibler divergence from the true density over the class of all log-concave densities, and also show that the log-concave maximum likelihood estimator converges almost surely in these exponentially weighted total variation norms to this minimiser. In the case of a correctly specified model, this demonstrates a strong type of consistency for the estimator; in a misspecified model, it shows that the estimator converges to the log-concave density that is closest in the Kullback--Leibler sense to the true density.
Jesse Goodman (2020)
The saddlepoint approximation gives an approximation to the density of a random variable in terms of its moment generating function. When the underlying random variable is itself the sum of $n$ unobserved i.i.d. terms, the basic classical result is that the relative error in the density is of order $1/n$. If instead the approximation is interpreted as a likelihood and maximised as a function of model parameters, the result is an approximation to the maximum likelihood estimate (MLE) that can be much faster to compute than the true MLE. This paper proves the analogous basic result for the approximation error between the saddlepoint MLE and the true MLE: subject to certain explicit identifiability conditions, the error has asymptotic size $O(1/n^2)$ for some parameters, and $O(1/n^{3/2})$ or $O(1/n)$ for others. In all three cases, the approximation errors are asymptotically negligible compared to the inferential uncertainty. The proof is based on a factorisation of the saddlepoint likelihood into an exact and approximate term, along with an analysis of the approximation error in the gradient of the log-likelihood. This factorisation also gives insight into alternatives to the saddlepoint approximation, including a new and simpler saddlepoint approximation, for which we derive analogous error bounds. As a corollary of our results, we also obtain the asymptotic size of the MLE approximation error when the saddlepoint approximation is replaced by the normal approximation.
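To make the construction concrete, the sketch below (our illustration, not the paper's code) implements the classical saddlepoint density for a sum of $n$ i.i.d. terms and then maximises it as a likelihood. Exponential terms are used because their sum is exactly Gamma distributed, so both the density approximation and the saddlepoint MLE can be checked against closed forms; all helper names are assumptions.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar
from scipy.stats import gamma

def saddlepoint_logpdf(x, n, K, dK, d2K, t_lo, t_hi):
    """Log saddlepoint density at x for a sum of n i.i.d. terms with
    cumulant generating function K (derivatives dK, d2K).  The saddlepoint
    t_hat solves n * dK(t) = x on the bracket (t_lo, t_hi)."""
    t_hat = brentq(lambda t: n * dK(t) - x, t_lo, t_hi)
    return n * K(t_hat) - t_hat * x - 0.5 * np.log(2 * np.pi * n * d2K(t_hat))

def exp_cgf(lam):
    """CGF of an Exponential(rate lam) variable and its derivatives, t < lam."""
    return (lambda t: -np.log(1 - t / lam),
            lambda t: 1.0 / (lam - t),
            lambda t: 1.0 / (lam - t) ** 2)

n, lam, x = 20, 2.0, 9.5
K, dK, d2K = exp_cgf(lam)
logf = saddlepoint_logpdf(x, n, K, dK, d2K, t_lo=-50.0, t_hi=lam - 1e-9)
print(np.exp(logf), gamma.pdf(x, a=n, scale=1 / lam))  # approximation vs exact

# Saddlepoint MLE of lam from the observed sum x: maximise the saddlepoint
# log-likelihood in lam.  (Here it coincides with the true MLE n / x, since
# for the Gamma family the saddlepoint error is a lam-free constant factor.)
def neg_ll(lam_):
    K_, dK_, d2K_ = exp_cgf(lam_)
    return -saddlepoint_logpdf(x, n, K_, dK_, d2K_, t_lo=-50.0, t_hi=lam_ - 1e-9)

lam_spmle = minimize_scalar(neg_ll, bounds=(0.1, 20.0), method="bounded").x
print(lam_spmle, n / x)  # saddlepoint MLE vs exact MLE
```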