
Maximum Likelihood Estimation of Diffusions by Continuous Time Markov Chain

Added by Nguyen Nhu
Publication date: 2021
Research language: English





In this paper we present a novel method for estimating the parameters of parametric diffusion processes. Our approach is based on a closed-form Maximum Likelihood estimator for an approximating Continuous Time Markov Chain (CTMC) of the diffusion process. Unlike typical time-discretization approaches, such as pseudo-likelihood approximations with the Shoji-Ozaki or Kessler methods, the CTMC approximation introduces no time-discretization error during parameter estimation, and is thus well suited for typical econometric situations with infrequently sampled data. Due to the structure of the CTMC, we are able to obtain closed-form approximations of the sample likelihood which hold for general univariate diffusions. Comparisons of the state-discretization approach with approximate MLE (time discretization) and exact MLE (when applicable) demonstrate the favorable performance of the CTMC estimator. Simulated examples are provided in addition to real-data experiments with FX rates and constant-maturity interest rates.
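To make the state-discretization idea concrete, here is a minimal Python sketch of a CTMC likelihood for a univariate diffusion, assuming a uniform grid and one standard upwind rate construction; the paper's exact generator and its closed-form likelihood approximations are not reproduced here, and mu, sigma, and grid below are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def ctmc_generator(grid, mu, sigma, theta):
    # Upwind finite-difference rates: one common way to build a CTMC whose
    # local mean and variance match the diffusion on the grid.
    m = len(grid)
    h = grid[1] - grid[0]  # uniform grid assumed
    Q = np.zeros((m, m))
    for i in range(1, m - 1):
        d = mu(grid[i], theta)
        s2 = sigma(grid[i], theta) ** 2
        up = s2 / (2 * h ** 2) + max(d, 0.0) / h   # jump-up rate
        dn = s2 / (2 * h ** 2) + max(-d, 0.0) / h  # jump-down rate
        Q[i, i + 1] = up
        Q[i, i - 1] = dn
        Q[i, i] = -(up + dn)
    return Q

def ctmc_loglik(theta, obs, dt, grid, mu, sigma):
    # Likelihood of the approximating chain: the matrix exponential gives
    # exact transition probabilities over the sampling interval, so no
    # time-discretization error enters the estimation.
    Q = ctmc_generator(grid, mu, sigma, theta)
    P = expm(Q * dt)
    h = grid[1] - grid[0]
    idx = np.clip(np.round((obs - grid[0]) / h).astype(int), 0, len(grid) - 1)
    p = P[idx[:-1], idx[1:]]
    return np.sum(np.log(np.maximum(p, 1e-300)))

# Illustration: Ornstein-Uhlenbeck drift kappa*(m - x), constant sigma.
mu = lambda x, th: th[0] * (th[1] - x)
sigma = lambda x, th: th[2]
grid = np.linspace(-2.0, 2.0, 201)
# theta_hat = argmax of ctmc_loglik over theta, e.g. via
# scipy.optimize.minimize(lambda th: -ctmc_loglik(th, obs, dt, grid, mu, sigma), th0)
```

Maximizing ctmc_loglik over theta then yields the CTMC-based estimate for a given observation array obs sampled at interval dt.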



Related Research

We develop a Bayesian inference method for diffusions observed discretely and with noise, which is free of discretisation bias. Unlike existing unbiased inference methods, our method does not rely on exact simulation techniques. Instead, our method uses standard time-discretised approximations of diffusions, such as the Euler--Maruyama scheme. Our approach is based on particle marginal Metropolis--Hastings, a particle filter, randomised multilevel Monte Carlo, and importance sampling type correction of approximate Markov chain Monte Carlo. The resulting estimator leads to inference without a bias from the time-discretisation as the number of Markov chain iterations increases. We give convergence results and recommend allocations for algorithm inputs. Our method admits a straightforward parallelisation, and can be computationally efficient. The user-friendly approach is illustrated on three examples, where the underlying diffusion is an Ornstein--Uhlenbeck process, a geometric Brownian motion, and a 2d non-reversible Langevin equation.
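As a point of reference for the abstract above, a minimal Euler--Maruyama discretisation of the Ornstein--Uhlenbeck example might look as follows; this is only the base time-discretised approximation that the particle marginal Metropolis--Hastings, multilevel Monte Carlo, and importance-sampling machinery then debiases, and all parameter values are illustrative.

```python
import numpy as np

def euler_maruyama_ou(x0, kappa, m, sigma, dt, n_steps, rng):
    # Euler--Maruyama scheme for dX_t = kappa*(m - X_t) dt + sigma dW_t
    x = np.empty(n_steps + 1)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)  # Brownian increments
    for k in range(n_steps):
        x[k + 1] = x[k] + kappa * (m - x[k]) * dt + sigma * dW[k]
    return x

rng = np.random.default_rng(0)
path = euler_maruyama_ou(x0=1.0, kappa=2.0, m=0.0, sigma=0.5,
                         dt=0.01, n_steps=1000, rng=rng)
```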
Let X_1, ..., X_n be independent and identically distributed random vectors with a log-concave (Lebesgue) density f. We first prove that, with probability one, there exists a unique maximum likelihood estimator of f. The use of this estimator is attractive because, unlike kernel density estimation, the method is fully automatic, with no smoothing parameters to choose. Although the existence proof is non-constructive, we are able to reformulate the issue of computation in terms of a non-differentiable convex optimisation problem, and thus combine techniques of computational geometry with Shor's r-algorithm to produce a sequence that converges to the maximum likelihood estimate. For the moderate or large sample sizes in our simulations, the maximum likelihood estimator is shown to provide an improvement in performance compared with kernel-based methods, even when we allow the use of a theoretical, optimal fixed bandwidth for the kernel estimator that would not be available in practice. We also present a real data clustering example, which shows that our methodology can be used in conjunction with the Expectation--Maximisation (EM) algorithm to fit finite mixtures of log-concave densities. An R version of the algorithm is available in the package LogConcDEAD -- Log-Concave Density Estimation in Arbitrary Dimensions.
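In the univariate case the convex problem can be written down directly: maximise the mean of phi = log f at the data minus the integral of exp(phi), over concave phi, whose maximiser is piecewise linear with knots at the (assumed distinct) data points. The sketch below solves this with a generic Python solver; LogConcDEAD itself uses Shor's r-algorithm and handles arbitrary dimensions, so this is only an illustration of the formulation.

```python
import numpy as np
from scipy.optimize import minimize

def logconcave_mle_1d(x):
    # 1-d log-concave MLE: maximize mean(phi(X_i)) - integral(exp(phi))
    # over concave, piecewise-linear phi with knots at the sorted data.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    dx = np.diff(x)

    def seg_integral(p0, p1, d):
        # integral of exp(linear interpolation) over one knot interval
        s = (p1 - p0) / d
        flat = np.abs(s) < 1e-8
        return np.where(flat, np.exp(p0) * d,
                        (np.exp(p1) - np.exp(p0)) / np.where(flat, 1.0, s))

    def neg_objective(phi):
        return -phi.mean() + seg_integral(phi[:-1], phi[1:], dx).sum()

    def concavity(phi):
        s = np.diff(phi) / dx      # successive slopes
        return s[:-1] - s[1:]      # concavity: slopes nonincreasing (>= 0)

    phi0 = np.full(n, -np.log(x[-1] - x[0]))  # flat density over the range
    res = minimize(neg_objective, phi0, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": concavity}])
    return x, res.x  # knot locations and fitted values of log f

knots, phi_hat = logconcave_mle_1d(np.random.default_rng(1).normal(size=100))
```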
A maximum likelihood methodology for a general class of models is presented, using an approximate Bayesian computation (ABC) approach. The typical targets of ABC methods are models with intractable likelihoods, and we combine an ABC-MCMC sampler with so-called data cloning for maximum likelihood estimation. Accuracy of ABC methods relies on the use of a small threshold value for comparing simulations from the model and observed data. The proposed methodology shows how to use large threshold values, while the number of data clones is increased to ease convergence towards an approximate maximum likelihood estimate. We show how to exploit the methodology to reduce the number of iterations of a standard ABC-MCMC algorithm and therefore reduce the computational effort, while obtaining reasonable point estimates. Simulation studies show the good performance of our approach on models with intractable likelihoods such as g-and-k distributions, stochastic differential equations and state-space models.
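One way to picture the sampler is the classic ABC-MCMC accept/reject step. The Python sketch below adds a clones parameter that requires all K pseudo-datasets to fall within the threshold, which is one simple surrogate for raising the ABC likelihood to the K-th power; this is a hedged reading, not the paper's exact algorithm, and the flat prior and toy normal-mean example are illustrative.

```python
import numpy as np

def abc_mcmc(y_obs, simulate, prior_logpdf, theta0, eps, n_iter,
             step=0.1, clones=1, rng=None):
    # Plain ABC-MCMC with a symmetric random-walk proposal; with clones > 1
    # every pseudo-dataset must match the data within eps before the usual
    # Metropolis prior-ratio check is applied.
    rng = rng or np.random.default_rng()
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=theta.shape)
        hit = all(np.linalg.norm(simulate(prop, rng) - y_obs) < eps
                  for _ in range(clones))
        if hit and np.log(rng.uniform()) < prior_logpdf(prop) - prior_logpdf(theta):
            theta = prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy example: infer a normal mean from the sample mean of 100 draws.
y_obs = np.array([0.3])
simulate = lambda th, rng: np.array([rng.normal(th[0], 1.0, 100).mean()])
chain = abc_mcmc(y_obs, simulate, lambda th: 0.0, theta0=[0.0],
                 eps=0.05, n_iter=2000, clones=5)
```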
The random coefficients model $Y_i=\beta_{0i}+\beta_{1i}X_{1i}+\beta_{2i}X_{2i}+\ldots+\beta_{di}X_{di}$, with $(\mathbf{X}_i, Y_i, \boldsymbol{\beta}_i)$ i.i.d. and $\boldsymbol{\beta}_i$ independent of $\mathbf{X}_i$, is often used to capture unobserved heterogeneity in a population. We propose a quasi-maximum likelihood method to estimate the joint density of the random coefficients. This method implicitly involves the inversion of the Radon transform in order to reconstruct the joint distribution, and hence is an inverse problem. Nonparametric estimators for the joint density of $\boldsymbol{\beta}_i=(\beta_{0i},\ldots,\beta_{di})$ based on kernel methods or Fourier inversion have been proposed in recent years. Most of these methods assume a heavy-tailed design density $f_{\mathbf{X}}$. To add stability to the solution, we apply a Tikhonov-type regularization method. We analyze the convergence of the method without assuming heavy tails for $f_{\mathbf{X}}$ and illustrate its performance on simulated and real data.
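To spell out the inverse problem: writing $\tilde{\mathbf{X}}_i=(1,\mathbf{X}_i')'$ so that $Y_i=\boldsymbol{\beta}_i'\tilde{\mathbf{X}}_i$, the conditional density of $Y_i$ given $\mathbf{X}_i=\mathbf{x}$ integrates $f_{\boldsymbol{\beta}}$ over a hyperplane and is therefore, up to the normalization of $\tilde{\mathbf{x}}$, a Radon transform (a standard identity for this model, sketched here with normalization constants suppressed):

$$f_{Y\mid\mathbf{X}}(y\mid\mathbf{x}) \;=\; \int_{\{\mathbf{b}\,:\,\mathbf{b}'\tilde{\mathbf{x}}=y\}} f_{\boldsymbol{\beta}}(\mathbf{b})\,d\sigma(\mathbf{b}) \;\propto\; (\mathcal{R}f_{\boldsymbol{\beta}})\!\left(\tilde{\mathbf{x}}/\|\tilde{\mathbf{x}}\|,\; y/\|\tilde{\mathbf{x}}\|\right).$$

Estimating $f_{\boldsymbol{\beta}}$ thus requires inverting $\mathcal{R}$, and a Tikhonov-type estimator stabilizes the ill-posed inversion by solving $\hat{f}_\lambda=\arg\min_f \|\mathcal{R}f-\hat{g}\|^2+\lambda\|f\|^2$, where $\hat{g}$ estimates the left-hand side and $\lambda>0$ is the regularization parameter.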
We derive Laplace-approximated maximum likelihood estimators (GLAMLEs) of the parameters in our Graph Generalized Linear Latent Variable Models. We then study the statistical properties of GLAMLEs as the number of nodes $n_V$ and the number of times the graph is observed, denoted by $K$, diverge to infinity. Finally, we present estimation results from a Monte Carlo simulation with varying numbers of latent variables. In addition, we compare Laplace and variational approximations for inference in our model.
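The device underneath is the generic Laplace approximation to a marginal likelihood: integrate out the latent variables by expanding the joint log-likelihood around its mode. Below is a minimal numeric sketch in Python, with a generic integrand h standing in for the model-specific graph GLLVM joint log-likelihood, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_marginal(h, u0):
    # Laplace approximation: log integral of exp(h(u)) du over R^q
    #   ~= h(u_hat) + (q/2) log(2 pi) - (1/2) log det(-H(u_hat)),
    # where u_hat is the mode of h and H its Hessian there.
    res = minimize(lambda u: -h(u), np.atleast_1d(u0), method="BFGS")
    u_hat = res.x
    q = len(u_hat)
    eps = 1e-5
    H = np.zeros((q, q))
    for i in range(q):           # numeric Hessian via central differences
        for j in range(q):
            ei = np.eye(q)[i] * eps
            ej = np.eye(q)[j] * eps
            H[i, j] = (h(u_hat + ei + ej) - h(u_hat + ei - ej)
                       - h(u_hat - ei + ej) + h(u_hat - ei - ej)) / (4 * eps ** 2)
    _, logdet = np.linalg.slogdet(-H)
    return h(u_hat) + 0.5 * q * np.log(2 * np.pi) - 0.5 * logdet

# Sanity check: for h(u) = -u'u/2 the exact answer is (1/2) log(2 pi) per dim.
print(laplace_log_marginal(lambda u: -0.5 * float(u @ u), [0.3]),
      0.5 * np.log(2 * np.pi))
```

The printed pair should agree, since the Laplace approximation is exact for a Gaussian integrand.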
