
A penalized simulated maximum likelihood approach in parameter estimation for stochastic differential equations

Added by Libo Sun
Publication date: 2013
Language: English





We consider the problem of estimating parameters of stochastic differential equations (SDEs) from discrete-time observations that are either completely or partially observed. The transition density between two observations is generally unknown. We propose an importance sampling approach with an auxiliary parameter for the case where the transition density is unknown, and we embed this auxiliary importance sampler in a penalized maximum likelihood framework, which produces more accurate and computationally efficient parameter estimates. Simulation studies on three different models illustrate promising improvements of the new penalized simulated maximum likelihood method. The new procedure is designed for the challenging case in which some state variables are unobserved and, moreover, the observed states are sparse over time, a situation that commonly arises in ecological studies. We apply this new approach to two epidemics of chronic wasting disease in mule deer.
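As a rough illustration of the idea, the sketch below estimates the parameters of a toy Ornstein-Uhlenbeck SDE by simulated maximum likelihood: the unknown transition density between sparse observations is approximated by Monte Carlo over sub-stepped Euler-Maruyama paths drawn from a simple Brownian-bridge-style importance sampler, and a quadratic penalty is added to the objective. The bridge proposal and the penalty here are illustrative assumptions standing in for the paper's auxiliary-parameter sampler and its penalty; all function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy SDE: dX_t = theta * (mu - X_t) dt + sigma dW_t (Ornstein-Uhlenbeck).
def drift(x, theta, mu):
    return theta * (mu - x)

def simulated_log_lik(params, obs, dt_obs, n_sub=10, n_paths=200):
    """Approximate the log-likelihood of sparse observations by Monte Carlo:
    sub-stepped Euler-Maruyama paths between observations, drawn from a
    Brownian-bridge-style proposal and reweighted by importance weights."""
    theta, mu, sigma = params
    if theta <= 0.0 or sigma <= 0.0:
        return -np.inf
    rng = np.random.default_rng(0)      # fixed seed: common random numbers
    h = dt_obs / n_sub
    total = 0.0
    for x0, x1 in zip(obs[:-1], obs[1:]):
        x = np.full(n_paths, float(x0))
        log_w = np.zeros(n_paths)
        for k in range(n_sub - 1):
            t_left = (n_sub - k) * h    # time remaining until the endpoint
            # Proposal: pull each path linearly toward the observed endpoint.
            p_mean = x + (x1 - x) * (h / t_left)
            p_sd = sigma * np.sqrt(h * (t_left - h) / t_left)
            x_new = p_mean + p_sd * rng.standard_normal(n_paths)
            # Target: one Euler-Maruyama step under the model.
            log_w += norm.logpdf(x_new, x + drift(x, theta, mu) * h,
                                 sigma * np.sqrt(h))
            log_w -= norm.logpdf(x_new, p_mean, p_sd)
            x = x_new
        # Last sub-step: density of landing exactly on the observation.
        log_w += norm.logpdf(x1, x + drift(x, theta, mu) * h,
                             sigma * np.sqrt(h))
        m = log_w.max()
        total += m + np.log(np.mean(np.exp(log_w - m)))   # log-mean-exp
    return total

def penalized_fit(obs, dt_obs, lam=1.0, center=(1.0, 0.0, 1.0)):
    """Penalized simulated MLE: a quadratic penalty shrinks the estimate
    toward a plausible center, stabilizing the Monte Carlo objective."""
    def obj(p):
        pen = lam * np.sum((np.asarray(p) - np.asarray(center)) ** 2)
        return -simulated_log_lik(p, obs, dt_obs) + pen
    return minimize(obj, x0=np.array(center) + 0.1, method="Nelder-Mead")
```

Fixing the random seed inside the likelihood gives common random numbers across optimizer iterations, which keeps the simulated objective smooth enough for a derivative-free optimizer.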



Related research

The random coefficients model $Y_i = \beta_{0i} + \beta_{1i} X_{1i} + \beta_{2i} X_{2i} + \ldots + \beta_{di} X_{di}$, with $(\mathbf{X}_i, Y_i, \boldsymbol{\beta}_i)$ i.i.d. and $\boldsymbol{\beta}_i$ independent of $\mathbf{X}_i$, is often used to capture unobserved heterogeneity in a population. We propose a quasi-maximum likelihood method to estimate the joint density of the random coefficients. This method implicitly involves the inversion of the Radon transform in order to reconstruct the joint distribution, and hence is an inverse problem. Nonparametric estimators of the joint density of $\boldsymbol{\beta}_i = (\beta_{0i}, \ldots, \beta_{di})$ based on kernel methods or Fourier inversion have been proposed in recent years. Most of these methods assume a heavy-tailed design density $f_{\mathbf{X}}$. To add stability to the solution, we apply a Tikhonov-type regularization method. We analyze the convergence of the method without assuming heavy tails for $f_{\mathbf{X}}$ and illustrate its performance on simulated and real data.
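As a minimal sketch of the regularization idea, consider a one-regressor toy version $Y_i = \beta_i X_i$ with the density of $\beta_i$ discretized on a grid: the likelihood of each observation is then linear in the gridded density, and a Tikhonov-type quadratic penalty stabilizes the ill-posed fit. This is an assumption-laden simplification for illustration, not the paper's Radon-inversion estimator; all names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy one-regressor random coefficients model: Y_i = beta_i * X_i.
n = 500
beta_true = rng.normal(1.0, 0.3, n)     # used only to simulate data
x = rng.standard_normal(n)
y = beta_true * x

# Discretize the unknown density f of beta on a grid. With a small Gaussian
# blur eps, the likelihood of observation i is linear in f: (A @ f) * dg.
grid = np.linspace(-1.0, 3.0, 41)
dg = grid[1] - grid[0]
eps = 0.15
A = np.exp(-0.5 * ((y[:, None] - grid[None, :] * x[:, None]) / eps) ** 2) \
    / (eps * np.sqrt(2.0 * np.pi))

def objective(f, alpha=1.0):
    """Negative quasi-log-likelihood plus a Tikhonov (squared-L2) penalty."""
    lik = A @ f * dg + 1e-12
    return -np.mean(np.log(lik)) + alpha * np.sum(f ** 2) * dg

res = minimize(objective, x0=np.full(len(grid), 1.0 / (len(grid) * dg)),
               method="L-BFGS-B", bounds=[(0.0, None)] * len(grid))
f_hat = res.x / (res.x.sum() * dg)      # renormalize to integrate to one
```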
Xin Gao, Daniel Q. Pu, Yuehua Wu (2009)
In a Gaussian graphical model, the conditional independence between two variables is characterized by the corresponding zero entries in the inverse covariance matrix. Maximum likelihood methods using the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001) and the adaptive LASSO penalty (Zou, 2006) have been proposed in the literature. In this article, we establish that using the Bayesian information criterion (BIC) to select the tuning parameter in penalized likelihood estimation with both types of penalties leads to consistent graphical model selection. We compare the empirical performance of BIC with cross-validation and demonstrate the advantage of the BIC criterion for tuning parameter selection through simulation studies.
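A minimal sketch of BIC-based tuning parameter selection, using scikit-learn's graphical lasso as a readily available stand-in for the SCAD and adaptive LASSO penalties studied in the paper; the BIC recipe itself carries over (minus twice the log-likelihood plus log(n) times the number of estimated edges). The function name and grid of tuning values are illustrative.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def bic_select(X, alphas):
    """Select the graphical-lasso tuning parameter by BIC:
    -2 * log-likelihood + log(n) * (number of estimated edges)."""
    n, _ = X.shape
    S = np.cov(X, rowvar=False)
    best = (np.inf, None, None)
    for a in alphas:
        Theta = GraphicalLasso(alpha=a, max_iter=200).fit(X).precision_
        _, logdet = np.linalg.slogdet(Theta)
        loglik = 0.5 * n * (logdet - np.trace(S @ Theta))
        edges = np.count_nonzero(np.triu(Theta, k=1))   # nonzero off-diagonals
        bic = -2.0 * loglik + np.log(n) * edges
        if bic < best[0]:
            best = (bic, a, Theta)
    return best

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
bic, alpha_hat, Theta_hat = bic_select(X, np.logspace(-2.0, 0.0, 10))
```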
Non-parametric maximum likelihood estimation encompasses a group of classic methods for estimating distribution-related functions from potentially censored and truncated data, with extensive applications in survival analysis. These methods, including the Kaplan-Meier estimator and Turnbull's method, often overfit, especially when the sample size is small. We propose an improvement to these methods by applying kernel smoothing to their raw estimates, based on a BIC-type loss function that balances the trade-off between model fit and model complexity. In the context of a longitudinal study with repeated observations, we detail the proposed smoothing procedure and optimization algorithm. Through extensive simulation studies over multiple realistic scenarios, we demonstrate that our smoothing-based procedure provides better overall accuracy in both survival function estimation and individual-level time-to-event prediction by reducing overfitting. Our smoothing procedure decreases the discrepancy between the estimated and true simulated survival functions using interval-censored data by up to 49% compared to the raw unsmoothed estimate, with similar improvements of up to 41% and 23% in within-sample and out-of-sample prediction, respectively. Finally, we apply our method to real data on censored breast cancer diagnoses, where it similarly improves on empirical survival estimates from uncensored data. We provide an R package, SISE, implementing our penalized likelihood method.
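A minimal sketch of the general recipe for right-censored data (the paper handles interval censoring and repeated observations): compute the raw Kaplan-Meier estimate, smooth it with a Gaussian kernel, and choose the bandwidth with a BIC-flavored loss. The effective-dimension proxy below is a crude illustrative assumption, not the paper's criterion, and the names are hypothetical.

```python
import numpy as np

def kaplan_meier(times, events):
    """Raw Kaplan-Meier survival estimate at each distinct event time
    (events: 1 = observed event, 0 = right-censored)."""
    grid, surv, s = [], [], 1.0
    for ti in np.unique(times[events == 1]):
        d = np.sum((times == ti) & (events == 1))   # events at ti
        r = np.sum(times >= ti)                     # still at risk at ti
        s *= 1.0 - d / r
        grid.append(ti)
        surv.append(s)
    return np.array(grid), np.array(surv)

def smooth_survival(grid, surv, bw, t_eval):
    """Gaussian-kernel smoothing of the raw step-function estimate."""
    w = np.exp(-0.5 * ((t_eval[:, None] - grid[None, :]) / bw) ** 2)
    return (w @ surv) / w.sum(axis=1)

def bic_type_loss(grid, surv, bw, n):
    """BIC-flavored criterion: fidelity to the raw estimate plus a
    complexity term that grows as the bandwidth shrinks."""
    rss = np.sum((smooth_survival(grid, surv, bw, grid) - surv) ** 2)
    k_eff = (grid.max() - grid.min()) / bw          # crude effective dimension
    return n * np.log(rss / n + 1e-12) + np.log(n) * k_eff

rng = np.random.default_rng(0)
t_event = rng.exponential(1.0, 100)
t_cens = rng.exponential(1.5, 100)
times = np.minimum(t_event, t_cens)
events = (t_event <= t_cens).astype(int)
grid, surv = kaplan_meier(times, events)
bw_hat = min(np.linspace(0.05, 1.0, 20),
             key=lambda b: bic_type_loss(grid, surv, b, len(times)))
```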
This article investigates the origin of numerical issues in maximum likelihood parameter estimation for Gaussian process (GP) interpolation and proposes simple but effective strategies for improving commonly used open-source software implementations. Although this work targets a basic problem, a host of studies, particularly in the Bayesian optimization literature, rely on off-the-shelf GP implementations, so robust GP implementations are critical for the conclusions of those studies to be reliable and reproducible.
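Two widely used strategies of the kind such studies examine are optimizing kernel parameters on the log scale and retrying a failed Cholesky factorization with escalating diagonal jitter; the sketch below shows both for a plain RBF-kernel GP. This is an illustration of standard practice, not the paper's specific recommendations.

```python
import numpy as np

def gp_nll(log_params, X, y, jitter0=1e-10, max_tries=8):
    """Negative log marginal likelihood of an RBF-kernel GP with two common
    robustness fixes: parameters live on the log scale, and a failed
    Cholesky is retried with geometrically increasing diagonal jitter."""
    var, ls = np.exp(log_params)            # signal variance, lengthscale
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = var * np.exp(-0.5 * d2 / ls ** 2)
    jitter = jitter0
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(K + jitter * np.eye(len(y)))
            break
        except np.linalg.LinAlgError:
            jitter *= 10.0                  # escalate until K is numerically SPD
    else:
        return np.inf                       # tell the optimizer this point failed
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.sum(np.log(np.diag(L)))
            + 0.5 * len(y) * np.log(2.0 * np.pi))

# e.g.: minimize(gp_nll, np.log([1.0, 1.0]), args=(X, y), method="Nelder-Mead")
```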
A new robust stochastic volatility (SV) model with Student-t marginals is proposed. Our process is defined through a linear normal regression model driven by a latent gamma process that controls temporal dependence. This gamma process is strategically chosen so that an explicit expression for the pairwise joint density function of the Student-t response process can be found. With this at hand, we propose composite likelihood (CL) based inference for our model, which can be implemented straightforwardly at low computational cost. This is a notable advantage of our Student-t SV process over existing SV models in the literature, which involve computationally heavy algorithms for parameter estimation. Aiming at precise estimation of the parameters related to the latent process, we propose a CL expectation-maximization algorithm and discuss a bootstrap approach for obtaining standard errors. The finite-sample performance of our composite likelihood methods is assessed through Monte Carlo simulations. The methodology is motivated by an empirical application in the financial market: we analyze the relationship, across multiple time periods, between various US sector Exchange-Traded Fund (ETF) returns and individual companies' stock price returns based on our novel Student-t model. This relationship is further utilized in selecting optimal financial portfolios.
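A minimal simulation sketch of the normal-gamma mixture mechanism behind such models: a serially dependent latent gamma process with Gamma(nu/2, nu/2) marginals scales Gaussian noise, so each observation is marginally Student-t with nu degrees of freedom. The Gaussian-copula construction of the dependence below is an assumption for illustration; the paper's specific gamma process differs.

```python
import numpy as np
from scipy.stats import gamma, norm

def simulate_t_sv(n, nu=5.0, mu=0.0, sigma=1.0, rho=0.9, seed=0):
    """Simulate a toy Student-t SV series: a latent gamma process lambda_t
    with Gamma(nu/2, rate nu/2) marginals and AR-style serial dependence
    (built here via a Gaussian copula) scales the noise, so each y_t is
    marginally Student-t with nu degrees of freedom."""
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.standard_normal()
    for t in range(1, n):                   # stationary AR(1) Gaussian driver
        z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
    u = norm.cdf(z)                         # uniforms with serial dependence
    lam = gamma.ppf(u, a=nu / 2.0, scale=2.0 / nu)  # Gamma(nu/2, nu/2) marginals
    y = mu + (sigma / np.sqrt(lam)) * rng.standard_normal(n)
    return y, lam

y, lam = simulate_t_sv(1000)
```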