
Rare-event Probability Estimation via Empirical Likelihood Maximization

Added by Zdravko Botev
Publication date: 2013
Language: English





We explore past and recent developments in rare-event probability estimation, with a particular focus on a novel Monte Carlo technique, Empirical Likelihood Maximization (ELM). This versatile method involves sampling from a sequence of densities via MCMC and maximizing an empirical likelihood. The quantity of interest, the probability of a given rare event, is estimated by solving a convex optimization program related to likelihood maximization. Numerical experiments are performed with this new technique, and benchmarks are given against existing robust algorithms and estimators.
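The abstract does not spell out the exact program ELM solves, so the sketch below shows only the classical empirical likelihood convex program (maximize the sum of log-weights over the probability simplex subject to a moment constraint), solved with CVXPY on made-up data with a hypothetical estimating function g(x) = x - mu0. It illustrates the kind of convex likelihood-maximization step the method relies on, not the authors' actual estimator.

```python
# Minimal sketch (not the paper's ELM): classical empirical likelihood as a
# convex program, solved with CVXPY. Assumes samples x_1..x_n and an
# estimating function g with E[g(X)] = 0 under the hypothesised model.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=200)   # toy sample
mu0 = 0.0                                      # hypothesised mean
g = (x - mu0).reshape(-1, 1)                   # estimating function g(x) = x - mu0

n = len(x)
w = cp.Variable(n, nonneg=True)                # empirical likelihood weights
objective = cp.Maximize(cp.sum(cp.log(w)))     # log empirical likelihood
constraints = [cp.sum(w) == 1,                 # weights form a distribution
               g.T @ w == 0]                   # moment constraint E_w[g(X)] = 0
prob = cp.Problem(objective, constraints)
prob.solve()

# Log empirical likelihood ratio relative to the uniform weights 1/n.
log_elr = prob.value + n * np.log(n)
print("log EL ratio at mu0:", log_elr)
```

Because the log of an affine-constrained weight vector is concave, this program is convex and solvable by standard conic solvers; how ELM embeds such a step into an MCMC-driven rare-event estimator is described in the paper itself.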



Related research

K. L. Mengersen (2012)
Approximate Bayesian computation (ABC) has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the ABC parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The BCel algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models.
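The BCel algorithm itself is specified in that paper; the following is only a rough sketch of the general idea it builds on, namely weighting prior draws by a profile empirical likelihood defined through estimating equations and reporting an effective sample size. The toy model (a population mean with the constraint E[x - theta] = 0), the prior, and the helper name el_log_ratio are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of empirical-likelihood-based Bayesian computation (not BCel):
# weight prior draws by the profile empirical likelihood of the data and
# report an effective sample size as a performance diagnostic.
import numpy as np
from scipy.optimize import minimize_scalar

def el_log_ratio(z):
    """Profile log empirical likelihood ratio for the constraint E[z] = 0."""
    if z.min() >= 0 or z.max() <= 0:
        return -np.inf                       # 0 not in the convex hull: EL is zero
    # lambda must keep 1 + lambda * z_i > 0; maximize the concave dual in 1D.
    lo, hi = -0.999999 / z.max(), -0.999999 / z.min()
    res = minimize_scalar(lambda lam: -np.sum(np.log1p(lam * z)),
                          bounds=(lo, hi), method="bounded")
    return res.fun                           # = -max_lambda sum log(1 + lambda z)

rng = np.random.default_rng(1)
data = rng.normal(loc=1.5, scale=1.0, size=100)

thetas = rng.normal(loc=0.0, scale=5.0, size=5000)         # draws from a vague prior
logw = np.array([el_log_ratio(data - t) for t in thetas])  # EL weight per draw
w = np.exp(logw - logw.max())
w /= w.sum()

ess = 1.0 / np.sum(w ** 2)                                 # effective sample size
print("posterior mean approx:", np.sum(w * thetas), "ESS:", ess)
```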
The Cross Entropy method is a well-known adaptive importance sampling method for rare-event probability estimation, which requires estimating an optimal importance sampling density within a parametric class. In this article we estimate an optimal importance sampling density within a wider semiparametric class of distributions. We show that this semiparametric version of the Cross Entropy method frequently yields efficient estimators. We illustrate the excellent practical performance of the method with numerical experiments and show that for the problems we consider it typically outperforms alternative schemes by orders of magnitude.
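For context, here is a minimal sketch of the standard parametric Cross Entropy method that the article generalizes, applied to the textbook problem of estimating P(X > gamma) for X ~ Exp(1) using an exponential importance sampling density adapted within its own (mean-parameterized) family. The threshold, elite fraction, and sample sizes are arbitrary illustrative choices; the semiparametric variant described above replaces this parametric family with a wider class.

```python
# Minimal parametric Cross Entropy sketch (the article's semiparametric variant
# is more general): estimate P(X > gamma) for X ~ Exp(1) by adaptive importance
# sampling within the exponential family, mean-parameterized.
import numpy as np

rng = np.random.default_rng(2)
gamma, rho, N = 20.0, 0.1, 10_000   # threshold, elite fraction, samples per iteration
u, v = 1.0, 1.0                     # nominal mean and current importance-sampling mean

def log_density(x, mean):
    return -np.log(mean) - x / mean  # exponential density with the given mean

level = -np.inf
while level < gamma:
    x = rng.exponential(scale=v, size=N)                # sample from Exp(mean=v)
    level = min(gamma, np.quantile(x, 1.0 - rho))       # adaptive intermediate level
    w = np.exp(log_density(x, u) - log_density(x, v))   # likelihood ratios
    elite = x >= level
    v = np.sum(w[elite] * x[elite]) / np.sum(w[elite])  # CE update of the mean

x = rng.exponential(scale=v, size=N)
w = np.exp(log_density(x, u) - log_density(x, v))
estimate = np.mean(w * (x >= gamma))
print("CE estimate:", estimate, "exact:", np.exp(-gamma))
```

The analytic CE update (a likelihood-ratio-weighted mean of the elite samples) is available here because the exponential family admits closed-form cross-entropy minimization; the exact answer exp(-gamma) makes the estimator easy to check.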
The efficient calculation of rare-event kinetics in complex dynamical systems, such as the rate and pathways of ligand dissociation from a protein, is a generally unsolved problem. Markov state models can systematically integrate ensembles of short simulations and thus effectively parallelize the computational effort, but the rare events of interest still need to be spontaneously sampled in the data. Enhanced sampling approaches, such as parallel tempering or umbrella sampling, can massively accelerate the computation of equilibrium expectations, but sacrifice the ability to compute dynamical expectations. In this work we establish a principle to combine knowledge of the equilibrium distribution with kinetics from fast downhill relaxation trajectories using reversible Markov models. This approach is general, as it does not invoke any specific dynamical model, and can provide accurate estimates of rare-event kinetics. Large gains in sampling efficiency can be achieved whenever one direction of the process occurs more rapidly than its reverse, making the approach especially attractive for downhill processes such as folding and binding in biomolecules.
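The reversible estimator that combines equilibrium information with downhill trajectories is developed in that work; the snippet below is only a generic illustration of the kind of quantity such estimators target, namely how a rare-event kinetic observable (a mean first-passage time) is read off a Markov state model's transition matrix. The three-state matrix and the choice of target state are made up for illustration.

```python
# Generic illustration (not the paper's estimator): once a Markov state model's
# transition matrix is available, rare-event kinetics such as mean first-passage
# times follow from a linear solve.
import numpy as np

# Toy 3-state model with a rare transition out of state 0
# (one arbitrary time unit per step).
T = np.array([[0.98, 0.019, 0.001],
              [0.10, 0.88,  0.02 ],
              [0.00, 0.05,  0.95 ]])
target = [2]                                   # "product"/bound state
source = [s for s in range(len(T)) if s not in target]

# The MFPT vector m solves (I - T_AA) m = 1 on the non-target states A.
T_AA = T[np.ix_(source, source)]
m = np.linalg.solve(np.eye(len(source)) - T_AA, np.ones(len(source)))
print("mean first-passage time from each non-target state:", dict(zip(source, m)))

# Sanity check: the stationary distribution is the leading left eigenvector.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print("stationary distribution:", pi)
```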
Xin Gao, Helene Massam (2012)
In this article, we discuss the composite likelihood estimation of sparse Gaussian graphical models. When there are symmetry constraints on the concentration matrix or partial correlation matrix, the likelihood estimation can be computationally intensive. The composite likelihood offers an alternative formulation of the objective function and yields consistent estimators. When a sparse model is considered, the penalized composite likelihood estimation can yield estimates satisfying both the symmetry and sparsity constraints and possessing the oracle property. Application of the proposed method is demonstrated through simulation studies and a network analysis of a biological data set.
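The paper's penalized composite likelihood with symmetry constraints is more than a short snippet can cover, so the sketch below instead shows the closely related neighborhood-selection idea: one l1-penalized conditional regression per node, i.e. a product of conditional likelihoods, which is a common composite-likelihood-style route to a sparse Gaussian graphical model. The simulated chain-structured precision matrix and the penalty value are made up for illustration, and this is not the estimator of the paper.

```python
# Sketch of neighborhood selection (nodewise l1-penalized conditional
# regressions), a composite-likelihood-flavored route to sparse Gaussian
# graphical models; not the paper's penalized composite likelihood estimator.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 500, 6
# Sparse chain-structured precision matrix as ground truth.
Omega = np.eye(p) + np.diag(np.full(p - 1, 0.45), 1) + np.diag(np.full(p - 1, 0.45), -1)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=n)

alpha = 0.05                          # l1 penalty (would normally be chosen by CV/BIC)
adj = np.zeros((p, p), dtype=bool)
for j in range(p):
    others = [k for k in range(p) if k != j]
    fit = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
    adj[j, others] = fit.coef_ != 0   # estimated neighbors of node j

# Symmetrize with the "AND" rule: keep an edge only if both regressions agree.
edges = adj & adj.T
print("estimated edges:",
      sorted({tuple(sorted((int(i), int(j)))) for i, j in zip(*np.nonzero(edges))}))
```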
Mixture models are regularly used in density estimation applications, but the problem of estimating the mixing distribution remains a challenge. Nonparametric maximum likelihood produces estimates of the mixing distribution that are discrete, and these may be hard to interpret when the true mixing distribution is believed to have a smooth density. In this paper, we investigate an algorithm that produces a sequence of smooth estimates that has been conjectured to converge to the nonparametric maximum likelihood estimator. Here we give a rigorous proof of this conjecture and propose a new data-driven stopping rule that produces smooth near-maximum likelihood estimates of the mixing density; simulations demonstrate the good empirical performance of this estimator.
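The smoothing algorithm and data-driven stopping rule are the paper's contribution; the sketch below shows only the standard fixed-grid EM iteration for the discrete nonparametric MLE of a mixing distribution in a Gaussian location mixture, which is the estimator the smooth sequence is conjectured to approach. The grid, noise scale, and iteration count are illustrative choices.

```python
# Standard fixed-grid EM for the (discrete) nonparametric MLE of a mixing
# distribution in a Gaussian location mixture; the smooth estimates and
# stopping rule discussed above are the paper's contribution, not shown here.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
# Data from a smooth mixture: X = theta + noise, theta ~ N(0, 1), noise ~ N(0, 0.5^2).
theta_true = rng.normal(0.0, 1.0, size=400)
x = theta_true + rng.normal(0.0, 0.5, size=400)

grid = np.linspace(x.min(), x.max(), 100)               # candidate support points
L = norm.pdf(x[:, None], loc=grid[None, :], scale=0.5)  # likelihood matrix f(x_i | theta_k)
w = np.full(len(grid), 1.0 / len(grid))                 # uniform initial mixing weights

for _ in range(500):                 # plain EM iterations toward the grid NPMLE
    post = L * w                     # unnormalized responsibilities
    post /= post.sum(axis=1, keepdims=True)
    w = post.mean(axis=0)            # M-step: average responsibilities

loglik = np.sum(np.log(L @ w))
print("log-likelihood:", loglik)
print("weights concentrate on", int((w > 1e-3).sum()), "of", len(grid), "grid points")
```

The printed support count shows the discreteness of the NPMLE that motivates smooth alternatives: the fitted weights collapse onto a handful of grid points even though the true mixing density is smooth.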