
Robust adaptive importance sampling for normal random vectors

Added by Jerome Lelong
Publication date: 2009
Language: English





Adaptive Monte Carlo methods are very efficient techniques designed to tune simulation estimators on-line. In this work, we present an alternative to stochastic approximation to tune the optimal change of measure in the context of importance sampling for normal random vectors. Unlike stochastic approximation, which requires very fine tuning in practice, we propose to use sample average approximation and deterministic optimization techniques to devise a robust and fully automatic variance reduction methodology. The same samples are used in the sample optimization of the importance sampling parameter and in the Monte Carlo computation of the expectation of interest with the optimal measure computed in the previous step. We prove that this highly dependent Monte Carlo estimator is convergent and satisfies a central limit theorem with the optimal limiting variance. Numerical experiments confirm the performance of this estimator: in comparison with the crude Monte Carlo method, the computation time needed to achieve a given precision is divided by a factor between 3 and 15.
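
As a concrete illustration of the two-step procedure described in the abstract, here is a minimal Python sketch, assuming NumPy/SciPy; the one-dimensional call payoff and all parameter values are our own illustrative choices, not the paper's test cases. For $G \sim N(0, I_d)$ one has $E[f(G)] = E[f(G+\theta) e^{-\theta \cdot G - |\theta|^2/2}]$ for every $\theta$, and the second moment of the shifted estimator equals $E[f(G)^2 e^{-\theta \cdot G + |\theta|^2/2}]$, a strictly convex function of $\theta$; its sample average is handed to a deterministic optimizer, and the same Gaussian draws are then reused in the shifted estimator.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

d, n = 1, 100_000
G = rng.standard_normal((n, d))        # the SAME draws are reused twice

# Illustrative payoff: discounted out-of-the-money call on a lognormal asset.
S0, K, r, sigma, T = 100.0, 150.0, 0.05, 0.2, 1.0
def f(g):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * g[:, 0])
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

fG2 = f(G) ** 2

# Sample average approximation of the second moment of the shifted estimator,
# v(theta) = E[f(G)^2 exp(-theta.G + |theta|^2/2)]: strictly convex, so a
# deterministic optimizer replaces the delicate stochastic approximation.
def v(theta):
    return np.mean(fG2 * np.exp(-G @ theta + 0.5 * theta @ theta))

def grad_v(theta):
    w = fG2 * np.exp(-G @ theta + 0.5 * theta @ theta)
    return (w[:, None] * (theta[None, :] - G)).mean(axis=0)

theta_star = minimize(v, np.zeros(d), jac=grad_v, method="BFGS").x

# Reuse the same G_i with the optimal shift:
# E[f(G)] = E[f(G + theta) exp(-theta.G - |theta|^2/2)] for every theta.
is_vals = f(G + theta_star) * np.exp(-G @ theta_star
                                     - 0.5 * theta_star @ theta_star)
crude = f(G)
print("crude MC:", crude.mean(), "+/-", 1.96 * crude.std() / np.sqrt(n))
print("SAA-IS  :", is_vals.mean(), "+/-", 1.96 * is_vals.std() / np.sqrt(n))
print("variance ratio:", crude.var() / is_vals.var())
```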



Related research

The Adaptive Multiple Importance Sampling (AMIS) algorithm is aimed at an optimal recycling of past simulations in an iterated importance sampling scheme. The difference with earlier adaptive importance sampling implementations like Population Monte Carlo is that the importance weights of all simulated values, past as well as present, are recomputed at each iteration, following the technique of the deterministic multiple mixture estimator of Owen and Zhou (2000). Although the convergence properties of the algorithm cannot be fully investigated, we demonstrate through a challenging banana shape target distribution and a population genetics example that the improvement brought by this technique is substantial.
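
A minimal sketch of the AMIS weight-recycling mechanism under illustrative assumptions (the banana-shaped target, the Gaussian proposal family and all tuning constants are our choices, not the exact experimental setup of the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(1)

def log_pi(x):                            # unnormalized banana-shaped target
    return -x[:, 0]**2 / 8.0 - 0.5 * (x[:, 1] + 0.5 * x[:, 0]**2 - 2.0)**2

T, N = 10, 2_000                          # iterations, draws per iteration
mu, cov = np.zeros(2), 25.0 * np.eye(2)   # deliberately wide initial proposal
params, samples = [], []

for t in range(T):
    params.append((mu.copy(), cov.copy()))
    samples.append(rng.multivariate_normal(mu, cov, size=N))
    X = np.vstack(samples)                # past AND present samples
    # Deterministic multiple mixture (Owen-Zhou) weights: every sample, old
    # or new, is reweighted against the mixture of ALL proposals used so
    # far; this recomputation step is what defines AMIS.
    mix = np.mean([mvn.pdf(X, m, c) for m, c in params], axis=0)
    w = np.exp(log_pi(X)) / mix
    w /= w.sum()
    # Adapt the proposal to the weighted sample moments.
    mu = w @ X
    cov = (X - mu).T @ ((X - mu) * w[:, None]) + 1e-6 * np.eye(2)

print("self-normalized estimate of the target mean:", w @ X)
```
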
L. Martino, V. Elvira, D. Luengo (2015)
Monte Carlo methods represent the de facto standard for approximating complicated integrals involving multidimensional target distributions. In order to generate random realizations from the target distribution, Monte Carlo techniques use simpler proposal probability densities to draw candidate samples. The performance of any such method is strictly related to the specification of the proposal distribution, such that unfortunate choices easily wreak havoc on the resulting estimators. In this work, we introduce a layered (i.e., hierarchical) procedure to generate samples employed within a Monte Carlo scheme. This approach ensures that an appropriate equivalent proposal density is always obtained automatically (thus eliminating the risk of a catastrophic performance), although at the expense of a moderate increase in complexity. Furthermore, we provide a general unified importance sampling (IS) framework, where multiple proposal densities are employed and several IS schemes are introduced by applying the so-called deterministic mixture approach. Finally, given these schemes, we also propose a novel class of adaptive importance samplers using a population of proposals, where the adaptation is driven by independent parallel or interacting Markov Chain Monte Carlo (MCMC) chains. The resulting algorithms efficiently combine the benefits of both IS and MCMC methods.
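
A sketch of the layered idea under illustrative assumptions: an upper layer of parallel Metropolis chains moves the location parameters of a population of Gaussian proposals, while the lower importance-sampling layer applies deterministic-mixture weights. The bimodal target, the proposal family and all constants are our choices, not the paper's.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(2)

def log_pi(x):                           # unnormalized bimodal target
    return np.logaddexp(-0.5 * np.sum((x - 3.0)**2, axis=-1),
                        -0.5 * np.sum((x + 3.0)**2, axis=-1))

M, T, K, d = 8, 50, 20, 2                # chains, iterations, draws, dimension
mus = rng.normal(0.0, 1.0, size=(M, d))  # population of proposal locations
cov = np.eye(d)
X, W = [], []

for t in range(T):
    # Upper layer: one Metropolis step per parallel chain, invariant for pi,
    # moves the proposal locations without using importance weights.
    cand = mus + rng.normal(0.0, 1.0, size=(M, d))
    accept = np.log(rng.random(M)) < log_pi(cand) - log_pi(mus)
    mus[accept] = cand[accept]
    # Lower layer: draw from every proposal and apply deterministic-mixture
    # weights, so no single badly placed proposal can ruin the estimator.
    for m in range(M):
        x = rng.multivariate_normal(mus[m], cov, size=K)
        mix = np.mean([mvn.pdf(x, loc, cov) for loc in mus], axis=0)
        X.append(x)
        W.append(np.exp(log_pi(x)) / mix)

X, W = np.vstack(X), np.concatenate(W)
print("self-normalized estimate of E[X] (true value: 0):", (W / W.sum()) @ X)
```
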
We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, an adaptive importance sampling technique useful for sampling multimodal target distributions. The importance function is based on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly, yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to a multiplicative constant, the free energy, and the discrete-valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and generalizes the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets, in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. The algorithm can also be seen as a generalization of well-tempered metadynamics. We prove the convergence of the algorithm and analyze its efficiency numerically on a toy example.
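
A minimal sketch of the plain Wang-Landau mechanism that this work generalizes; the paper's refinements (i) and (ii) are deliberately omitted, and the double-well target, the partition and the step sizes are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_pi(x):                            # double well: two metastable states
    return -(x**2 - 1.0)**2 / 0.05

edges = np.linspace(-2.0, 2.0, 21)        # partition into 20 disjoint sets
def bin_of(x):
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, len(edges) - 2))

S = np.zeros(len(edges) - 1)              # learned log-weights of the sets
x = -1.0
for n in range(1, 200_001):
    y = x + 0.2 * rng.standard_normal()
    # Metropolis step for the biased target pi(x) exp(-S[bin(x)]): sets the
    # chain has already visited are penalized, pushing it over the barrier.
    if np.log(rng.random()) < (log_pi(y) - S[bin_of(y)]) - (log_pi(x) - S[bin_of(x)]):
        x = y
    S[bin_of(x)] += 10.0 / n              # decreasing penalization step

# Up to an additive constant, S estimates the log-weights of the sets, i.e.
# (minus) the free energy of the collective variable bin_of.
print(np.round(S - S.max(), 1))
```
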
Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network setting (e.g., a two-node tandem network). Exploiting connections between importance sampling, differential games, and classical subsolutions of the corresponding Isaacs equation, we show how to design and analyze simple and efficient dynamic importance sampling schemes for general classes of networks. The models used to illustrate the approach include $d$-node tandem Jackson networks and a two-node network with feedback, and the rare events studied are those of large queueing backlogs, including total population overflow and the overflow of individual buffers.
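
To see why the a priori fixed change of measure is the natural baseline, here is a sketch of the one setting where it provably works, a single M/M/1 queue: swapping the arrival and service rates makes the likelihood ratio of every successful path constant. The failure of this recipe beyond a single node (e.g., in two-node tandems) is what motivates the dynamic, subsolution-based schemes described above. All rates and levels below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

lam, mu, b, n = 0.3, 0.7, 25, 20_000     # arrival/service rates, level, paths
p = lam / (lam + mu)                     # P(up-step) in the embedded chain
p_is = mu / (lam + mu)                   # change of measure: swap the rates

hits = 0
for _ in range(n):
    x = 1
    while 0 < x < b:                     # simulate under the swapped dynamics
        x += 1 if rng.random() < p_is else -1
    hits += (x == b)

# Every path reaching b has exactly b - 1 more up-steps than down-steps, so
# its likelihood ratio is the constant (p / (1 - p))**(b - 1), which makes
# the estimator efficient for a single queue; no such fixed measure works
# for general networks.
lr = (p / (1.0 - p)) ** (b - 1)
print("IS estimate :", lr * hits / n)
print("exact value :", (1 - mu / lam) / (1 - (mu / lam) ** b))
```
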
We consider the distributional fixed-point equation $$R \stackrel{\mathcal{D}}{=} Q \vee \left( \bigvee_{i=1}^N C_i R_i \right),$$ where the $\{R_i\}$ are i.i.d. copies of $R$, independent of the vector $(Q, N, \{C_i\})$, where $N \in \mathbb{N}$, $Q, \{C_i\} \geq 0$ and $P(Q > 0) > 0$. By setting $W = \log R$, $X_i = \log C_i$, $Y = \log Q$, it is equivalent to the high-order Lindley equation $$W \stackrel{\mathcal{D}}{=} \max\left\{ Y, \, \max_{1 \leq i \leq N} (X_i + W_i) \right\}.$$ It is known that under Kesten's assumptions, $$P(W > t) \sim H e^{-\alpha t}, \qquad t \to \infty,$$ where $\alpha > 0$ solves the Cramér-Lundberg equation $E\left[ \sum_{i=1}^N C_i^\alpha \right] = E\left[ \sum_{i=1}^N e^{\alpha X_i} \right] = 1$. The main goal of this paper is to provide an explicit representation for $P(W > t)$ which can be directly connected to the underlying weighted branching process where $W$ is constructed, and which can be used to construct unbiased and strongly efficient estimators for all $t$. Furthermore, we show how this new representation can be analyzed directly using Alsmeyer's Markov renewal theorem, yielding an alternative representation for the constant $H$. We provide numerical examples illustrating the use of this new algorithm.
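
As a small numerical illustration of the Cramér-Lundberg equation above, specialized to the classical case $N \equiv 1$ (the ordinary Lindley equation) with $X = \log C \sim N(m, s^2)$ and $m < 0$: there $E[e^{\alpha X}] = 1$ has the unique positive root $\alpha = -2m/s^2$, recovered here by bracketing. The distribution and parameters are our illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

# X = log C ~ N(m, s^2) with negative mean, so that E[e^{alpha X}] = 1 has a
# unique positive root; here it is available in closed form for comparison.
m, s = -0.5, 1.0

def cramer_lundberg(alpha):              # E[e^{alpha X}] - 1 for X ~ N(m, s^2)
    return np.exp(alpha * m + 0.5 * alpha**2 * s**2) - 1.0

alpha = brentq(cramer_lundberg, 1e-9, 50.0)
print("numerical root:", alpha, "  closed form -2m/s^2:", -2 * m / s**2)
```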