
Importance sampling for maxima on trees

Added by Michael Conroy
Publication date: 2020
Language: English





We consider the distributional fixed-point equation $$R \stackrel{\mathcal{D}}{=} Q \vee \left( \bigvee_{i=1}^N C_i R_i \right),$$ where the $\{R_i\}$ are i.i.d. copies of $R$, independent of the vector $(Q, N, \{C_i\})$, with $N \in \mathbb{N}$, $Q, \{C_i\} \geq 0$ and $P(Q > 0) > 0$. Setting $W = \log R$, $X_i = \log C_i$, $Y = \log Q$ makes it equivalent to the high-order Lindley equation $$W \stackrel{\mathcal{D}}{=} \max\left\{ Y, \, \max_{1 \leq i \leq N} (X_i + W_i) \right\}.$$ It is known that under Kesten's assumptions, $$P(W > t) \sim H e^{-\alpha t}, \qquad t \to \infty,$$ where $\alpha > 0$ solves the Cramér-Lundberg equation $E\left[ \sum_{i=1}^N C_i^\alpha \right] = E\left[ \sum_{i=1}^N e^{\alpha X_i} \right] = 1$. The main goal of this paper is to provide an explicit representation for $P(W > t)$ that can be directly connected to the underlying weighted branching process on which $W$ is constructed, and that can be used to construct unbiased and strongly efficient estimators for all $t$. Furthermore, we show how this new representation can be analyzed directly using Alsmeyer's Markov renewal theorem, yielding an alternative representation for the constant $H$. We provide numerical examples illustrating the use of this new algorithm.
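As a point of reference for the tilting idea behind such estimators, consider the special case $N = 1$, $Q = 1$ (so $Y = 0$), where $W$ reduces to the all-time maximum of a random walk with negative drift and Siegmund's classical algorithm already yields an unbiased estimator with bounded relative error; the paper's estimator for the general weighted branching case is more involved. A minimal Python sketch with Gaussian increments (the parameters mu and sigma are illustrative assumptions):

```python
import numpy as np

# Illustrative parameters: increments X ~ N(mu, sigma^2) with mu < 0, so the
# walk drifts down and the Cramer-Lundberg root alpha solves E[e^{alpha X}] = 1.
mu, sigma = -1.0, 2.0
alpha = -2.0 * mu / sigma**2      # closed form for Gaussian increments

def siegmund_replicate(t, rng):
    """One unbiased importance-sampling replicate of P(W > t) in the N = 1,
    Q = 1 case. Increments are drawn from the alpha-tilted law N(-mu, sigma^2),
    under which the walk drifts upward and crosses level t almost surely; the
    likelihood ratio of the path stopped at first passage is exp(-alpha * S)."""
    s = 0.0
    while s <= t:
        s += rng.normal(-mu, sigma)   # tilted increment
    return np.exp(-alpha * s)

rng = np.random.default_rng(0)
t = 10.0
est = np.mean([siegmund_replicate(t, rng) for _ in range(10_000)])
print(f"P(W > {t}) ~ {est:.3e}   (asymptotic rate e^(-alpha t) = {np.exp(-alpha * t):.3e})")
```

Since each replicate is bounded by $e^{-\alpha t}$, the relative error stays bounded in $t$, which is the same strong-efficiency notion the paper targets for the branching case.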



Related research


Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network setting (e.g., a two-node tandem network). Exploiting connections between importance sampling, differential games, and classical subsolutions of the corresponding Isaacs equation, we show how to design and analyze simple and efficient dynamic importance sampling schemes for general classes of networks. The models used to illustrate the approach include $d$-node tandem Jackson networks and a two-node network with feedback, and the rare events studied are those of large queueing backlogs, including total population overflow and the overflow of individual buffers.
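For context, the a priori fixed change of measure is easy to state for a single node. A hedged sketch (rates and level are illustrative, not from the paper): to estimate the probability that an M/M/1 queue started at level 1 reaches level n before emptying, simulate the embedded random walk with the arrival and service rates swapped and weight each path by its likelihood ratio.

```python
import numpy as np

# Illustrative single-node example: M/M/1 queue, arrival rate lam < service
# rate mu. Under the swapped-rate dynamics the queue drifts upward, so the
# target level n is hit often; the likelihood ratio corrects the bias.
lam, mu, n = 1.0, 2.0, 20

def overflow_replicate(rng):
    """One importance-sampling replicate of P(level n is reached before 0).
    Each up-step carries likelihood ratio lam/mu, each down-step mu/lam."""
    level, log_lr = 1, 0.0
    p_up_tilted = mu / (lam + mu)     # up-step probability after swapping rates
    while 0 < level < n:
        if rng.random() < p_up_tilted:
            level, log_lr = level + 1, log_lr + np.log(lam / mu)
        else:
            level, log_lr = level - 1, log_lr + np.log(mu / lam)
    return np.exp(log_lr) if level == n else 0.0

rng = np.random.default_rng(1)
est = np.mean([overflow_replicate(rng) for _ in range(10_000)])
exact = (1 - mu / lam) / (1 - (mu / lam) ** n)   # gambler's-ruin formula
print(f"IS estimate: {est:.3e}   exact: {exact:.3e}")
```

This fixed tilt works well for a single node; the point of the paper is precisely that such static choices break down in networks, motivating the dynamic, subsolution-based schemes.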
Adaptive Monte Carlo methods are very efficient techniques designed to tune simulation estimators on-line. In this work, we present an alternative to stochastic approximation to tune the optimal change of measure in the context of importance sampling for normal random vectors. Unlike stochastic approximation, which requires very fine tuning in practice, we propose to use sample average approximation and deterministic optimization techniques to devise a robust and fully automatic variance reduction methodology. The same samples are used in the sample optimization of the importance sampling parameter and in the Monte Carlo computation of the expectation of interest with the optimal measure computed in the previous step. We prove that this highly dependent Monte Carlo estimator is convergent and satisfies a central limit theorem with the optimal limiting variance. Numerical experiments confirm the performance of this estimator: in comparison with the crude Monte Carlo method, the computation time needed to achieve a given precision is divided by a factor between 3 and 15.
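A minimal sketch of that pipeline, assuming a mean-shift family $N(\theta, I_d)$ and an illustrative event (the function f, the level a, and all parameters are assumptions, not the paper's setup): the second moment of the shifted estimator is convex in theta, so it can be minimized by deterministic optimization on a fixed sample, and the same sample is then reused for the final estimate.

```python
import numpy as np
from math import erfc, sqrt
from scipy.optimize import minimize

d, n, a = 5, 20_000, 5.0
rng = np.random.default_rng(2)
Z = rng.standard_normal((n, d))   # the single sample, used twice

def f(x):
    # illustrative rare-ish event: the coordinates sum past the level a
    return (x.sum(axis=1) > a).astype(float)

fz2 = f(Z) ** 2

def second_moment(theta):
    # sample average of f(Z)^2 * exp(-theta.Z + |theta|^2/2), the estimator's
    # second moment under the mean shift theta; convex in theta
    return np.mean(fz2 * np.exp(-Z @ theta + 0.5 * theta @ theta))

theta_star = minimize(second_moment, x0=np.zeros(d)).x

# Reuse the same Z: with X = Z + theta*, the estimator is f(X) times the
# likelihood ratio exp(-theta*.X + |theta*|^2/2).
X = Z + theta_star
est = np.mean(f(X) * np.exp(-X @ theta_star + 0.5 * theta_star @ theta_star))
exact = 0.5 * erfc(a / sqrt(2 * d))   # P(N(0, d) > a), for comparison
print(f"IS estimate: {est:.4e}   exact: {exact:.4e}")
```

The dependence between the optimization step and the estimation step, both built on the same Z, is exactly what the paper's central limit theorem addresses.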
Coupling from the past (CFTP) methods have been used to generate perfect samples from finite Gibbs hard-sphere models, an important class of spatial point processes consisting of spheres whose centers lie in a bounded region and are distributed as a homogeneous Poisson point process (PPP) conditioned so that the spheres do not overlap. We propose an alternative importance-sampling-based rejection methodology for the perfect sampling of these models. We analyze the asymptotic expected running-time complexity of the proposed method when the intensity of the reference PPP increases to infinity while the (expected) sphere radius decreases to zero at varying rates. We further compare the performance of the proposed method, analytically and numerically, with a naive rejection algorithm and with popular dominated CFTP algorithms. Our analysis relies upon identifying the large-deviations decay rate of the non-overlapping probability of spheres whose centers are distributed as a homogeneous PPP.
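As a baseline for comparison, the naive rejection algorithm mentioned above is simple to sketch (intensity, radius, and window are illustrative):

```python
import numpy as np

# Naive rejection for a hard-sphere (hard-disk) model on the unit square:
# draw a homogeneous PPP and accept only configurations in which no two
# centers are within distance 2r. The accepted draw is an exact sample.
intensity, r = 30.0, 0.02
rng = np.random.default_rng(3)

def sample_hard_spheres():
    attempts = 0
    while True:
        attempts += 1
        k = rng.poisson(intensity)
        pts = rng.random((k, 2))
        diff = pts[:, None, :] - pts[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        np.fill_diagonal(dist, np.inf)
        if k < 2 or dist.min() >= 2 * r:
            return pts, attempts

centers, attempts = sample_hard_spheres()
print(f"accepted {len(centers)} disks after {attempts} attempts")
```

The acceptance probability decays rapidly as the intensity grows, which is why the running-time behavior in the intensity/radius asymptotics is the interesting question.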
This paper makes three contributions to estimating the number of perfect matchings in bipartite graphs. First, we prove that the popular sequential importance sampling algorithm works in polynomial time for dense bipartite graphs. More precisely, our algorithm gives a $(1-\epsilon)$-approximation for the number of perfect matchings of a $\lambda$-dense bipartite graph, using $O(n^{\frac{1-2\lambda}{8\lambda}+\epsilon^{-2}})$ samples. With size $n$ on each side and for $\frac{1}{2} > \lambda > 0$, a $\lambda$-dense bipartite graph has all degrees greater than $(\lambda+\frac{1}{2})n$. Second, practical applications of the algorithm require many calls to matching algorithms; a novel preprocessing step is provided which yields significant improvements. Third, three applications are given: counting Latin squares, a practical way of computing the greedy strategy for a card guessing game with feedback, and stochastic block models. In all three examples, sequential importance sampling allows the treatment of practical problems of reasonably large size.
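A minimal sketch of the basic sequential importance sampling estimator (graph and sample size illustrative): match left vertices in a fixed order, each to a uniformly chosen still-available neighbor, and weight the sample by the product of the numbers of available choices; the weight is an unbiased estimate of the number of perfect matchings.

```python
import numpy as np

def sis_replicate(A, rng):
    """One SIS estimate of the number of perfect matchings of the bipartite
    graph with 0/1 biadjacency matrix A. A sample that gets stuck counts 0."""
    n = A.shape[0]
    available = np.ones(n, dtype=bool)   # right vertices not yet matched
    weight = 1.0
    for i in range(n):
        choices = np.flatnonzero(A[i] & available)
        if choices.size == 0:
            return 0.0
        weight *= choices.size           # inverse of the uniform choice probability
        available[rng.choice(choices)] = False
    return weight

rng = np.random.default_rng(4)
A = np.ones((6, 6), dtype=int) - np.eye(6, dtype=int)   # K_{6,6} minus a matching
est = np.mean([sis_replicate(A, rng) for _ in range(20_000)])
print(f"estimated matchings: {est:.1f}   exact: 265 (derangements of 6)")
```

The paper's contribution is quantifying how many such samples suffice when the graph is dense; the estimator itself is the classical one above.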
We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, an adaptive importance sampling technique useful for sampling multimodal target distributions. The importance function is based on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly, yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to a multiplicative constant, the free energy, and the discrete-valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and generalizes the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets, in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. The algorithm can also be seen as a generalization of well-tempered metadynamics. We prove the convergence of the algorithm and analyze its efficiency numerically on a toy example.
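A toy sketch in the Wang-Landau spirit, under simplifying assumptions (a 1-D discrete double well, each state its own set of the partition, a constant gain, and the full rather than fractional bias), so it illustrates the mechanism but not the paper's two modifications:

```python
import numpy as np

L, gamma, n_steps = 50, 0.01, 200_000
xg = np.arange(L)
V = (xg - 10.0) ** 2 * (xg - 40.0) ** 2 / 12_500.0   # two wells, barrier ~ 4
rng = np.random.default_rng(5)

theta = np.zeros(L)   # log-weights of the sets, learnt on the fly
x = 0
for _ in range(n_steps):
    y = x + rng.choice([-1, 1])
    if 0 <= y < L:
        # Metropolis step for the biased target pi(x) * exp(-theta(x)),
        # with pi(x) proportional to exp(-V(x))
        if np.log(rng.random()) < -(V[y] - V[x]) - (theta[y] - theta[x]):
            x = y
    theta[x] += gamma   # penalize the currently occupied set

# At the flat-histogram fixed point, theta = -V + const, so the learnt
# weights recover the free-energy profile up to an additive constant.
F_est = -(theta - theta.max())
print(f"estimated barrier: {F_est[25] - F_est[10]:.2f}   true: {V[25] - V[10]:.2f}")
```

The penalization drives the walker out of wells it has already explored, which is the self-healing mechanism that modification (i) strengthens.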
