
Rejection and Importance Sampling based Perfect Simulation for Gibbs hard-sphere models

 Added by Sarat Babu Moka
 Publication date 2017
Language: English





Coupling from the past (CFTP) methods have been used to generate perfect samples from finite Gibbs hard-sphere models, an important class of spatial point processes in which spheres with centers in a bounded region are distributed as a homogeneous Poisson point process (PPP) conditioned on the spheres not overlapping each other. We propose an alternative importance-sampling-based rejection methodology for perfect sampling of these models. We analyze the asymptotic expected running-time complexity of the proposed method as the intensity of the reference PPP increases to infinity while the (expected) sphere radius decreases to zero at varying rates. We further compare the performance of the proposed method, analytically and numerically, with a naive rejection algorithm and with popular dominated CFTP algorithms. Our analysis relies on identifying large-deviations decay rates of the non-overlapping probability of spheres whose centers are distributed as a homogeneous PPP.
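The naive rejection algorithm that the abstract uses as a baseline can be sketched in a few lines: repeatedly draw a homogeneous PPP on a bounded region and accept the configuration only if no two spheres overlap. The sketch below is an illustration of that baseline only, not the authors' importance-sampling method; the unit-square region, the parameter defaults, and the function names (`sample_hard_spheres`, `poisson`) are assumptions for the example.

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson(lam) variate by Knuth's multiplication method."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def sample_hard_spheres(intensity=20.0, radius=0.05, max_tries=100_000, rng=random):
    """Naive rejection sampling for a hard-sphere (hard-disk) model:
    draw a homogeneous PPP of the given intensity on the unit square and
    accept only if every pair of centers is at distance >= 2 * radius."""
    for _ in range(max_tries):
        n = poisson(intensity, rng)
        centers = [(rng.random(), rng.random()) for _ in range(n)]
        if all(math.dist(p, q) >= 2 * radius
               for i, p in enumerate(centers)
               for q in centers[i + 1:]):
            return centers  # an exact draw from the conditioned process
    raise RuntimeError("no configuration accepted within max_tries")
```

Each accepted configuration is an exact sample, but the acceptance probability decays rapidly as intensity grows or the radius shrinks slowly, which is precisely the regime the paper's complexity analysis addresses.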



Related research

This paper makes three contributions to estimating the number of perfect matchings in bipartite graphs. First, we prove that the popular sequential importance sampling algorithm works in polynomial time for dense bipartite graphs. More precisely, our algorithm gives a $(1-\epsilon)$-approximation for the number of perfect matchings of a $\lambda$-dense bipartite graph, using $O(n^{\frac{1-2\lambda}{8\lambda}}\epsilon^{-2})$ samples. With size $n$ on each side and for $\frac{1}{2}>\lambda>0$, a $\lambda$-dense bipartite graph has all degrees greater than $(\lambda+\frac{1}{2})n$. Second, practical applications of the algorithm require many calls to matching algorithms; a novel preprocessing step is provided which yields significant improvements. Third, three applications are provided. The first is counting Latin squares, the second is a practical way of computing the greedy strategy for a card guessing game with feedback, and the third is stochastic block models. In all three examples, sequential importance sampling allows treating practical problems of reasonably large size.
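The basic sequential importance sampling estimator for perfect matchings can be illustrated in miniature: match left vertices one at a time to a uniformly chosen free neighbour, and weight each run by the product of the number of choices at each step. This is a generic textbook sketch of the estimator, not the paper's refined polynomial-time algorithm; the function name `sis_matchings` and the adjacency-list input format are assumptions.

```python
import random

def sis_matchings(adj, samples=10_000, rng=random):
    """Estimate the number of perfect matchings of a bipartite graph.
    adj[u] lists the right-side neighbours of left vertex u.  Each run
    greedily extends a matching with uniform random choices; the product
    of the choice counts is an unbiased weight (zero when stuck)."""
    n = len(adj)
    total = 0.0
    for _ in range(samples):
        used, weight = set(), 1.0
        for u in range(n):
            free = [v for v in adj[u] if v not in used]
            if not free:
                weight = 0.0  # greedy extension got stuck
                break
            weight *= len(free)
            used.add(rng.choice(free))
        total += weight
    return total / samples

# Complete bipartite graph K_{3,3}: the walk never gets stuck, every
# weight is 3 * 2 * 1, so the estimate is exactly 3! = 6.
k33 = [[0, 1, 2]] * 3
```

The estimator is unbiased because each completed matching is sampled with probability equal to the reciprocal of its weight, so the expectation of the weight is exactly the number of perfect matchings.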
We consider Particle Gibbs (PG) as a tool for Bayesian analysis of non-linear non-Gaussian state-space models. PG is a Monte Carlo (MC) approximation of the standard Gibbs procedure which uses sequential MC (SMC) importance sampling inside the Gibbs procedure to update the latent and potentially high-dimensional state trajectories. We propose to combine PG with a generic and easily implementable SMC approach known as Particle Efficient Importance Sampling (PEIS). By using SMC importance sampling densities which are approximately fully globally adapted to the targeted density of the states, PEIS can substantially improve the mixing and the efficiency of the PG draws from the posterior of the states and the parameters relative to existing PG implementations. The efficiency gains achieved by PEIS are illustrated in PG applications to a univariate stochastic volatility model for asset returns, a non-Gaussian nonlinear local-level model for interest rates, and a multivariate stochastic volatility model for the realized covariance matrix of asset returns.
We construct asymptotic arguments for the relative efficiency of rejection-free Monte Carlo (MC) methods compared to the standard MC method. We find that the efficiency is proportional to $\exp(\mathrm{const}\,\beta)$ in the Ising, $\sqrt{\beta}$ in the classical XY, and $\beta$ in the classical Heisenberg spin systems with inverse temperature $\beta$, regardless of the dimension. The efficiency in hard-particle systems is also obtained, and found to be proportional to $(\rho_c - \rho)^{-d}$ with the closest packing density $\rho_c$, density $\rho$, and dimension $d$ of the systems. We construct and implement a rejection-free Monte Carlo method for the hard-disk system. The rejection-free method has greater computational efficiency at high densities, and the density dependence of the efficiency is as predicted by our arguments.
Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network setting (e.g., a two-node tandem network). Exploiting connections between importance sampling, differential games, and classical subsolutions of the corresponding Isaacs equation, we show how to design and analyze simple and efficient dynamic importance sampling schemes for general classes of networks. The models used to illustrate the approach include $d$-node tandem Jackson networks and a two-node network with feedback, and the rare events studied are those of large queueing backlogs, including total population overflow and the overflow of individual buffers.
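The idea of speeding up rare-event simulation with a change of measure can be shown with a minimal mean-shift example: estimating the Gaussian tail probability $P(Z > a)$ by sampling from a proposal centred on the rare region and reweighting by the likelihood ratio. This is a generic single-variable illustration of static importance sampling, not the dynamic, subsolution-based schemes the abstract develops for queueing networks; the function name `is_tail_prob` and its defaults are assumptions.

```python
import math
import random

def is_tail_prob(a=4.0, samples=100_000, rng=random):
    """Estimate P(Z > a) for Z ~ N(0, 1) by importance sampling.
    Draws come from the shifted proposal N(a, 1); each hit is weighted
    by the likelihood ratio phi(z) / phi(z - a) = exp(a**2 / 2 - a * z)."""
    total = 0.0
    for _ in range(samples):
        z = rng.gauss(a, 1.0)  # proposal centred on the rare region
        if z > a:
            total += math.exp(a * a / 2 - a * z)
    return total / samples
```

With $a = 4$ the true value is about $3.17 \times 10^{-5}$; naive Monte Carlo would see essentially no hits at this sample size, while the shifted proposal places roughly half its draws in the rare region and the reweighting keeps the estimator unbiased.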
We consider the distributional fixed-point equation: $$R \stackrel{\mathcal{D}}{=} Q \vee \left( \bigvee_{i=1}^N C_i R_i \right),$$ where the $\{R_i\}$ are i.i.d.~copies of $R$, independent of the vector $(Q, N, \{C_i\})$, where $N \in \mathbb{N}$, $Q, \{C_i\} \geq 0$ and $P(Q > 0) > 0$. By setting $W = \log R$, $X_i = \log C_i$, $Y = \log Q$, it is equivalent to the high-order Lindley equation $$W \stackrel{\mathcal{D}}{=} \max\left\{ Y, \, \max_{1 \leq i \leq N} (X_i + W_i) \right\}.$$ It is known that under Kesten assumptions, $$P(W > t) \sim H e^{-\alpha t}, \qquad t \to \infty,$$ where $\alpha > 0$ solves the Cramér-Lundberg equation $E\left[ \sum_{i=1}^N C_i^\alpha \right] = E\left[ \sum_{i=1}^N e^{\alpha X_i} \right] = 1$. The main goal of this paper is to provide an explicit representation for $P(W > t)$ that can be directly connected to the underlying weighted branching process where $W$ is constructed, and that can be used to construct unbiased and strongly efficient estimators for all $t$. Furthermore, we show how this new representation can be analyzed directly using Alsmeyer's Markov renewal theorem, yielding an alternative representation for the constant $H$. We provide numerical examples illustrating the use of this new algorithm.
