
Sequential importance sampling for estimating expectations over the space of perfect matchings

Published by Yeganeh Alimohammadi
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





This paper makes three contributions to estimating the number of perfect matchings in bipartite graphs. First, we prove that the popular sequential importance sampling algorithm runs in polynomial time for dense bipartite graphs. More precisely, our algorithm gives a $(1-\epsilon)$-approximation for the number of perfect matchings of a $\lambda$-dense bipartite graph, using $O(n^{\frac{1-2\lambda}{8\lambda}+\epsilon^{-2}})$ samples. With size $n$ on each side and for $\frac{1}{2}>\lambda>0$, a $\lambda$-dense bipartite graph has all degrees greater than $(\lambda+\frac{1}{2})n$. Second, practical applications of the algorithm require many calls to matching algorithms; a novel preprocessing step is provided which yields significant speedups. Third, three applications are given: the first is counting Latin squares, the second is a practical way of computing the greedy strategy for a card guessing game with feedback, and the third concerns stochastic block models. In all three examples, sequential importance sampling makes practical problems of reasonably large size tractable.
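As a concrete illustration, the basic sequential importance sampling estimator can be sketched in a few lines: match the left vertices one at a time to a uniformly random still-available neighbor, and weight each sample by the product of the number of choices at each step, which yields an unbiased estimate of the number of perfect matchings. This is a minimal sketch only — the toy graph and sample budget are illustrative, and the paper's refined proposal distribution and preprocessing step are not shown:

```python
import itertools
import random

def exact_count(adj, n):
    # Brute-force permanent: count permutations respecting adjacency.
    return sum(1 for perm in itertools.permutations(range(n))
               if all(perm[i] in adj[i] for i in range(n)))

def sis_estimate(adj, n, samples, rng):
    # SIS: match left vertex i to a uniform unmatched neighbor; the
    # unbiased weight is the product of the number of available
    # choices at each step (0 if the sampler hits a dead end).
    total = 0.0
    for _ in range(samples):
        matched, weight = set(), 1.0
        for i in range(n):
            choices = [j for j in adj[i] if j not in matched]
            if not choices:
                weight = 0.0
                break
            weight *= len(choices)
            matched.add(rng.choice(choices))
        total += weight
    return total / samples

# Toy dense instance: K_{4,4} minus the edges (0,0) and (1,1).
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 2, 3}, 3: {0, 1, 2, 3}}
print(exact_count(adj, 4), sis_estimate(adj, 4, 20000, random.Random(0)))
```

For dense graphs like this one the per-step choice counts stay large, so the weights concentrate and few samples suffice — the phenomenon the paper's analysis makes precise.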


Read also

Coupling from the past (CFTP) methods have been used to generate perfect samples from finite Gibbs hard-sphere models, an important class of spatial point processes: spheres whose centers lie in a bounded region and are distributed as a homogeneous Poisson point process (PPP), conditioned on the spheres not overlapping. We propose an alternative importance-sampling-based rejection methodology for perfect sampling of these models. We analyze the asymptotic expected running-time complexity of the proposed method as the intensity of the reference PPP increases to infinity while the (expected) sphere radius decreases to zero at varying rates. We further compare the performance of the proposed method, analytically and numerically, with a naive rejection algorithm and with popular dominated CFTP algorithms. Our analysis relies on identifying large-deviations decay rates of the non-overlapping probability of spheres whose centers are distributed as a homogeneous PPP.
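The naive rejection baseline the comparison refers to is easy to state: draw a homogeneous PPP of centers on the region and accept the draw only if no two spheres overlap. A minimal sketch on the unit square — the intensity, radius, and Knuth-style Poisson sampler are illustrative choices, and the paper's importance-sampling acceleration is not shown:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplicative method (fine for small intensities).
    k, p, thresh = 0, 1.0, math.exp(-lam)
    while p > thresh:
        k += 1
        p *= rng.random()
    return k - 1

def hard_sphere_rejection(lam, r, rng, max_tries=100000):
    # Naive perfect sampler: propose a homogeneous PPP of sphere
    # centers on [0,1]^2 and accept iff no pair is closer than 2r.
    for _ in range(max_tries):
        pts = [(rng.random(), rng.random())
               for _ in range(poisson(lam, rng))]
        if all(math.dist(p, q) >= 2 * r
               for i, p in enumerate(pts) for q in pts[i + 1:]):
            return pts
    raise RuntimeError("acceptance probability too small")

centers = hard_sphere_rejection(5.0, 0.05, random.Random(1))
```

The acceptance probability decays rapidly as intensity grows, which is exactly the regime where the large-deviations analysis above becomes relevant.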
Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network setting (e.g., a two-node tandem network). Exploiting connections between importance sampling, differential games, and classical subsolutions of the corresponding Isaacs equation, we show how to design and analyze simple and efficient dynamic importance sampling schemes for general classes of networks. The models used to illustrate the approach include $d$-node tandem Jackson networks and a two-node network with feedback, and the rare events studied are those of large queueing backlogs, including total population overflow and the overflow of individual buffers.
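The classical static change of measure that these dynamic schemes generalize is easiest to see on a single M/M/1 queue: swap the arrival and service rates, simulate the embedded random walk under the tilted measure, and reweight each path by its likelihood ratio. A minimal sketch, with the rates, overflow level, and sample budget chosen purely for illustration:

```python
import random

def overflow_prob_is(lam, mu, b, samples, rng):
    # Estimate P(queue length hits b before 0, starting from 1) by
    # simulating the embedded walk under swapped rates (lam <-> mu)
    # and reweighting by the per-step likelihood ratio.
    p = lam / (lam + mu)   # original up-step probability (arrival)
    q = 1.0 - p            # original down-step probability (service)
    est = 0.0
    for _ in range(samples):
        x, lr = 1, 1.0
        while 0 < x < b:
            if rng.random() < q:            # tilted walk: up w.p. q
                x, lr = x + 1, lr * (p / q)
            else:
                x, lr = x - 1, lr * (q / p)
        if x == b:                          # rare event reached
            est += lr
    return est / samples

# Exact value from gambler's ruin: (1 - r) / (1 - r**b), r = mu/lam.
r = 0.7 / 0.3
exact = (1 - r) / (1 - r ** 10)
approx = overflow_prob_is(0.3, 0.7, 10, 5000, random.Random(0))
```

For a single queue this static tilt is efficient; the point of the abstract above is that the analogous fixed tilt breaks down already for two-node networks, motivating the dynamic, subsolution-based schemes.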
We consider the distributional fixed-point equation: $$R \stackrel{\mathcal{D}}{=} Q \vee \left( \bigvee_{i=1}^N C_i R_i \right),$$ where the $\{R_i\}$ are i.i.d. copies of $R$, independent of the vector $(Q, N, \{C_i\})$, where $N \in \mathbb{N}$, $Q, \{C_i\} \geq 0$ and $P(Q > 0) > 0$. By setting $W = \log R$, $X_i = \log C_i$, $Y = \log Q$ it is equivalent to the high-order Lindley equation $$W \stackrel{\mathcal{D}}{=} \max\left\{ Y, \, \max_{1 \leq i \leq N} (X_i + W_i) \right\}.$$ It is known that under Kesten's assumptions, $$P(W > t) \sim H e^{-\alpha t}, \qquad t \to \infty,$$ where $\alpha>0$ solves the Cramér-Lundberg equation $E\left[ \sum_{i=1}^N C_i^\alpha \right] = E\left[ \sum_{i=1}^N e^{\alpha X_i} \right] = 1$. The main goal of this paper is to provide an explicit representation for $P(W > t)$ which can be directly connected to the underlying weighted branching process where $W$ is constructed, and which can be used to construct unbiased and strongly efficient estimators for all $t$. Furthermore, we show how this new representation can be directly analyzed using Alsmeyer's Markov renewal theorem, yielding an alternative representation for the constant $H$. We provide numerical examples illustrating the use of this new algorithm.
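The high-order Lindley recursion itself can be simulated naively by truncating the weighted branching tree at a fixed depth — the crude baseline whose variance degenerates for large $t$, which is what the strongly efficient estimators above are designed to avoid. A toy sketch with assumed distributions ($N=2$ children, $Y \sim \mathrm{Exp}(1)$, $X_i$ uniform on $[-1,-0.5]$, all illustrative):

```python
import random

def sample_W(depth, rng):
    # Truncated forward simulation of W = max{ Y, max_i (X_i + W_i) }
    # with toy choices: N = 2 children, Y ~ Exp(1), X_i ~ U[-1, -0.5].
    w = rng.expovariate(1.0)                  # Y
    if depth == 0:
        return w
    for _ in range(2):                        # N = 2
        x = -1.0 + 0.5 * rng.random()         # negative-drift increment
        w = max(w, x + sample_W(depth - 1, rng))
    return w

def tail_prob(t, depth, samples, rng):
    # Crude Monte Carlo estimate of P(W > t); the hit rate (and so the
    # relative accuracy) collapses as t grows, motivating the paper's
    # unbiased, strongly efficient estimators.
    return sum(sample_W(depth, rng) > t for _ in range(samples)) / samples
```

Note the exponential cost in the truncation depth ($2^{\text{depth}}$ nodes per sample here), another reason the explicit representation tied to the branching process is valuable.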
Zhongyang Li, 2020
We consider random perfect matchings on a general class of contracting bipartite graphs by letting certain edge weights be 0 on the contracting square-hexagon lattice in a periodic way. We obtain a deterministic limit shape in the scaling limit. The results can also be applied to prove the existence of multiple disconnected liquid regions for all the contracting square-hexagon lattices with certain edge weights, extending the results proved in [13] for contracting square-hexagon lattices where the number of square rows in each period is either 0 or 1.
In this paper we further investigate the well-studied problem of finding a perfect matching in a regular bipartite graph. The first non-trivial algorithm, with running time $O(mn)$, dates back to Kőnig's work in 1916 (here $m=nd$ is the number of edges in the graph, $2n$ is the number of vertices, and $d$ is the degree of each node). The currently most efficient algorithm takes time $O(m)$, and is due to Cole, Ost, and Schirra. We improve this running time to $O(\min\{m, \frac{n^{2.5}\ln n}{d}\})$; this minimum can never be larger than $O(n^{1.75}\sqrt{\ln n})$. We obtain this improvement by proving a uniform sampling theorem: if we sample each edge in a $d$-regular bipartite graph independently with probability $p = O(\frac{n\ln n}{d^2})$ then the resulting graph has a perfect matching with high probability. The proof involves a decomposition of the graph into pieces which are guaranteed to have many perfect matchings but do not have any small cuts. We then establish a correspondence between potential witnesses to non-existence of a matching (after sampling) in any piece and cuts of comparable size in that same piece. Karger's sampling theorem for preserving cuts in a graph can now be adapted to prove our uniform sampling theorem for preserving perfect matchings. Using the $O(m\sqrt{n})$ algorithm (due to Hopcroft and Karp) for finding maximum matchings in bipartite graphs on the sampled graph then yields the stated running time. We also provide an infinite family of instances to show that our uniform sampling result is tight up to poly-logarithmic factors (in fact, up to $\ln^2 n$).
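The sample-then-match pipeline can be sketched directly: keep each edge of a $d$-regular bipartite graph independently with probability $p$, then run a maximum-matching routine on what survives. In this sketch a simple augmenting-path matcher stands in for the Hopcroft-Karp algorithm, and the circulant graph family and parameters are illustrative:

```python
import random

def max_matching(adj, n):
    # Augmenting-path maximum bipartite matching (a simple stand-in
    # for the O(m sqrt(n)) Hopcroft-Karp routine).
    match_r = [-1] * n

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(n))

def sampled_circulant(n, d, p, rng):
    # d-regular bipartite circulant (u ~ u, u+1, ..., u+d-1 mod n),
    # with each edge kept independently with probability p.
    return [[(u + k) % n for k in range(d) if rng.random() < p]
            for u in range(n)]

full = sampled_circulant(12, 4, 1.0, random.Random(0))  # keep all edges
```

The theorem above says that for $p$ as small as $O(\frac{n\ln n}{d^2})$ the sampled graph still has a perfect matching with high probability, so the matcher only ever touches the (much sparser) sampled edge set.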