We present a randomized approximation scheme for the permanent of a matrix with nonnegative entries. Our scheme extends a recursive rejection sampling method of Huber and Law (SODA 2008) by replacing the upper bound for the permanent with a linear combination of the subproblem bounds at a moderately large depth of the recursion tree. This method, which we call deep rejection sampling, is empirically shown to outperform the basic, depth-zero variant, as well as a related method by Kuck et al. (NeurIPS 2019). We analyze the expected running time of the scheme on random $(0, 1)$-matrices where each entry is independently $1$ with probability $p$. Our bound is superior to a previous one for $p$ less than $1/5$, matching another bound that was known to hold when every row and column has density exactly $p$.
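To make the rejection idea concrete, the following Python sketch implements the basic, depth-zero self-reducible sampler, but with the elementary product-of-row-sums upper bound standing in for the tighter Huber-Law bound (and its deep, linear-combination variant) analyzed above; the function names and the choice of bound are ours, purely for illustration.

```python
import numpy as np

def _bound(A, row, cols):
    """Product-of-row-sums upper bound for the subproblem on rows row..n-1
    restricted to the columns in `cols` (valid for nonnegative A)."""
    prod = 1.0
    for i in range(row, A.shape[0]):
        prod *= A[i, cols].sum()
    return prod

def estimate_permanent(A, trials=20000, seed=0):
    """Rejection-sampling estimate of per(A) for a nonnegative matrix A.

    Sketch of the basic (depth-zero) self-reducible sampler, with the
    elementary bound U(A) = prod_i (i-th row sum) standing in for the
    tighter Huber-Law bound.  One trial accepts with probability exactly
    per(A) / U(A), so U(A) * (#accepts / #trials) is unbiased.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    U_full = _bound(A, 0, np.arange(n))
    if U_full == 0.0:
        return 0.0
    accepts = 0
    for _ in range(trials):
        cols = np.arange(n)
        accepted = True
        for i in range(n):
            U_cur = _bound(A, i, cols)
            if U_cur == 0.0:
                accepted = False
                break
            # Assign column j to row i with probability
            #   A[i, j] * U(rows i+1.., cols \ {j}) / U_cur;
            # the leftover probability mass is the rejection event.
            weights = np.array([A[i, j] * _bound(A, i + 1, cols[cols != j])
                                for j in cols])
            cum = np.cumsum(weights / U_cur)
            k = np.searchsorted(cum, rng.random(), side='right')
            if k == len(cols):          # fell into the rejection mass
                accepted = False
                break
            cols = np.delete(cols, k)   # commit this column and recurse
        if accepted:
            accepts += 1
    return U_full * accepts / trials
```

Since each trial accepts with probability exactly $\mathrm{per}(A)/U(A)$, the expected number of trials per accepted sample is $U(A)/\mathrm{per}(A)$; this is precisely why replacing $U$ with a tighter bound, as the deep variant does at a moderate recursion depth, reduces the running time.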
We give an algorithm for computing the permanent of a random matrix with vanishing mean in quasi-polynomial time. Among the special cases are Gaussian and biased-Bernoulli random matrices with mean $1/\ln\ln(n)^{1/8}$. In addition, we can compute the permanent …
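For small $n$, the permanent of such test matrices can be computed exactly with Ryser's inclusion-exclusion formula, which is handy as a ground truth when experimenting with any of the randomized schemes above; the sketch below is our own illustrative code ($O(2^n n^2)$ time), not the quasi-polynomial algorithm of this abstract.

```python
import numpy as np
from itertools import combinations

def ryser_permanent(A):
    """Exact permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2).
    Only practical for small n, but useful as a ground truth when testing
    randomized approximation schemes for the permanent."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            row_sums = A[:, list(S)].sum(axis=1)   # sum_{j in S} a_ij, per row
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

# A quick sanity check on a small i.i.d. Gaussian (mean-zero) test matrix:
A = np.random.default_rng(0).standard_normal((8, 8))
print(ryser_permanent(A))
```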
Learning latent variable models with stochastic variational inference is challenging when the approximate posterior is far from the true posterior, due to high variance in the gradient estimates. We propose a novel rejection sampling step that discards …
Monte Carlo (MC) methods have become very popular in signal processing over the past decades. Adaptive rejection sampling (ARS) algorithms are a well-known MC technique that draws independent samples efficiently from univariate target densities.
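As a reminder of the mechanism ARS builds on, here is a minimal fixed-envelope rejection sampler in Python, drawing standard normal variates through a Cauchy proposal. This is not ARS itself, which constructs a piecewise-exponential envelope from tangents to the log-density of a log-concave target and tightens it adaptively after each rejection; all names below are illustrative.

```python
import numpy as np

def rejection_sample_normal(n, seed=None):
    """Draw n independent N(0, 1) samples by plain (non-adaptive) rejection
    sampling with a standard Cauchy proposal.  M = sqrt(2*pi/e) is the tight
    bound on f(x)/g(x) for this pair, so each proposal is accepted with
    probability f(x) / (M * g(x)) and the overall acceptance rate is 1/M."""
    rng = np.random.default_rng(seed)
    M = np.sqrt(2.0 * np.pi / np.e)
    samples = []
    while len(samples) < n:
        x = rng.standard_cauchy()
        f = np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)   # target: N(0,1) density
        g = 1.0 / (np.pi * (1.0 + x * x))                 # proposal: Cauchy density
        if rng.random() < f / (M * g):
            samples.append(x)
    return np.array(samples)
```

With the tight constant $M=\sqrt{2\pi/e}\approx 1.52$, roughly two thirds of the proposals are accepted; ARS improves on this by letting the envelope approach the target as rejections accumulate, so the acceptance rate tends to one.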
We first show that a better analysis of the algorithm for the Two-Stage Stochastic Facility Location Problem from Srinivasan \cite{sri07} and the algorithm for the Robust Fault Tolerant Facility Location Problem from Byrka et al. \cite{bgs10} can render …
We study approximation algorithms for variants of the \emph{median string} problem, which asks for a string minimizing the sum of edit distances to a given set of $m$ strings, each of length $n$. Only the straightforward $2$-approximation is known for …
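The straightforward $2$-approximation referred to here is usually taken to be the rule that returns the input string with the smallest total edit distance to the others; the Python sketch below follows that reading (our own illustrative code, $O(m^2 n^2)$ time).

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance by standard dynamic programming, O(|a| * |b|)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i] + [0] * len(b)
        for j, cb in enumerate(b, 1):
            cur[j] = min(prev[j] + 1,                 # delete ca
                         cur[j - 1] + 1,              # insert cb
                         prev[j - 1] + (ca != cb))    # substitute (free if equal)
        prev = cur
    return prev[-1]

def median_string_2approx(strings):
    """Return the input string with the smallest total edit distance to all
    inputs; by the triangle inequality this is a 2-approximate median."""
    return min(strings, key=lambda s: sum(edit_distance(s, t) for t in strings))

# Example: among these three inputs, the cheapest center is returned.
print(median_string_2approx(["kitten", "sitting", "mitten"]))
```

The factor $2$ follows from the triangle inequality: if $s^\ast$ is an optimal median and $x$ is the input string closest to it, then $\sum_y d(x,y) \le \sum_y \bigl(d(x,s^\ast)+d(s^\ast,y)\bigr) \le 2\,\mathrm{OPT}$.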