
Approximating the Permanent with Deep Rejection Sampling

Posted by: Juha Harviainen
Publication date: 2021
Research field: Informatics
Paper language: English





We present a randomized approximation scheme for the permanent of a matrix with nonnegative entries. Our scheme extends a recursive rejection sampling method of Huber and Law (SODA 2008) by replacing the upper bound for the permanent with a linear combination of the subproblem bounds at a moderately large depth of the recursion tree. This method, which we call deep rejection sampling, is empirically shown to outperform the basic, depth-zero variant, as well as a related method by Kuck et al. (NeurIPS 2019). We analyze the expected running time of the scheme on random $(0, 1)$-matrices where each entry is independently $1$ with probability $p$. Our bound is superior to a previous one for $p$ less than $1/5$, matching another bound that was known to hold when every row and column has density exactly $p$.
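To make the idea concrete, here is a minimal Python sketch of the basic, depth-zero rejection sampling scheme in the spirit of Huber and Law. It uses the product-of-row-sums bound as a stand-in for the sharper permanent bounds discussed in the paper; the function names and the choice of bound are illustrative, not the authors' implementation. A trial survives all rows with probability perm(A)/U(A), so U(A) times the acceptance rate estimates the permanent; the deep variant described above would instead drive the sampler with a linear combination of subproblem bounds computed at a chosen recursion depth.

```python
import numpy as np
from itertools import permutations


def row_sum_bound(A):
    # Product of row sums: a simple "nested" upper bound on perm(A) for A >= 0.
    # (Stand-in for the sharper Huber-Law bound the paper builds on.)
    return float(np.prod(A.sum(axis=1))) if A.size else 1.0


def estimate_permanent(A, trials=10000, seed=0):
    """Naive depth-zero rejection sampling estimate of perm(A), A >= 0.

    Each trial picks columns row by row: column j is chosen with probability
    A[0, j] * U(A minus row 0 and column j) / U(A); the leftover mass rejects
    the trial.  The chance of surviving all rows is perm(A) / U(A), so
        perm(A) ~ U(A) * (accepted / trials).
    """
    rng = np.random.default_rng(seed)
    A = np.asarray(A, dtype=float)
    U_top = row_sum_bound(A)
    if U_top == 0.0:
        return 0.0
    accepted = 0
    for _ in range(trials):
        sub, u, alive = A, U_top, True
        while alive and sub.shape[0] > 0:
            # weight of picking column j for the current top row
            weights = np.array([sub[0, j] * row_sum_bound(np.delete(sub[1:], j, axis=1))
                                for j in range(sub.shape[1])])
            r = rng.uniform(0.0, u)          # leftover mass u - weights.sum() rejects
            j = int(np.searchsorted(np.cumsum(weights), r))
            if j >= sub.shape[1]:
                alive = False                # rejected: start a new trial
            else:
                sub = np.delete(sub[1:], j, axis=1)
                u = row_sum_bound(sub)
        accepted += alive
    return U_top * accepted / trials


# Quick sanity check on a small random 0/1 matrix against brute-force enumeration.
rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(7, 7)).astype(float)
exact = sum(np.prod([A[i, p[i]] for i in range(7)]) for p in permutations(range(7)))
print(exact, estimate_permanent(A))
```

The estimator is unbiased for any nested upper bound; what the choice of bound (and of recursion depth, in the deep variant) controls is the acceptance rate, and hence the number of trials needed for a given relative accuracy.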




Read also

Lior Eldar, Saeed Mehraban (2017)
We show an algorithm for computing the permanent of a random matrix with vanishing mean in quasi-polynomial time. Special cases include Gaussian and biased-Bernoulli random matrices with mean $1/\ln\ln(n)^{1/8}$. In addition, we can compute the permanent of a random matrix with mean $1/\mathrm{poly}(\ln n)$ in time $2^{O(n^{\epsilon})}$ for any small constant $\epsilon>0$. Our algorithm counters the intuition that the permanent is hard because of the sign problem, namely the interference between entries of a matrix with different signs. A major open question remains whether one can provide an efficient algorithm for random matrices of mean $1/\mathrm{poly}(n)$, whose conjectured #P-hardness is one of the baseline assumptions of the BosonSampling paradigm.
Learning latent variable models with stochastic variational inference is challenging when the approximate posterior is far from the true posterior, due to high variance in the gradient estimates. We propose a novel rejection sampling step that discards samples from the variational posterior which are assigned low likelihoods by the model. Our approach provides an arbitrarily accurate approximation of the true posterior at the expense of extra computation. Using a new gradient estimator for the resulting unnormalized proposal distribution, we achieve average improvements of 3.71 nats and 0.21 nats over state-of-the-art single-sample and multi-sample alternatives, respectively, for estimating marginal log-likelihoods using sigmoid belief networks on the MNIST dataset.
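As a rough illustration of the rejection step described above, one can resample from the variational posterior and discard draws whose model joint likelihood is low relative to the proposal. The acceptance rule below is one generic way to realize that idea, not necessarily the exact rule of the paper, and `log_M` is a hypothetical threshold knob: larger values discard more aggressively, pushing the accepted samples closer to the true posterior at extra computational cost.

```python
import numpy as np


def resampled_posterior_draw(sample_q, log_q, log_joint, log_M, rng=None, max_tries=1000):
    """Sketch: draw z from q(z|x), discard draws the model assigns low likelihood.

    sample_q / log_q define the variational posterior q(z|x); log_joint is
    log p(x, z) under the model; log_M is a hypothetical threshold parameter.
    """
    rng = np.random.default_rng(rng)
    z = None
    for _ in range(max_tries):
        z = sample_q(rng)
        # accept with probability min(1, p(x, z) / (q(z|x) * exp(log_M)))
        log_accept = min(0.0, log_joint(z) - log_q(z) - log_M)
        if np.log(rng.uniform()) < log_accept:
            return z
    return z  # fall back to the last draw if nothing was accepted


# Toy 1-D example: q = N(0, 1) proposal, unnormalized "model" centered at 1.
sample_q = lambda rng: rng.normal(0.0, 1.0)
log_q = lambda z: -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)
log_joint = lambda z: -0.5 * ((z - 1.0) / 0.5) ** 2
z = resampled_posterior_draw(sample_q, log_q, log_joint, log_M=0.0, rng=0)
```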
Luca Martino (2017)
Monte Carlo (MC) methods have become very popular in signal processing during the past decades. Adaptive rejection sampling (ARS) algorithms are a well-known MC technique for efficiently drawing independent samples from univariate target densities. ARS schemes yield a sequence of proposal functions that converge toward the target, so that the probability of accepting a sample approaches one. However, sampling from the proposal pdf becomes more computationally demanding each time it is updated. We propose the Parsimonious Adaptive Rejection Sampling (PARS) method, which achieves an efficient trade-off between acceptance rate and proposal complexity. The resulting algorithm is therefore faster than the standard ARS approach.
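The acceptance-rate versus proposal-complexity trade-off can be illustrated with a toy adaptive-envelope sampler. The sketch below handles only a decreasing density on a bounded interval with a piecewise-constant envelope, whereas ARS and PARS work with general (e.g., log-concave) targets and piecewise-exponential proposals, so treat it purely as an illustration of the trade-off: every refinement raises the acceptance rate, but also makes each proposal draw more expensive, and the `max_pieces` cap plays the role of a parsimony constraint.

```python
import numpy as np


def adaptive_envelope_sampler(f, B, n_samples, max_pieces=32, seed=0):
    """Toy adaptive-envelope rejection sampler for a *decreasing* density f on [0, B].

    The envelope is piecewise constant, with each piece's height taken at its
    left endpoint (an upper bound because f is decreasing).  Each rejection
    splits the offending piece, so acceptance improves, but sampling from the
    envelope grows costlier as pieces accumulate.
    """
    rng = np.random.default_rng(seed)
    edges = np.array([0.0, B])                    # breakpoints of the envelope
    samples = []
    while len(samples) < n_samples:
        heights = f(edges[:-1])                   # envelope height per piece
        masses = heights * np.diff(edges)         # unnormalized piece probabilities
        k = rng.choice(len(masses), p=masses / masses.sum())
        x = rng.uniform(edges[k], edges[k + 1])   # uniform within the chosen piece
        if rng.uniform(0.0, heights[k]) <= f(x):
            samples.append(x)                     # accepted: point lies under the target
        elif len(edges) - 1 < max_pieces:
            edges = np.sort(np.append(edges, x))  # refine the envelope where it was loose
    return np.array(samples)


# Example: sample from Exp(1) truncated to [0, 5].
xs = adaptive_envelope_sampler(lambda x: np.exp(-x), 5.0, 1000)
```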
We first show that a better analysis of the algorithm for The Two-Stage Stochastic Facility Location Problem of Srinivasan [sri07] and of the algorithm for The Robust Fault Tolerant Facility Location Problem of Byrka et al. [bgs10] yields improved approximation factors of $2.206$ and $\alpha+4$, respectively, where $\alpha$ is the maximum number of facilities an adversary can close; these are the best ratios so far. We then present new models for the soft-capacitated facility location problem with uncertainty and design constant-factor approximation algorithms to solve them, devising stochastic and robust approaches to handle the uncertainty incorporated into the original model. Explicitly, we propose two new problems, The 2-Stage Soft-Capacitated Facility Location Problem and The Robust Soft-Capacitated Facility Location Problem, and present constant-factor approximation algorithms for both. Our method uses reductions between facility location problems and linear-cost models, the randomized thresholding technique of Srinivasan [sri07], and the filtering and clustering technique of Byrka et al. [bgs10].
We study approximation algorithms for variants of the median string problem, which asks for a string that minimizes the sum of edit distances from a given set of $m$ strings of length $n$. Only the straightforward $2$-approximation is known for this NP-hard problem. The problem is motivated, e.g., by computational biology and belongs to the class of median problems (over different metric spaces), which are fundamental tasks in data analysis. Our main result is for the Ulam metric, where all strings are permutations over $[n]$ and each edit operation moves a symbol (a deletion plus an insertion). We devise for this problem an algorithm that breaks the $2$-approximation barrier, i.e., computes a $(2-\delta)$-approximate median permutation for some constant $\delta>0$ in time $\tilde{O}(nm^2+n^3)$. We further use these techniques to achieve a $(2-\delta)$-approximation for the median string problem in the special case where the median is restricted to length $n$ and the optimal objective is large, $\Omega(mn)$. We also design an approximation algorithm for the following probabilistic model of the Ulam median: the input consists of $m$ perturbations of an (unknown) permutation $x$, each generated by moving every symbol to a random position with probability (a parameter) $\epsilon>0$. Our algorithm computes with high probability a $(1+o(1/\epsilon))$-approximate median permutation in time $O(mn^2+n^3)$.