
DNF Sparsification and a Faster Deterministic Counting Algorithm

Published by: Parikshit Gopalan
Publication date: 2012
Research field: Computer science
Paper language: English





Given a DNF formula on $n$ variables, the two natural size measures are the number of terms, or size $s(f)$, and the maximum width of a term, $w(f)$. It is folklore that short DNF formulas can be made narrow. We prove a converse, showing that narrow formulas can be sparsified. More precisely, any width-$w$ DNF, irrespective of its size, can be $\epsilon$-approximated by a width-$w$ DNF with at most $(w \log(1/\epsilon))^{O(w)}$ terms. We combine our sparsification result with the work of Luby and Velickovic to give a faster deterministic algorithm for approximately counting the number of satisfying solutions to a DNF. Given a formula on $n$ variables with $\mathrm{poly}(n)$ terms, we give a deterministic $n^{\tilde{O}(\log \log n)}$-time algorithm that computes an additive $\epsilon$-approximation to the fraction of satisfying assignments of $f$ for $\epsilon = 1/\mathrm{poly}(\log n)$. The previous best result, due to Luby and Velickovic from nearly two decades ago, had a running time of $n^{\exp(O(\sqrt{\log \log n}))}$.
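To make the folklore direction above concrete, here is a minimal Python sketch (the term encoding, function names, and example formula are ours, not the paper's): dropping every term wider than $\log_2(s/\epsilon)$ changes the accepted fraction by at most $\epsilon$, which is the "short implies narrow" fact whose converse the paper proves.

```python
import itertools
import math

# A DNF term is a frozenset of signed literals: +i stands for x_i and
# -i for NOT x_i; variables are 1-indexed so the sign encoding works.

def narrow(terms, eps):
    """Folklore 'short implies narrow': drop every term wider than
    log2(s/eps). A dropped term is satisfied by at most a 2^(-w)
    fraction of assignments, so by a union bound the accepted
    fraction changes by at most s * 2^(-log2(s/eps)) = eps."""
    s = len(terms)
    w = math.ceil(math.log2(s / eps))
    return [t for t in terms if len(t) <= w]

def sat_fraction(terms, n):
    """Exact fraction of satisfying assignments by brute force
    (exponential in n; for illustration only)."""
    count = 0
    for bits in itertools.product([False, True], repeat=n):
        if any(all(bits[abs(l) - 1] == (l > 0) for l in t) for t in terms):
            count += 1
    return count / 2 ** n

# Example: f = (x1 AND x2) OR (NOT x3) on n = 3 variables.
f = [frozenset({1, 2}), frozenset({-3})]
print(sat_fraction(f, 3))                # 0.625 (= 5/8)
print(sat_fraction(narrow(f, 0.25), 3))  # within 0.25 of the above
```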




Read also

A pseudo-deterministic algorithm is a (randomized) algorithm which, when run multiple times on the same input, with high probability outputs the same result on all executions. Classic streaming algorithms, such as those for finding heavy hitters, approximate counting, $\ell_2$ approximation, and finding a nonzero entry in a vector (for turnstile algorithms), are not pseudo-deterministic. For example, in the instance of finding a nonzero entry in a vector, for any known low-space algorithm $A$ there exists a stream $x$ such that running $A$ twice on $x$ (using different randomness) would with high probability result in two different entries as the output. In this work, we study whether it is inherent that these algorithms output different values on different executions. That is, we ask whether these problems have low-memory pseudo-deterministic algorithms. For instance, we show that there is no low-memory pseudo-deterministic algorithm for finding a nonzero entry in a vector (given in a turnstile fashion), and also that there is no low-dimensional pseudo-deterministic sketching algorithm for $\ell_2$ norm estimation. We also exhibit problems which do have low-memory pseudo-deterministic algorithms but no low-memory deterministic algorithm, such as outputting a nonzero row of a matrix, or outputting a basis for the row-span of a matrix. We also investigate multi-pseudo-deterministic algorithms: algorithms which with high probability output one of a few options. We show the first lower bounds for such algorithms. This implies that there are streaming problems such that every low-space algorithm for the problem must have inputs where there are many valid outputs, each with a significant probability of being output.
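For contrast with pseudo-determinism, the following toy Python sketch (parameters and names are ours) runs a classic AMS-style $\ell_2$ sketch twice with different random seeds on the same turnstile stream: both estimates are close to the true norm, yet they typically disagree, which is exactly the behavior the abstract points to.

```python
import random

def l2_sketch_estimate(stream, n, k, seed):
    """Toy AMS-style estimator of ||x||_2^2: k independent random
    +/-1 sign vectors, k linear counters, mean of squared counters.
    Each counter c_j = sum_i sign_j[i] * x_i, so E[c_j^2] = ||x||_2^2."""
    rng = random.Random(seed)
    signs = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(k)]
    counters = [0.0] * k
    for i, delta in stream:            # turnstile update: x_i += delta
        for j in range(k):
            counters[j] += signs[j][i] * delta
    return sum(c * c for c in counters) / k

updates = [(0, 3.0), (2, -4.0), (0, 1.0)]   # x = (4, 0, -4), ||x||_2^2 = 32
print(l2_sketch_estimate(updates, n=3, k=64, seed=1))
print(l2_sketch_estimate(updates, n=3, k=64, seed=2))  # typically differs
```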
In this paper we show a deterministic parallel all-pairs shortest paths (APSP) algorithm for real-weighted directed graphs. The algorithm has $\tilde{O}(nm + (n/d)^3)$ work and $\tilde{O}(d)$ depth for any depth parameter $d \in [1, n]$. To the best of our knowledge, such a trade-off has only been previously described for the real-weighted single-source shortest paths problem using randomization [Bringmann et al., ICALP'17]. Moreover, our result improves upon the parallelism of the state-of-the-art randomized parallel algorithm for computing transitive closure, which has $\tilde{O}(nm + n^3/d^2)$ work and $\tilde{O}(d)$ depth [Ullman and Yannakakis, SIAM J. Comput. '91]. Our APSP algorithm turns out to be a powerful tool for designing efficient planar graph algorithms in both the parallel and sequential regimes. One notable ingredient of our parallel APSP algorithm is a simple deterministic $\tilde{O}(nm)$-work, $\tilde{O}(d)$-depth procedure for computing $\tilde{O}(n/d)$-size hitting sets of shortest $d$-hop paths between all pairs of vertices of a real-weighted digraph. Such hitting sets have also been called $d$-hub sets. Hub sets have previously proved especially useful in designing parallel or dynamic shortest paths algorithms and are typically obtained via random sampling. Our procedure implies, for example, an $\tilde{O}(nm)$-time deterministic algorithm for finding a shortest negative cycle of a real-weighted digraph. Such a near-optimal bound for this problem has so far only been achieved using a randomized algorithm [Orlin et al., Discret. Appl. Math. '18].
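The abstract notes that hub sets are typically obtained via random sampling; the paper's point is to get them deterministically. For reference, here is the standard randomized construction in a short Python sketch (constants and names are ours):

```python
import math
import random

def random_hub_set(n, d, c=3, seed=0):
    """Classic randomized d-hub set: keep each vertex independently
    with probability min(1, c*ln(n)/d). A fixed shortest d-hop path
    visits d vertices, so it misses the sample with probability at
    most (1 - c*ln(n)/d)^d <= n^(-c); a union bound over the <= n^2
    vertex pairs leaves failure probability at most n^(2-c)."""
    rng = random.Random(seed)
    p = min(1.0, c * math.log(n) / d)
    return {v for v in range(n) if rng.random() < p}

hubs = random_hub_set(n=1000, d=50)
print(len(hubs))  # around 1000 * 3*ln(1000)/50, i.e. roughly 415 vertices
```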
We provide a graphical treatment of SAT and #SAT on equal footing. Instances of #SAT can be represented as tensor networks in a standard way. These tensor networks are interpreted by diagrams of the ZH-calculus: a system to reason about tensors over $\mathbb{C}$ in terms of diagrams built from simple generators, in which computation may be carried out by transformations of diagrams alone. In general, nodes of ZH diagrams take parameters over $\mathbb{C}$ which determine the tensor coefficients; for the standard representation of #SAT instances, the coefficients take the value 0 or 1. Then, by choosing the coefficients of a diagram to range over $\mathbb{B}$, we represent the corresponding instance of SAT. Thus, by interpreting a diagram either over the boolean semiring or the complex numbers, we instantiate either the decision or the counting version of the problem. We find that for classes known to be in P, such as 2SAT and #XORSAT, the existence of appropriate rewrite rules allows for efficient simplification of the diagram, producing the solution in polynomial time. In contrast, for classes known to be NP-complete, such as 3SAT, or #P-complete, such as #2SAT, the corresponding rewrite rules introduce hyperedges to the diagrams, in numbers which are not easily bounded above by a polynomial. This diagrammatic approach unifies the diagnosis of the complexity of CSPs and #CSPs and shows promise in aiding tensor network contraction-based algorithms.
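The "standard way" of casting #SAT as a tensor network mentioned above can be shown in a few lines of Python with NumPy (a toy encoding of ours, contracting with einsum rather than the paper's ZH-calculus rewrites): each clause becomes a 0/1 tensor over its variables, and summing the product over all variable indices counts satisfying assignments; reading the same entries over the boolean semiring would instead decide SAT.

```python
import numpy as np

def clause_tensor(lits):
    """0/1 tensor of an OR-clause over len(lits) variables;
    lits[j] = True iff the j-th variable occurs positively.
    Only the single falsifying assignment gets coefficient 0."""
    t = np.ones((2,) * len(lits))
    t[tuple(0 if pos else 1 for pos in lits)] = 0
    return t

# (x0 OR x1) AND (NOT x1 OR x2); indices a, b, c stand for x0, x1, x2.
c1 = clause_tensor([True, True])      # over (x0, x1)
c2 = clause_tensor([False, True])     # over (x1, x2)
count = np.einsum('ab,bc->', c1, c2)  # contract shared index b, sum the rest
print(int(count))  # 4 satisfying assignments out of 8
```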
In this work we relate the deterministic complexity of factoring polynomials (over finite fields) to certain combinatorial objects we call $m$-schemes. We extend the known conditional deterministic subexponential-time polynomial factoring algorithm for finite fields to obtain an underlying $m$-scheme. We demonstrate how the properties of $m$-schemes relate to improvements in the deterministic complexity of factoring polynomials over finite fields assuming the generalized Riemann Hypothesis (GRH). In particular, we give the first deterministic polynomial-time algorithm (assuming GRH) to find a nontrivial factor of a polynomial of prime degree $n$ where $n-1$ is a smooth number.
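As a small aside on that last condition, here is a hypothetical helper of ours (naive trial division, not from the paper) that checks whether $n-1$ is smooth for a candidate prime degree $n$:

```python
def is_smooth(m, bound):
    """True iff every prime factor of m is at most `bound`
    (naive trial division; fine for illustration)."""
    d = 2
    while d * d <= m:
        while m % d == 0:
            m //= d
        d += 1
    return m <= bound

# Degrees covered by the final result: prime n with n-1 smooth,
# e.g. n = 257, since 257 - 1 = 2^8 is 2-smooth.
print(is_smooth(257 - 1, bound=2))  # True
```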
The notion of semi-random sources, also known as Santha-Vazirani (SV) sources, stands for a sequence of $n$ bits, where the dependence of the $i$-th bit on the previous $i-1$ bits is limited for every $i \in [n]$. If the dependence of the $i$-th bit on the remaining $n-1$ bits is limited, then this is a strong SV-source. Even strong SV-sources are known not to admit (universal) deterministic extractors, but they have seeded extractors, as their min-entropy is $\Omega(n)$. It is intuitively obvious that strong SV-sources are more than just high-min-entropy sources, and this work explores that intuition. Deterministic condensers are known not to exist for general high-min-entropy sources, and we construct, for any constants $\epsilon, \delta \in (0,1)$, a deterministic condenser that maps $n$ bits coming from a strong SV-source with bias at most $\delta$ to $\Omega(n)$ bits of min-entropy rate at least $1-\epsilon$. In conclusion, we observe that deterministic condensers are closely related to very strong extractors, a proposed strengthening of the notion of strong (seeded) extractors: in particular, our constructions can be viewed as very strong extractors for the family of strong Santha-Vazirani distributions. The notion of very strong extractors requires that the output remain unpredictable even to someone who knows not only the seed value (as in the case of strong extractors), but also the extractor's outputs corresponding to the same input value with each of the preceding seed values (say, under the lexicographic ordering). Very strong extractors closely resemble the original notion of SV-sources, except that the bits must satisfy the unpredictability requirement only on average.
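To make the definition concrete, here is a toy Python sampler for a (non-strong) SV-source (the adversary interface and constants are ours): at each step an adversary, seeing the prefix, biases the next bit by at most $\delta$, and the clamp enforces exactly the SV condition. Since every bit is 1 (or 0) with probability at most $1/2+\delta$, any output string has probability at most $(1/2+\delta)^n$, giving the $\Omega(n)$ min-entropy mentioned above.

```python
import random

def sv_source(n, delta, adversary, seed=0):
    """Sample n bits from a toy Santha-Vazirani source: at each step
    the adversary, as a function of the bits so far, proposes a
    probability for the next bit being 1, clamped to
    [1/2 - delta, 1/2 + delta]. Hence any n-bit output has
    probability at most (1/2 + delta)^n, i.e. min-entropy Omega(n)."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n):
        p = min(max(adversary(bits), 0.5 - delta), 0.5 + delta)
        bits.append(1 if rng.random() < p else 0)
    return bits

# An adversary that pushes the source toward repeating its last bit.
repeat_biased = lambda bs: 0.7 if bs and bs[-1] == 1 else 0.3
print(sv_source(16, delta=0.2, adversary=repeat_biased))
```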