
Pseudo-deterministic Streaming

Added by Sidhanth Mohanty
Publication date: 2019
Language: English





A pseudo-deterministic algorithm is a (randomized) algorithm which, when run multiple times on the same input, with high probability outputs the same result on all executions. Classic streaming algorithms, such as those for finding heavy hitters, approximate counting, $\ell_2$ approximation, and finding a nonzero entry in a vector (for turnstile algorithms), are not pseudo-deterministic. For example, in the instance of finding a nonzero entry in a vector, for any known low-space algorithm $A$, there exists a stream $x$ so that running $A$ twice on $x$ (using different randomness) would with high probability result in two different entries as the output. In this work, we study whether it is inherent that these algorithms output different values on different executions. That is, we ask whether these problems have low-memory pseudo-deterministic algorithms. For instance, we show that there is no low-memory pseudo-deterministic algorithm for finding a nonzero entry in a vector (given in a turnstile fashion), and also that there is no low-dimensional pseudo-deterministic sketching algorithm for $\ell_2$ norm estimation. We also exhibit problems which do have low-memory pseudo-deterministic algorithms but no low-memory deterministic algorithm, such as outputting a nonzero row of a matrix, or outputting a basis for the row-span of a matrix. We also investigate multi-pseudo-deterministic algorithms: algorithms which with high probability output one of a small number of options. We show the first lower bounds for such algorithms. This implies that there are streaming problems such that every low-space algorithm for the problem must have inputs on which there are many valid outputs, each with a significant probability of being output.
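The output-consistency requirement can be illustrated with a small toy sketch for the "find a nonzero entry" problem in the turnstile model. The sketch below materializes the whole vector, so it is not a low-memory streaming algorithm (the paper shows low memory is impossible for pseudo-deterministic algorithms here); the function names and the seed-based randomness are purely illustrative, standing in for a real randomized sparse-recovery routine.

```python
import random

def apply_turnstile(updates, n):
    """Materialize the vector x from a turnstile stream of (index, delta) updates.
    Toy helper only: real streaming algorithms cannot afford to store x explicitly."""
    x = [0] * n
    for i, delta in updates:
        x[i] += delta
    return x

def randomized_find_nonzero(updates, n, seed):
    """Stand-in for a randomized routine: returns *some* nonzero coordinate,
    but which one depends on the random seed, so repeated runs can disagree."""
    x = apply_turnstile(updates, n)
    support = [i for i, v in enumerate(x) if v != 0]
    return random.Random(seed).choice(support) if support else None

def pseudo_deterministic_find_nonzero(updates, n, seed):
    """A canonical answer: always the smallest-index nonzero coordinate, so every
    run agrees regardless of the seed.  The point of the paper is that no
    low-memory turnstile algorithm can achieve this kind of consistency."""
    x = apply_turnstile(updates, n)
    support = [i for i, v in enumerate(x) if v != 0]
    return min(support) if support else None

n = 8
stream = [(2, 1), (5, 3), (7, 2), (2, -1), (6, 4)]  # coordinate 2 cancels to zero

print([randomized_find_nonzero(stream, n, seed) for seed in range(5)])           # outputs may differ
print([pseudo_deterministic_find_nonzero(stream, n, seed) for seed in range(5)]) # always the same entry
```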




Read More

We study the problem of approximating the value of a Unique Game instance in the streaming model. A simple count of the number of constraints divided by $p$, the alphabet size of the Unique Game, gives a trivial $p$-approximation that can be computed in $O(\log n)$ space. Meanwhile, with high probability, a sample of $\tilde{O}(n)$ constraints suffices to estimate the optimal value to $(1+\epsilon)$ accuracy. We prove that any single-pass streaming algorithm that achieves a $(p-\epsilon)$-approximation requires $\Omega_\epsilon(\sqrt{n})$ space. Our proof is via a reduction from lower bounds for a communication problem that is a $p$-ary variant of the Boolean Hidden Matching problem studied in the literature. Given the utility of Unique Games as a starting point for reduction to other optimization problems, our strong hardness for approximating Unique Games could lead to downstream hardness results for streaming approximability for other CSP-like problems.
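The trivial $p$-approximation mentioned above is just constraint counting: a uniformly random assignment satisfies each Unique Games constraint with probability $1/p$, so the optimum lies between $m/p$ and $m$, and reporting $m/p$ needs only a counter. A minimal sketch (the function name and input format are illustrative, not from the paper):

```python
def trivial_unique_games_estimate(constraint_stream, p):
    """Single-pass p-approximation of the Unique Games optimum.
    Only a counter is kept (logarithmically many bits): since a random assignment
    satisfies each constraint with probability 1/p, OPT lies in [m/p, m], so m/p
    is within a factor p of OPT.  The constraint contents are ignored entirely."""
    m = 0
    for _constraint in constraint_stream:
        m += 1
    return m / p

# Example: 12 constraints over alphabet size p = 3; the estimate 4 is within a
# factor of 3 of the true optimum, whatever the constraints actually are.
constraints = [((u, u + 1), "some permutation") for u in range(12)]
print(trivial_unique_games_estimate(constraints, p=3))
```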
We consider the approximability of constraint satisfaction problems in the streaming setting. For every constraint satisfaction problem (CSP) on $n$ variables taking values in $\{0,\ldots,q-1\}$, we prove that improving over the trivial approximability by a factor of $q$ requires $\Omega(n)$ space even on instances with $O(n)$ constraints. We also identify a broad subclass of problems for which any improvement over the trivial approximability requires $\Omega(n)$ space. The key technical core is an optimal $q^{-(k-1)}$-inapproximability for the case where every constraint is given by a system of $k-1$ linear equations $\bmod\ q$ over $k$ variables. Prior to our work, no such hardness was known for an approximation factor less than $1/2$ for any CSP. Our work builds on and extends the work of Kapralov and Krachun (Proc. STOC 2019), who showed a linear lower bound on any non-trivial approximation of the max cut in graphs. This corresponds roughly to the case of Max $k$-LIN $\bmod\ q$ with $k=q=2$. Each one of the extensions provides non-trivial technical challenges that we overcome in this work.
We give a deterministic, nearly logarithmic-space algorithm that, given an undirected graph $G$, a positive integer $r$, and a set $S$ of vertices, approximates the conductance of $S$ in the $r$-step random walk on $G$ to within a factor of $1+\epsilon$, where $\epsilon>0$ is an arbitrarily small constant. More generally, our algorithm computes an $\epsilon$-spectral approximation to the normalized Laplacian of the $r$-step walk. Our algorithm combines the derandomized square graph operation (Rozenman and Vadhan, 2005), which we recently used for solving Laplacian systems in nearly logarithmic space (Murtagh, Reingold, Sidford, and Vadhan, 2017), with ideas from (Cheng, Cheng, Liu, Peng, and Teng, 2015), which gave an algorithm that is time-efficient (while ours is space-efficient) and randomized (while ours is deterministic) for the case of even $r$ (while ours works for all $r$). Along the way, we provide some new results that generalize technical machinery and yield improvements over previous work. First, we obtain a nearly linear-time randomized algorithm for computing a spectral approximation to the normalized Laplacian for odd $r$. Second, we define and analyze a generalization of the derandomized square for irregular graphs and for sparsifying the product of two distinct graphs. As part of this generalization, we also give a strongly explicit construction of expander graphs of every size.
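For concreteness, the quantity being approximated can be computed by brute force with dense matrix powers, as in the sketch below. This reference computation uses one standard convention for the conductance of $S$ in the $r$-step walk graph (edge weights $d_i (W^r)_{ij}$), uses $\Omega(n^2)$ space, and is not the paper's nearly logarithmic-space algorithm; it only pins down the target quantity.

```python
import numpy as np

def r_step_conductance(A, S, r):
    """Brute-force reference value: conductance of vertex set S in the r-step
    random walk on an undirected graph with symmetric adjacency matrix A.
    Dense and space-hungry -- the paper's contribution is a (1+eps)-approximation
    computed deterministically in nearly logarithmic space instead."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    W = A / deg[:, None]                   # one-step random-walk matrix
    Wr = np.linalg.matrix_power(W, r)      # r-step walk
    S = np.asarray(S)
    notS = np.setdiff1d(np.arange(A.shape[0]), S)
    # Weight crossing the cut in the r-step walk graph, w_ij = deg_i * (W^r)_ij.
    cut = (deg[S][:, None] * Wr[np.ix_(S, notS)]).sum()
    return cut / min(deg[S].sum(), deg[notS].sum())

# Tiny example: a 6-cycle, S = one half of the cycle, r = 3 steps.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
print(r_step_conductance(A, S=[0, 1, 2], r=3))
```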
A Boolean constraint satisfaction problem (CSP), Max-CSP$(f)$, is a maximization problem specified by a constraint $f:\{-1,1\}^k\to\{0,1\}$. An instance of the problem consists of $m$ constraint applications on $n$ Boolean variables, where each constraint application applies the constraint to $k$ literals chosen from the $n$ variables and their negations. The goal is to compute the maximum number of constraints that can be satisfied by a Boolean assignment to the $n$ variables. In the $(\gamma,\beta)$-approximation version of the problem for parameters $\gamma \geq \beta \in [0,1]$, the goal is to distinguish instances where at least a $\gamma$ fraction of the constraints can be satisfied from instances where at most a $\beta$ fraction of the constraints can be satisfied. In this work we consider the approximability of Max-CSP$(f)$ in the (dynamic) streaming setting, where constraints are inserted (and may also be deleted in the dynamic setting) one at a time. We completely characterize the approximability of all Boolean CSPs in the dynamic streaming setting. Specifically, given $f$, $\gamma$ and $\beta$, we show that either (1) the $(\gamma,\beta)$-approximation version of Max-CSP$(f)$ has a probabilistic dynamic streaming algorithm using $O(\log n)$ space, or (2) for every $\varepsilon > 0$ the $(\gamma-\varepsilon,\beta+\varepsilon)$-approximation version of Max-CSP$(f)$ requires $\Omega(\sqrt{n})$ space for probabilistic dynamic streaming algorithms. We also extend previously known results in the insertion-only setting to a wide variety of cases, and in particular the case of $k=2$, where we get a dichotomy, and the case when the satisfying assignments of $f$ support a distribution on $\{-1,1\}^k$ with uniform marginals.
In this work we relate the deterministic complexity of factoring polynomials (over finite fields) to certain combinatorial objects we call $m$-schemes. We extend the known conditional deterministic subexponential-time polynomial factoring algorithm for finite fields to get an underlying $m$-scheme. We demonstrate how the properties of $m$-schemes relate to improvements in the deterministic complexity of factoring polynomials over finite fields assuming the generalized Riemann Hypothesis (GRH). In particular, we give the first deterministic polynomial-time algorithm (assuming GRH) to find a nontrivial factor of a polynomial of prime degree $n$ where $n-1$ is a smooth number.
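To see the task concretely, here is a small sketch that factors a polynomial of prime degree $n=7$ (with $n-1=6=2\cdot 3$ smooth) over a finite field using an off-the-shelf library routine. This is only an illustration of the problem being solved; the library's factoring method is not the paper's deterministic, GRH-based algorithm, and the particular polynomial and prime are arbitrary choices.

```python
from sympy import symbols, factor_list

x = symbols('x')

# Degree n = 7 is prime and n - 1 = 6 is smooth (only small prime factors).
# factor_list with the modulus keyword factors over GF(11); the task the paper
# studies is finding a nontrivial factor of such a polynomial deterministically.
f = x**7 + 3*x + 5
print(factor_list(f, x, modulus=11))
```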