
The complexity of computation in bit streams

Posted by Raphael Clifford
Publication date: 2015
Research field: Informatics Engineering
Paper language: English





We revisit the complexity of online computation in the cell probe model. We consider a class of problems where we are first given a fixed pattern or vector $F$ of $n$ symbols and then one symbol arrives at a time in a stream. After each symbol has arrived we must output some function of $F$ and the $n$-length suffix of the arriving stream. Cell probe bounds of $\Omega(\delta\lg{n}/w)$ have previously been shown for both convolution and Hamming distance in this setting, where $\delta$ is the size of a symbol in bits and $w \in \Omega(\lg{n})$ is the cell size in bits. However, when $\delta$ is a constant, as it is in many natural situations, these previous results no longer give us non-trivial bounds. We introduce a new lop-sided information transfer proof technique which enables us to prove meaningful lower bounds even for constant size input alphabets. We use our new framework to prove an amortised cell probe lower bound of $\Omega(\lg^2 n/(w \cdot \lg\lg n))$ time per arriving bit for an online version of a well studied problem known as pattern matching with address errors. This is the first non-trivial cell probe lower bound for any online problem on bit streams that still holds when the cell sizes are large. We also show the same bound for online convolution conditioned on a new combinatorial conjecture related to Toeplitz matrices.
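
To make the streaming model concrete, below is a minimal Python sketch of the naive $O(n)$-per-symbol baseline for online convolution as the abstract defines it: after each arriving symbol, output the inner product of the fixed pattern $F$ with the $n$-length suffix of the stream. All names and data are illustrative; the lower bound above concerns how much faster any cell-probe algorithm can be.

```python
from collections import deque

def online_convolution(F, stream):
    # Naive baseline: after each arriving symbol, output the inner
    # product of the fixed pattern F with the last n stream symbols.
    # (Outputs begin once n symbols have arrived.)
    n = len(F)
    suffix = deque(maxlen=n)  # sliding window of the n most recent symbols
    for symbol in stream:
        suffix.append(symbol)
        if len(suffix) == n:
            yield sum(f * s for f, s in zip(F, suffix))

# Example on a bit stream: one output per bit after the first n-1 bits.
F = [1, 0, 1, 1]
print(list(online_convolution(F, [1, 1, 0, 1, 0, 1, 1])))  # [2, 2, 1, 3]
```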




Read also

In the communication problem $\mathbf{UR}$ (universal relation) [KRW95], Alice and Bob respectively receive $x, y \in \{0,1\}^n$ with the promise that $x \neq y$. The last player to receive a message must output an index $i$ such that $x_i \neq y_i$. We prove that the randomized one-way communication complexity of this problem in the public coin model is exactly $\Theta(\min\{n, \log(1/\delta)\log^2(\frac{n}{\log(1/\delta)})\})$ for failure probability $\delta$. Our lower bound holds even if promised $\mathop{support}(y) \subset \mathop{support}(x)$. As a corollary, we obtain optimal lower bounds for $\ell_p$-sampling in strict turnstile streams for $0 \le p < 2$, as well as for the problem of finding duplicates in a stream. Our lower bounds do not need to use large weights, and hold even if promised $x \in \{0,1\}^n$ at all points in the stream. We give two different proofs of our main result. The first proof demonstrates that any algorithm $\mathcal{A}$ solving sampling problems in turnstile streams in low memory can be used to encode subsets of $[n]$ of certain sizes into a number of bits below the information theoretic minimum. Our encoder makes adaptive queries to $\mathcal{A}$ throughout its execution, but done carefully so as not to violate correctness. This is accomplished by injecting random noise into the encoder's interactions with $\mathcal{A}$, which is loosely motivated by techniques in differential privacy. Our second proof is via a novel randomized reduction from Augmented Indexing [MNSW98] which needs to interact with $\mathcal{A}$ adaptively. To handle the adaptivity we identify certain likely interaction patterns and union bound over them to guarantee correct interaction on all of them. To guarantee correctness, it is important that the interaction hides some of its randomness from $\mathcal{A}$ in the reduction.
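
As a point of reference for the $\ell_p$-sampling corollary, here is a linear-memory Python sketch of what an exact $\ell_p$-sampler must do in a strict turnstile stream; the names are illustrative, and the lower bounds above concern samplers that achieve this output distribution in far less memory.

```python
import random

def lp_sample(updates, n, p):
    # Exact (linear-memory) reference sampler: apply turnstile updates
    # (i, delta) to a length-n vector x, then return index i with
    # probability |x_i|^p / ||x||_p^p.
    x = [0] * n
    for i, delta in updates:
        x[i] += delta
        assert x[i] >= 0, "strict turnstile: coordinates stay non-negative"
    weights = [abs(v) ** p if v else 0.0 for v in x]  # v == 0 has weight 0
    return random.choices(range(n), weights=weights, k=1)[0]

# Example: x ends up as [1, 0, 1, 0, 2], so with p = 1 index 4 is
# returned with probability 1/2 and indices 0 and 2 with 1/4 each.
print(lp_sample([(0, 3), (2, 1), (0, -2), (4, 2)], n=5, p=1))
```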
Bobkov, Houdré, and the last author introduced a Poincaré-type functional parameter, $\lambda_\infty$, of a graph $G$. They related $\lambda_\infty$ to the \emph{vertex expansion} of the graph via a Cheeger-type inequality, analogous to the inequality relating the spectral gap of the graph, $\lambda_2$, to its \emph{edge expansion}. While $\lambda_2$ can be computed efficiently, the computational complexity of $\lambda_\infty$ has remained an open question. Following the work of the second author with Raghavendra and Vempala, wherein the complexity of $\lambda_\infty$ was related to the so-called small-set expansion (SSE) problem, it has been believed that computing $\lambda_\infty$ is a hard problem. We confirm this conjecture by proving that computing $\lambda_\infty$ is indeed NP-hard, even for weighted trees. Our gadget further proves NP-hardness of computing the \emph{spread constant} of a weighted tree, i.e., a geometric measure of the graph, introduced by Alon, Boppana, and Spencer in the context of deriving an asymptotic isoperimetric inequality for Cartesian products of graphs. We conclude this case by providing a fully polynomial time approximation scheme. We further study a generalization of the spread constant in the machine learning literature, namely the \emph{maximum variance embedding} problem. For trees, we provide fast combinatorial algorithms that avoid solving a semidefinite relaxation of the problem. On the other hand, for general graphs, we propose a randomized projection method that can outperform the optimal orthogonal projection, i.e., PCA, classically used for rounding of the optimum lifted solution (to the SDP relaxation) of the problem.
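
The contrast the abstract draws, that $\lambda_2$ is easy to compute while $\lambda_\infty$ is NP-hard, can be illustrated in a few lines of Python. This sketch uses the combinatorial Laplacian $L = D - A$ as one common convention, which may differ from the paper's exact normalization.

```python
import numpy as np

def spectral_gap(adjacency):
    # lambda_2 is the second-smallest eigenvalue of the graph
    # Laplacian L = D - A: a routine eigenvalue computation.
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigvalsh(L)[1]  # eigvalsh returns ascending order

# Example: the 4-cycle has Laplacian spectrum {0, 2, 2, 4}, so lambda_2 = 2.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(spectral_gap(C4))  # 2.0
```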
To date, the only way to argue polynomial lower bounds for dynamic algorithms is via fine-grained complexity arguments. These arguments rely on strong assumptions about specific problems such as the Strong Exponential Time Hypothesis (SETH) and the Online Matrix-Vector Multiplication Conjecture (OMv). While they have led to many exciting discoveries, dynamic algorithms still miss out on some benefits and lessons from the traditional ``coarse-grained'' approach that relates together classes of problems such as P and NP. In this paper we initiate the study of coarse-grained complexity theory for dynamic algorithms. Among the questions this theory can answer is the following: what if dynamic Orthogonal Vector (OV) is easy in the cell-probe model? A research program for proving polynomial unconditional lower bounds for dynamic OV in the cell-probe model is motivated by the fact that many conditional lower bounds can be shown via reductions from the dynamic OV problem. Since the cell-probe model is more powerful than word RAM and has historically allowed smaller upper bounds, it might turn out that dynamic OV is easy in the cell-probe model, making this research direction infeasible. Our theory implies that if this is the case, there will be very interesting algorithmic consequences: if dynamic OV can be maintained in polylogarithmic worst-case update time in the cell-probe model, then so can several important dynamic problems such as $k$-edge connectivity, $(1+\epsilon)$-approximate mincut, $(1+\epsilon)$-approximate matching, planar nearest neighbors, Chan's subset union and 3-vs-4 diameter. The same conclusion can be made when we replace dynamic OV by, e.g., subgraph connectivity, single source reachability, Chan's subset union, and 3-vs-4 diameter. Lower bounds for $k$-edge connectivity via dynamic OV? (See the full abstract in the PDF file.)
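
For readers unfamiliar with the dynamic OV problem mentioned above, the following naive Python sketch shows one natural formulation: maintain a set of 0/1 vectors under insertions and report whether an orthogonal pair exists. The interface is illustrative and the paper's exact update/query model may differ; the open question is whether polylogarithmic update time is possible in the cell-probe model, whereas this baseline pays $O(|S| \cdot d)$ per insertion.

```python
class DynamicOV:
    # Naive dynamic Orthogonal Vectors: O(|S| * d) time per insertion.
    def __init__(self):
        self.vectors = []
        self.has_pair = False

    def insert(self, v):
        # Compare the new vector against every stored one; a pair of
        # 0/1 vectors is orthogonal iff they share no 1-coordinate.
        for u in self.vectors:
            if sum(a * b for a, b in zip(u, v)) == 0:
                self.has_pair = True
        self.vectors.append(v)

ds = DynamicOV()
for v in [(1, 0, 1), (0, 1, 1), (0, 1, 0)]:
    ds.insert(v)
print(ds.has_pair)  # True: (1,0,1) and (0,1,0) are orthogonal
```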
In this paper we study the fine-grained complexity of finding exact and approximate solutions to problems in P. Our main contribution is showing reductions from exact to approximate solutions for a host of such problems. As one (notable) example, we show that the Closest-LCS-Pair problem (given two sets of strings $A$ and $B$, compute exactly the maximum $\textsf{LCS}(a, b)$ with $(a, b) \in A \times B$) is equivalent to its approximation version (under near-linear time reductions, and with a constant approximation factor). More generally, we identify a class of problems, which we call BP-Pair-Class, comprising both exact and approximate solutions, and show that they are all equivalent under near-linear time reductions. Exploring this class and its properties, we also show:

$\bullet$ Under the NC-SETH assumption (a significantly more relaxed assumption than SETH), solving any of the problems in this class requires essentially quadratic time.

$\bullet$ Modest improvements on the running time of known algorithms (shaving log factors) would imply that NEXP is not in non-uniform $\textsf{NC}^1$.

$\bullet$ Finally, we leverage our techniques to show new barriers for deterministic approximation algorithms for LCS.

At the heart of these new results is a deep connection between interactive proof systems for bounded-space computations and the fine-grained complexity of exact and approximate solutions to problems in P. In particular, our results build on the proof techniques from the classical IP = PSPACE result.
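
The Closest-LCS-Pair problem is defined explicitly in the abstract, so a brute-force Python reference is easy to state; this exact all-pairs version is what the paper shows to be equivalent to its constant-factor approximation (the helper names below are illustrative, not from the paper).

```python
from itertools import product

def lcs(a, b):
    # Classic O(|a| * |b|) dynamic program for longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a)):
        for j in range(len(b)):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(a)][len(b)]

def closest_lcs_pair(A, B):
    # Brute force over all pairs: max LCS(a, b) with (a, b) in A x B.
    return max(lcs(a, b) for a, b in product(A, B))

print(closest_lcs_pair(["abcde", "xyz"], ["ace", "zzy"]))  # 3, via "ace"
```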