
Revisionist Simulations: A New Approach to Proving Space Lower Bounds

Posted by Leqi Zhu
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





Determining the space complexity of $x$-obstruction-free $k$-set agreement for $x \leq k$ is an open problem. In $x$-obstruction-free protocols, processes are required to return in executions where at most $x$ processes take steps. The best known upper bound on the number of registers needed to solve this problem among $n > k$ processes is $n-k+x$ registers. No general lower bound better than $2$ was known. We prove that any $x$-obstruction-free protocol solving $k$-set agreement among $n > k$ processes uses at least $\lfloor (n-x)/(k+1-x) \rfloor + 1$ registers. Our main tool is a simulation that serves as a reduction from the impossibility of deterministic wait-free $k$-set agreement: if a protocol uses fewer registers, then it is possible for $k+1$ processes to simulate the protocol and deterministically solve $k$-set agreement in a wait-free manner, which is impossible. A critical component of the simulation is the ability of simulating processes to revise the past of simulated processes. We introduce a new augmented snapshot object, which facilitates this. We also prove that any space lower bound on the number of registers used by obstruction-free protocols applies to protocols that satisfy nondeterministic solo termination. Hence, our lower bound of $\lfloor (n-1)/k \rfloor + 1$ for the obstruction-free ($x=1$) case also holds for randomized wait-free protocols. In particular, this gives a tight lower bound of exactly $n$ registers for solving obstruction-free and randomized wait-free consensus. Finally, our new techniques can be applied to get a space lower bound of $\lfloor n/2 \rfloor + 1$ for $\epsilon$-approximate agreement, for sufficiently small $\epsilon$, which requires participating processes to return values within $\epsilon$ of each other. The best known upper bounds are $\lceil \log(1/\epsilon) \rceil$ and $n$, while no general lower bounds were known.
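To make the bound concrete, here is a worked instantiation (the substitutions are ours, based only on the formulas stated above). Consensus is $1$-set agreement, and obstruction-freedom corresponds to $x=1$, so setting $k=1$ and $x=1$ gives

\[
  \left\lfloor \frac{n-x}{k+1-x} \right\rfloor + 1
  \;=\; \left\lfloor \frac{n-1}{1} \right\rfloor + 1
  \;=\; n,
\]

which matches the trivial upper bound of $n$ registers and explains why the consensus bound is tight. For general $k$ with $x=1$, the bound $\lfloor (n-1)/k \rfloor + 1$ is still roughly a factor of $k$ below the best known upper bound of $n-k+1$ registers.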


Read also

Brandão and Svore very recently gave quantum algorithms for approximately solving semidefinite programs, which in some regimes are faster than the best-possible classical algorithms in terms of the dimension $n$ of the problem and the number $m$ of constraints, but worse in terms of various other parameters. In this paper we improve their algorithms in several ways, getting better dependence on those other parameters. To this end we develop new techniques for quantum algorithms, for instance a general way to efficiently implement smooth functions of sparse Hamiltonians, and a generalized minimum-finding procedure. We also show limits on this approach to quantum SDP-solvers, for instance for combinatorial optimization problems that have a lot of symmetry. Finally, we prove some general lower bounds showing that in the worst case, the complexity of every quantum LP-solver (and hence also SDP-solver) has to scale linearly with $mn$ when $m \approx n$, which is the same as classical.
There are distributed graph algorithms for finding maximal matchings and maximal independent sets in $O(\Delta + \log^* n)$ communication rounds; here $n$ is the number of nodes and $\Delta$ is the maximum degree. The lower bound by Linial (1987, 1992) shows that the dependency on $n$ is optimal: these problems cannot be solved in $o(\log^* n)$ rounds even if $\Delta = 2$. However, the dependency on $\Delta$ is a long-standing open question, and there is currently an exponential gap between the upper and lower bounds. We prove that the upper bounds are tight. We show that maximal matchings and maximal independent sets cannot be found in $o(\Delta + \log\log n / \log\log\log n)$ rounds with any randomized algorithm in the LOCAL model of distributed computing. As a corollary, it follows that there is no deterministic algorithm for maximal matchings or maximal independent sets that runs in $o(\Delta + \log n / \log\log n)$ rounds; this is an improvement over prior lower bounds also as a function of $n$.
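For quick reference, the round-complexity bounds stated in this abstract line up as follows (our restatement; "not solvable in $o(f)$ rounds" is written as an $\Omega(f)$-type bound):

\[
  \underbrace{O(\Delta + \log^* n)}_{\text{upper bound}}
  \qquad
  \underbrace{\Omega\!\left(\Delta + \frac{\log\log n}{\log\log\log n}\right)}_{\text{randomized lower bound}}
  \qquad
  \underbrace{\Omega\!\left(\Delta + \frac{\log n}{\log\log n}\right)}_{\text{deterministic lower bound}}
\]

so the linear dependency on $\Delta$ in the upper bound cannot be improved.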
We study the problem of high-dimensional linear regression in a robust model where an $\epsilon$-fraction of the samples can be adversarially corrupted. We focus on the fundamental setting where the covariates of the uncorrupted samples are drawn from a Gaussian distribution $\mathcal{N}(0, \Sigma)$ on $\mathbb{R}^d$. We give nearly tight upper bounds and computational lower bounds for this problem. Specifically, our main contributions are as follows: For the case that the covariance matrix is known to be the identity, we give a sample near-optimal and computationally efficient algorithm that outputs a candidate hypothesis vector $\widehat{\beta}$ which approximates the unknown regression vector $\beta$ within $\ell_2$-norm $O(\epsilon \log(1/\epsilon) \sigma)$, where $\sigma$ is the standard deviation of the random observation noise. An error of $\Omega(\epsilon \sigma)$ is information-theoretically necessary, even with infinite sample size. Prior work gave an algorithm for this problem with sample complexity $\tilde{\Omega}(d^2/\epsilon^2)$ whose error guarantee scales with the $\ell_2$-norm of $\beta$. For the case of unknown covariance, we show that we can efficiently achieve the same error guarantee as in the known covariance case using an additional $\tilde{O}(d^2/\epsilon^2)$ unlabeled examples. On the other hand, an error of $O(\epsilon \sigma)$ can be information-theoretically attained with $O(d/\epsilon^2)$ samples. We prove a Statistical Query (SQ) lower bound providing evidence that this quadratic tradeoff in the sample size is inherent. More specifically, we show that any polynomial-time SQ learning algorithm for robust linear regression (in Huber's contamination model) with estimation complexity $O(d^{2-c})$, where $c>0$ is an arbitrarily small constant, must incur an error of $\Omega(\sqrt{\epsilon} \sigma)$.
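The quadratic sample-size tradeoff claimed above can be summarized as follows (our restatement of the abstract's bounds for the unknown-covariance case):

\[
  O(d/\epsilon^2) \text{ samples} \;\Longrightarrow\; \text{error } O(\epsilon\sigma) \text{ (information-theoretically)},
\]
\[
  \text{poly-time SQ algorithm with estimation complexity } O(d^{2-c}) \;\Longrightarrow\; \text{error } \Omega(\sqrt{\epsilon}\,\sigma).
\]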
The emergence of systems with non-volatile main memory (NVM) increases the interest in the design of \emph{recoverable concurrent objects} that are robust to crash-failures, since their operations are able to recover from such failures by using state retained in NVM. Of particular interest are recoverable algorithms that, in addition to ensuring object consistency, also provide \emph{detectability}, a correctness condition requiring that the recovery code can infer if the failed operation was linearized or not and, in the former case, obtain its response. In this work, we investigate the space complexity of detectable algorithms and the external support they require. We make the following three contributions. First, we present the first wait-free bounded-space detectable read/write and CAS object implementations. Second, we prove that the bit complexity of every $N$-process obstruction-free detectable CAS implementation, assuming values from a domain of size at least $N$, is $\Omega(N)$. Finally, we prove that the following holds for obstruction-free detectable implementations of a large class of objects: their recoverable operations must be provided with \emph{auxiliary state} -- state that is not required by the non-recoverable counterpart implementation -- whose value must be provided from outside the operation, either by the system or by the caller of the operation. In contrast, this external support is, in general, not required if the recoverable algorithm is not detectable.
Andris Ambainis (2005)
We give a new version of the adversary method for proving lower bounds on quantum query algorithms. The new method is based on analyzing the eigenspace structure of the problem at hand. We use it to prove a new and optimal strong direct product theorem for 2-sided error quantum algorithms computing $k$ independent instances of a symmetric Boolean function: if the algorithm uses significantly less than $k$ times the number of queries needed for one instance of the function, then its success probability is exponentially small in $k$. We also use the polynomial method to prove a direct product theorem for 1-sided error algorithms for $k$ threshold functions with a stronger bound on the success probability. Finally, we present a quantum algorithm for evaluating solutions to systems of linear inequalities, and use our direct product theorems to show that the time-space tradeoff of this algorithm is close to optimal.
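As a rough formalization of the strong direct product statement above (the notation is ours: $Q_2(f)$ denotes the 2-sided-error quantum query complexity of $f$, and $f^{(k)}$ denotes $k$ independent instances of $f$):

\[
  T \;\ll\; k \cdot Q_2(f)
  \;\Longrightarrow\;
  \Pr\!\left[\text{a } T\text{-query algorithm computes all } k \text{ instances of } f^{(k)} \text{ correctly}\right] \;\le\; 2^{-\Omega(k)}.
\]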