
Optimal quantum adversary lower bounds for ordered search

Added by Andrew M. Childs
Publication date: 2007
Field: Physics
Language: English





The goal of the ordered search problem is to find a particular item in an ordered list of n items. Using the adversary method, Hoyer, Neerbek, and Shi proved a quantum lower bound for this problem of (1/pi) ln n + Theta(1). Here, we find the exact value of the best possible quantum adversary lower bound for a symmetrized version of ordered search (whose query complexity differs from that of the original problem by at most 1). Thus we show that the best lower bound for ordered search that can be proved by the adversary method is (1/pi) ln n + O(1). Furthermore, we show that this remains true for the generalized adversary method allowing negative weights.
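To put the bound in perspective, here is a small numeric sketch of my own (not from the paper) that evaluates (1/pi) ln n for a few list sizes and compares it with the ceil(log2 n) queries used by classical binary search.

```python
import math

# A quick numeric illustration (mine, not from the paper) of the bound in
# the abstract: the best adversary lower bound for ordered search is
# (1/pi) ln n + O(1) queries, compared with the ceil(log2 n) queries of
# classical binary search.
for n in (10**3, 10**6, 10**9):
    adversary_lb = math.log(n) / math.pi   # (1/pi) ln n, O(1) term ignored
    classical = math.ceil(math.log2(n))    # classical binary search
    print(f"n={n:>13,}: adversary lower bound ~ {adversary_lb:4.1f} queries, "
          f"binary search = {classical} queries")
```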




Read More

Harry Buhrman (1998)
We prove lower bounds on the error probability of a quantum algorithm for searching through an unordered list of N items, as a function of the number T of queries it makes. In particular, if T=O(sqrt{N}) then the error is lower bounded by a constant. If we want error <1/2^N then we need T=Omega(N) queries. We apply this to show that a quantum computer cannot do much better than a classical computer when amplifying the success probability of an RP-machine. A classical computer can achieve error <=1/2^k using k applications of the RP-machine; a quantum computer still needs at least ck applications for this (when treating the machine as a black box), where c>0 is a constant independent of k. Furthermore, we prove a lower bound of Omega(sqrt{log N}/loglog N) queries for quantum bounded-error search of an ordered list of N items.
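As a small aside (mine, not the paper's), the classical amplification figure quoted above is just the product of k independent one-sided failure probabilities; the sketch below prints that bound next to the paper's conclusion that a black-box quantum approach still needs Omega(k) applications.

```python
# A tiny numeric restatement (mine, not the paper's) of the amplification
# comparison above: k independent runs of an RP machine give one-sided
# error at most 1/2^k, whereas the paper shows that a quantum computer
# treating the machine as a black box still needs at least c*k runs.
for k in (5, 10, 20, 40):
    classical_error_bound = 0.5 ** k   # error <= 1/2^k after k classical runs
    print(f"k={k:>2}: classical error bound {classical_error_bound:.2e}; "
          f"quantum runs required: Omega(k)")
```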
Peter Hoyer (2005)
The quantum adversary method is a versatile method for proving lower bounds on quantum algorithms. It yields tight bounds for many computational problems, is robust in having many equivalent formulations, and has natural connections to classical lower bounds. A further nice property of the adversary method is that it behaves very well with respect to composition of functions. We generalize the adversary method to include costs--each bit of the input can be given an arbitrary positive cost representing the difficulty of querying that bit. We use this generalization to exactly capture the adversary bound of a composite function in terms of the adversary bounds of its component functions. Our results generalize and unify previously known composition properties of adversary methods, and yield as a simple corollary the Omega(sqrt{n}) bound of Barnum and Saks on the quantum query complexity of read-once functions.
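The abstract refers to the positive-weight adversary bound ADV(f) = max_Gamma ||Gamma|| / max_i ||Gamma_i||. As a concrete illustration of my own (not taken from the paper, and the function adversary_bound_or is a name I made up), the sketch below evaluates this quantity numerically for OR_n using the standard adversary matrix that relates the all-zeros input to each Hamming-weight-one input; it recovers the familiar sqrt(n) bound.

```python
import numpy as np

# Minimal sketch of the basic adversary bound, evaluated for OR_n with the
# standard adversary matrix: the all-zeros input is related with weight 1
# to each input of Hamming weight one.
def adversary_bound_or(n):
    inputs = [np.zeros(n, dtype=int)] + [np.eye(n, dtype=int)[i] for i in range(n)]
    m = len(inputs)
    gamma = np.zeros((m, m))
    for j in range(1, m):
        gamma[0, j] = gamma[j, 0] = 1.0          # relate 0^n to each e_i
    norm = np.linalg.norm(gamma, 2)              # spectral norm ||Gamma||
    norms_i = []
    for i in range(n):
        # Gamma_i keeps only entries (x, y) with x_i != y_i.
        gi = np.array([[gamma[a, b] if inputs[a][i] != inputs[b][i] else 0.0
                        for b in range(m)] for a in range(m)])
        norms_i.append(np.linalg.norm(gi, 2))
    return norm / max(norms_i)

for n in (4, 16, 64):
    print(n, adversary_bound_or(n), np.sqrt(n))  # matches sqrt(n)
```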
We study the problem of local search on a graph. Given a real-valued black-box function f on the graph's vertices, this is the problem of determining a local minimum of f--a vertex v for which f(v) is no more than f evaluated at any of v's neighbors. In 1983, Aldous gave the first strong lower bounds for the problem, showing that any randomized algorithm requires Omega(2^{n/2 - o(1)}) queries to determine a local minimum on the n-dimensional hypercube. The next major step forward was not until 2004, when Aaronson, introducing a new method for query complexity bounds, both strengthened this lower bound to Omega(2^{n/2}/n^2) and gave an analogous lower bound on the quantum query complexity. While these bounds are very strong, they are known only for narrow families of graphs (hypercubes and grids). We show how to generalize Aaronson's techniques in order to give randomized (and quantum) lower bounds on the query complexity of local search for the family of vertex-transitive graphs. In particular, we show that for any vertex-transitive graph G of N vertices and diameter d, the randomized and quantum query complexities for local search on G are Omega(N^{1/2}/(d log N)) and Omega(N^{1/4}/sqrt(d log N)), respectively.
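As a quick arithmetic check of my own (not from the paper), specializing the general bounds above to the n-dimensional hypercube (N = 2^n vertices, diameter d = n, and taking "log" as log base 2 since the Omega hides constants) recovers a randomized bound of the same shape as Aaronson's Omega(2^{n/2}/n^2).

```python
import math

# Evaluate the general vertex-transitive lower bounds on the hypercube
# (N = 2^n, d = n) and compare with the hypercube-specific bound 2^{n/2}/n^2.
for n in (10, 20, 30):
    N, d = 2**n, n
    randomized = math.sqrt(N) / (d * math.log2(N))     # N^{1/2} / (d log N)
    quantum = N**0.25 / math.sqrt(d * math.log2(N))    # N^{1/4} / sqrt(d log N)
    hypercube = math.sqrt(N) / n**2                    # 2^{n/2} / n^2
    print(f"n={n}: general randomized ~ {randomized:.1f}, "
          f"general quantum ~ {quantum:.1f}, hypercube-specific ~ {hypercube:.1f}")
```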
Differentially private (DP) machine learning allows us to train models on private data while limiting data leakage. DP formalizes this data leakage through a cryptographic game, where an adversary must predict if a model was trained on a dataset D, or a dataset D' that differs in just one example. If observing the training algorithm does not meaningfully increase the adversary's odds of successfully guessing which dataset the model was trained on, then the algorithm is said to be differentially private. Hence, the purpose of privacy analysis is to upper bound the probability that any adversary could successfully guess which dataset the model was trained on. In our paper, we instantiate this hypothetical adversary in order to establish lower bounds on the probability that this distinguishing game can be won. We use this adversary to evaluate the importance of the adversary capabilities allowed in the privacy analysis of DP training algorithms. For DP-SGD, the most common method for training neural networks with differential privacy, our lower bounds are tight and match the theoretical upper bound. This implies that in order to prove better upper bounds, it will be necessary to make use of additional assumptions. Fortunately, we find that our attacks are significantly weaker when additional (realistic) restrictions are put in place on the adversary's capabilities. Thus, in the practical setting common to many real-world deployments, there is a gap between our lower bounds and the upper bounds provided by the analysis: differential privacy is conservative and adversaries may not be able to leak as much information as suggested by the theoretical bound.
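To make the distinguishing game concrete, here is a toy sketch of my own: it is not the paper's attack, it replaces DP-SGD with a simple Laplace-mechanism "training" step, and the names noisy_sum and play_game are invented for illustration. The adversary sees one noisy output and guesses whether the differing example was included; for pure epsilon-DP its accuracy can never exceed e^eps/(1+e^eps).

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sum(data, epsilon):
    # Laplace mechanism: values lie in [0, 1], so the sum has sensitivity 1
    # under add/remove-one adjacency, and scale 1/epsilon gives epsilon-DP.
    return data.sum() + rng.laplace(scale=1.0 / epsilon)

def play_game(epsilon, trials=20000):
    d = rng.random(99)             # records shared by D and D'
    extra = 1.0                    # the one differing example
    wins = 0
    for _ in range(trials):
        include = rng.integers(2)  # secret bit: D (without) or D' (with)
        data = np.append(d, extra) if include else d
        out = noisy_sum(data, epsilon)
        # Likelihood-ratio adversary: guess "included" if the output is
        # closer to sum(D') than to sum(D).
        guess = int(out > d.sum() + extra / 2)
        wins += (guess == include)
    return wins / trials

for eps in (0.1, 1.0, 4.0):
    acc = play_game(eps)
    # Pure eps-DP caps the adversary's true accuracy at e^eps / (1 + e^eps).
    print(f"eps={eps}: empirical accuracy {acc:.3f}, "
          f"DP cap {np.exp(eps) / (1 + np.exp(eps)):.3f}")
```

For this simple mechanism the optimal adversary nearly saturates the cap at small epsilon, which mirrors the abstract's point that lower bounds from an instantiated adversary can match the theoretical upper bound.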
Robert Spalek (2013)
We prove a quantum query lower bound Omega(n^{(d+1)/(d+2)}) for the problem of deciding whether an input string of size n contains a k-tuple which belongs to a fixed orthogonal array on k factors of strength d<=k-1 and index 1, provided that the alphabet size is sufficiently large. Our lower bound is tight when d=k-1. The orthogonal array problem includes the following problems as special cases: k-sum problem with d=k-1, k-distinctness problem with d=1, k-pattern problem with d=0, (d-1)-degree problem with 1<=d<=k-1, unordered search with d=0 and k=1, and graph collision with d=0 and k=2.
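For quick reference, a small script of my own that evaluates the exponent (d+1)/(d+2) for the special cases listed above (the choice k=3 in the k-sum example is mine):

```python
# Evaluate the exponent (d+1)/(d+2) of the lower bound n^{(d+1)/(d+2)}
# for the special cases named in the abstract.
cases = {
    "k-sum (d=k-1), e.g. k=3":     2,  # exponent 3/4
    "k-distinctness (d=1)":        1,  # exponent 2/3
    "k-pattern (d=0)":             0,  # exponent 1/2
    "unordered search (d=0, k=1)": 0,  # exponent 1/2, i.e. Grover's sqrt(n)
    "graph collision (d=0, k=2)":  0,  # exponent 1/2
}
for name, d in cases.items():
    print(f"{name}: n^({d + 1}/{d + 2}) = n^{(d + 1) / (d + 2):.3f}")
```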