
From Quantum Query Complexity to State Complexity

Added by Shenggen Zheng
Publication date: 2014
Language: English





The state complexity of quantum finite automata is one of the interesting topics in the study of the power of quantum finite automata. It is therefore important to develop general methods for proving state succinctness results for quantum finite automata. One such method is presented and demonstrated in this paper. In particular, we show that state succinctness results can be derived from query complexity results.




Read More

We study the query complexity of quantum learning problems in which the oracles form a group $G$ of unitary matrices. In the simplest case, one wishes to identify the oracle, and we find a description of the optimal success probability of a $t$-query quantum algorithm in terms of group characters. As an application, we show that $\Omega(n)$ queries are required to identify a random permutation in $S_n$. More generally, suppose $H$ is a fixed subgroup of the group $G$ of oracles, and given access to an oracle sampled uniformly from $G$, we want to learn which coset of $H$ the oracle belongs to. We call this problem coset identification and it generalizes a number of well-known quantum algorithms including the Bernstein-Vazirani problem, the van Dam problem and finite field polynomial interpolation. We provide character-theoretic formulas for the optimal success probability achieved by a $t$-query algorithm for this problem. One application involves the Heisenberg group and provides a family of problems depending on $n$ which require $n+1$ queries classically and only $1$ query quantumly.
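The Bernstein-Vazirani problem mentioned above is the simplest example of such a classical-quantum query gap: any classical algorithm needs $n$ queries to the oracle for $f(x) = s \cdot x \pmod 2$, while a single quantum query suffices to recover the hidden string $s$. The following is a minimal NumPy simulation sketch, not taken from the paper; the function name and bit-ordering convention are illustrative choices.

```python
# Minimal sketch: simulate the one-query Bernstein-Vazirani algorithm.
import numpy as np

def bernstein_vazirani(s):
    """Recover the hidden bit string s (a list of bits) from one phase-oracle query."""
    n = len(s)
    dim = 1 << n
    s_int = int("".join(map(str, s)), 2)       # s[0] is the most significant bit
    state = np.full(dim, 1.0 / np.sqrt(dim))   # uniform superposition over all n-bit inputs
    # One phase-oracle query: multiply the amplitude of |x> by (-1)^{s.x}.
    for x in range(dim):
        if bin(x & s_int).count("1") % 2 == 1:
            state[x] *= -1.0
    # Apply Hadamard to every qubit via the fast Walsh-Hadamard transform.
    h = 1
    while h < dim:
        for i in range(0, dim, 2 * h):
            for j in range(i, i + h):
                a, b = state[j], state[j + h]
                state[j], state[j + h] = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
        h *= 2
    # The final state is exactly |s>, so reading it off reveals s with certainty.
    return [int(b) for b in format(int(np.argmax(np.abs(state))), "0{}b".format(n))]

print(bernstein_vazirani([1, 0, 1, 1]))  # -> [1, 0, 1, 1]
```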
In 1986, Saks and Wigderson conjectured that the largest separation between deterministic and zero-error randomized query complexity for a total boolean function is given by the function $f$ on $n=2^k$ bits defined by a complete binary tree of NAND gates of depth $k$, which achieves $R_0(f) = O(D(f)^{0.7537\ldots})$. We show this is false by giving an example of a total boolean function $f$ on $n$ bits whose deterministic query complexity is $\Omega(n/\log(n))$ while its zero-error randomized query complexity is $\tilde{O}(\sqrt{n})$. We further show that the quantum query complexity of the same function is $\tilde{O}(n^{1/4})$, giving the first example of a total function with a super-quadratic gap between its quantum and deterministic query complexities. We also construct a total boolean function $g$ on $n$ variables that has zero-error randomized query complexity $\Omega(n/\log(n))$ and bounded-error randomized query complexity $R(g) = \tilde{O}(\sqrt{n})$. This is the first super-linear separation between these two complexity measures. The exact quantum query complexity of the same function is $Q_E(g) = \tilde{O}(\sqrt{n})$. These two functions show that the relations $D(f) = O(R_1(f)^2)$ and $R_0(f) = \tilde{O}(R(f)^2)$ are optimal, up to poly-logarithmic factors. Further variations of these functions give additional separations between other query complexity measures: a cubic separation between $Q$ and $R_0$, a $3/2$-power separation between $Q_E$ and $R$, and a 4th power separation between approximate degree and bounded-error randomized query complexity. All of these examples are variants of a function recently introduced by Göös, Pitassi, and Watson which they used to separate the unambiguous 1-certificate complexity from deterministic query complexity and to resolve the famous Clique versus Independent Set problem in communication complexity.
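For reference, the Saks-Wigderson function above is simply a complete binary tree of NAND gates with the $n = 2^k$ input bits at its leaves. A short illustrative sketch (not from the paper) that evaluates such a tree:

```python
# Evaluate a complete binary tree of NAND gates on n = 2^k input bits.
# Illustrative sketch only; it evaluates the function, it does not
# compute its query complexity.
def nand_tree(bits):
    assert len(bits) & (len(bits) - 1) == 0, "number of leaves must be a power of two"
    level = list(bits)
    while len(level) > 1:
        # Each internal node outputs the NAND of its two children.
        level = [1 - (level[i] & level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Depth-2 example: NAND(NAND(1,1), NAND(0,1)) = NAND(0,1) = 1.
print(nand_tree([1, 1, 0, 1]))  # -> 1
```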
We study the composition question for bounded-error randomized query complexity: Is $R(f \circ g) = \Omega(R(f) R(g))$ for all Boolean functions $f$ and $g$? We show that inserting a simple Boolean function $h$, whose query complexity is only $\Theta(\log R(g))$, in between $f$ and $g$ allows us to prove $R(f \circ h \circ g) = \Omega(R(f) R(h) R(g))$. We prove this using a new lower bound measure for randomized query complexity we call randomized sabotage complexity, $RS(f)$. Randomized sabotage complexity has several desirable properties, such as a perfect composition theorem, $RS(f \circ g) \geq RS(f) RS(g)$, and a composition theorem with randomized query complexity, $R(f \circ g) = \Omega(R(f) RS(g))$. It is also a quadratically tight lower bound for total functions and can be quadratically superior to the partition bound, the best known general lower bound for randomized query complexity. Using this technique we also show implications for lifting theorems in communication complexity. We show that a general lifting theorem for zero-error randomized protocols implies a general lifting theorem for bounded-error protocols.
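Here $f \circ g$ denotes the usual block composition: $g$ is evaluated on disjoint blocks of the input and $f$ is applied to the resulting bits. A small illustrative sketch (the helper name `compose` is not from the paper):

```python
# Block composition (f o g)(x_1, ..., x_n) = f(g(x_1), ..., g(x_n)),
# where each x_i is a disjoint block of input bits. Illustrative sketch only.
def compose(f, g, block_size):
    def composed(bits):
        assert len(bits) % block_size == 0, "input must split into whole blocks"
        inner = [g(bits[i:i + block_size]) for i in range(0, len(bits), block_size)]
        return f(inner)
    return composed

# Example: OR composed with 2-bit AND on a 6-bit input (three blocks).
or_of_ands = compose(lambda ys: int(any(ys)), lambda xs: int(all(xs)), 2)
print(or_of_ands([0, 1, 1, 1, 0, 0]))  # blocks AND to 0, 1, 0, so OR gives 1
```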
Robin Kothari (2015)
We show a nearly quadratic separation between deterministic communication complexity and the logarithm of the partition number, which is essentially optimal. This improves upon a recent power 1.5 separation of Göös, Pitassi, and Watson (FOCS 2015). In query complexity, we establish a nearly quadratic separation between deterministic (and even randomized) query complexity and subcube partition complexity, which is also essentially optimal. We also establish a nearly power 1.5 separation between quantum query complexity and subcube partition complexity, the first superlinear separation between the two measures. Lastly, we show a quadratic separation between quantum query complexity and one-sided subcube partition complexity. Our query complexity separations use the recent cheat sheet framework of Aaronson, Ben-David, and the author. Our query functions are built up in stages by alternating function composition with the cheat sheet construction. The communication complexity separation follows from lifting the query separation to communication complexity.
We prove two new results about the randomized query complexity of composed functions. First, we show that the randomized composition conjecture is false: there are families of partial Boolean functions $f$ and $g$ such that $R(f \circ g) \ll R(f) R(g)$. In fact, we show that the left hand side can be polynomially smaller than the right hand side (though in our construction, both sides are polylogarithmic in the input size of $f$). Second, we show that for all $f$ and $g$, $R(f \circ g) = \Omega(\mathop{noisyR}(f) \cdot R(g))$, where $\mathop{noisyR}(f)$ is a measure describing the cost of computing $f$ on noisy oracle inputs. We show that this composition theorem is the strongest possible of its type: for any measure $M(\cdot)$ satisfying $R(f \circ g) = \Omega(M(f) R(g))$ for all $f$ and $g$, it must hold that $\mathop{noisyR}(f) = \Omega(M(f))$ for all $f$. We also give a clean characterization of the measure $\mathop{noisyR}(f)$: it satisfies $\mathop{noisyR}(f) = \Theta(R(f \circ \mathrm{gapmaj}_n)/R(\mathrm{gapmaj}_n))$, where $n$ is the input size of $f$ and $\mathrm{gapmaj}_n$ is the $\sqrt{n}$-gap majority function on $n$ bits.
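For concreteness, one common formulation of the gap majority promise function is sketched below; the exact promise used in the paper may differ in constants, so treat this as illustrative only.

```python
# sqrt(n)-gap majority on n bits (one common formulation): output the majority
# bit, promised that the Hamming weight is at least sqrt(n) away from n/2.
# Outside the promise the partial function is undefined (None here).
import math

def gap_majority(bits):
    n = len(bits)
    weight = sum(bits)
    if abs(weight - n / 2) < math.sqrt(n):
        return None          # input violates the promise
    return 1 if weight > n / 2 else 0

print(gap_majority([1] * 13 + [0] * 3))  # n = 16, weight 13, gap 5 >= 4 -> 1
```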
