
A New Minimax Theorem for Randomized Algorithms

Posted by Shalev Ben-David
Publication date: 2020
Paper language: English





The celebrated minimax principle of Yao (1977) says that for any Boolean-valued function $f$ with finite domain, there is a distribution $\mu$ over the domain of $f$ such that computing $f$ to error $\epsilon$ against inputs from $\mu$ is just as hard as computing $f$ to error $\epsilon$ on worst-case inputs. Notably, however, the distribution $\mu$ depends on the target error level $\epsilon$: the hard distribution which is tight for bounded error might be trivial to solve to small bias, and the hard distribution which is tight for a small bias level might be far from tight for bounded error levels. In this work, we introduce a new type of minimax theorem which can provide a hard distribution $\mu$ that works for all bias levels at once. We show that this works for randomized query complexity, randomized communication complexity, some randomized circuit models, quantum query and communication complexities, approximate polynomial degree, and approximate logrank. We also prove an improved version of Impagliazzo's hardcore lemma. Our proofs rely on two innovations over the classical approach of using von Neumann's minimax theorem or linear programming duality. First, we use Sion's minimax theorem to prove a minimax theorem for ratios of bilinear functions representing the cost and score of algorithms. Second, we introduce a new way to analyze low-bias randomized algorithms by viewing them as forecasting algorithms evaluated by a proper scoring rule. The expected score of the forecasting version of a randomized algorithm appears to be a more fine-grained way of analyzing the bias of the algorithm. We show that such expected scores have many elegant mathematical properties: for example, they can be amplified linearly instead of quadratically. We anticipate forecasting algorithms will find use in future work in which a fine-grained analysis of small-bias algorithms is required.
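For context, in standard notation for randomized query complexity (notation assumed here, not taken from this page), Yao's principle reads \[ R_\epsilon(f) = \max_{\mu} D^{\mu}_{\epsilon}(f), \] where $D^{\mu}_{\epsilon}(f)$ is the least cost of a deterministic algorithm that errs with probability at most $\epsilon$ on an input drawn from $\mu$; the maximizing $\mu$ may change with $\epsilon$, which is exactly the dependence the new minimax theorem removes.

To make the forecasting viewpoint concrete, the following is a minimal, self-contained sketch (an illustration, not the paper's construction): a randomized algorithm for a Boolean function is viewed as a forecaster that reports $p = \Pr[\text{output} = 1]$, and the forecast is rewarded by a proper scoring rule. The choice of the Brier score, the helper names brier_score and expected_score, and the grid search below are all assumptions made for this example; the numerical check simply confirms the defining property of a proper scoring rule, namely that reporting the true probability maximizes the expected score.

# Minimal sketch (illustrative only, not the paper's construction):
# a randomized algorithm for a Boolean function can be viewed as a
# forecaster reporting p = Pr[it would output 1 on the given input].
# A proper scoring rule rewards forecasts so that reporting the true
# probability maximizes the expected score.

def brier_score(forecast: float, outcome: int) -> float:
    """Brier score, shifted so that higher is better (maximum 1.0)."""
    return 1.0 - (forecast - outcome) ** 2

def expected_score(forecast: float, true_prob: float) -> float:
    """Expected Brier score when the outcome is 1 with probability true_prob."""
    return (true_prob * brier_score(forecast, 1)
            + (1.0 - true_prob) * brier_score(forecast, 0))

if __name__ == "__main__":
    true_prob = 0.7  # hypothetical probability that the algorithm outputs 1
    # Propriety check: search a grid of forecasts for the best expected score.
    best_score, best_forecast = max(
        (expected_score(q / 100, true_prob), q / 100) for q in range(101)
    )
    print(f"best forecast = {best_forecast:.2f}, expected score = {best_score:.4f}")
    # Prints best forecast = 0.70: the truthful report is optimal.

Any strictly proper scoring rule (for example, the logarithmic score) has the same maximizer; the Brier score is used here only because it is bounded and easy to compute.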


Read also

We prove two new results about the randomized query complexity of composed functions. First, we show that the randomized composition conjecture is false: there are families of partial Boolean functions $f$ and $g$ such that $R(f\circ g) \ll R(f)\, R(g)$. In fact, we show that the left hand side can be polynomially smaller than the right hand side (though in our construction, both sides are polylogarithmic in the input size of $f$). Second, we show that for all $f$ and $g$, $R(f\circ g) = \Omega(\mathrm{noisyR}(f)\cdot R(g))$, where $\mathrm{noisyR}(f)$ is a measure describing the cost of computing $f$ on noisy oracle inputs. We show that this composition theorem is the strongest possible of its type: for any measure $M(\cdot)$ satisfying $R(f\circ g) = \Omega(M(f)\, R(g))$ for all $f$ and $g$, it must hold that $\mathrm{noisyR}(f) = \Omega(M(f))$ for all $f$. We also give a clean characterization of the measure $\mathrm{noisyR}(f)$: it satisfies $\mathrm{noisyR}(f) = \Theta(R(f\circ \mathrm{gapmaj}_n)/R(\mathrm{gapmaj}_n))$, where $n$ is the input size of $f$ and $\mathrm{gapmaj}_n$ is the $\sqrt{n}$-gap majority function on $n$ bits.
Let the randomized query complexity of a relation for error probability $\epsilon$ be denoted by $R_\epsilon(\cdot)$. We prove that for any relation $f \subseteq \{0,1\}^n \times \mathcal{R}$ and Boolean function $g:\{0,1\}^m \rightarrow \{0,1\}$, $R_{1/3}(f\circ g^n) = \Omega(R_{4/9}(f)\cdot R_{1/2-1/n^4}(g))$, where $f \circ g^n$ is the relation obtained by composing $f$ and $g$. We also show that $R_{1/3}\left(f \circ \left(g^{\oplus_{O(\log n)}}\right)^n\right) = \Omega(\log n \cdot R_{4/9}(f) \cdot R_{1/3}(g))$, where $g^{\oplus_{O(\log n)}}$ is the function obtained by composing the xor function on $O(\log n)$ bits and $g^t$.
Let $R_\epsilon(\cdot)$ stand for the bounded-error randomized query complexity with error $\epsilon > 0$. For any relation $f \subseteq \{0,1\}^n \times S$ and partial Boolean function $g \subseteq \{0,1\}^m \times \{0,1\}$, we show that $R_{1/3}(f \circ g^n) \in \Omega(R_{4/9}(f) \cdot \sqrt{R_{1/3}(g)})$, where $f \circ g^n \subseteq (\{0,1\}^m)^n \times S$ is the composition of $f$ and $g$. We give an example of a relation $f$ and partial Boolean function $g$ for which this lower bound is tight. We prove our composition theorem by introducing a new complexity measure, the max conflict complexity $\bar{\chi}(g)$ of a partial Boolean function $g$. We show $\bar{\chi}(g) \in \Omega(\sqrt{R_{1/3}(g)})$ for any (partial) function $g$ and $R_{1/3}(f \circ g^n) \in \Omega(R_{4/9}(f) \cdot \bar{\chi}(g))$; these two bounds imply our composition result. We further show that $\bar{\chi}(g)$ is always at least as large as the sabotage complexity of $g$, introduced by Ben-David and Kothari.
We prove a direct product theorem for the one-way entanglement-assisted quantum communication complexity of a general relation $f \subseteq \mathcal{X}\times\mathcal{Y}\times\mathcal{Z}$. For any $\varepsilon, \zeta > 0$ and any $k \geq 1$, we show that \[ \mathrm{Q}^1_{1-(1-\varepsilon)^{\Omega(\zeta^6 k/\log|\mathcal{Z}|)}}(f^k) = \Omega\left(k\left(\zeta^5\cdot\mathrm{Q}^1_{\varepsilon + 12\zeta}(f) - \log\log(1/\zeta)\right)\right), \] where $\mathrm{Q}^1_{\varepsilon}(f)$ represents the one-way entanglement-assisted quantum communication complexity of $f$ with worst-case error $\varepsilon$ and $f^k$ denotes $k$ parallel instances of $f$. As far as we are aware, this is the first direct product theorem for quantum communication. Our techniques are inspired by the parallel repetition theorems for the entangled value of two-player non-local games, under product distributions due to Jain, Pereszlényi and Yao, and under anchored distributions due to Bavarian, Vidick and Yuen, as well as message compression for quantum protocols due to Jain, Radhakrishnan and Sen. Our techniques also work for entangled non-local games which have input distributions anchored on any one side. In particular, we show that for any game $G = (q, \mathcal{X}\times\mathcal{Y}, \mathcal{A}\times\mathcal{B}, \mathsf{V})$, where $q$ is a distribution on $\mathcal{X}\times\mathcal{Y}$ anchored on any one side with anchoring probability $\zeta$, we have \[ \omega^*(G^k) = \left(1 - (1-\omega^*(G))^5\right)^{\Omega\left(\frac{\zeta^2 k}{\log(|\mathcal{A}|\cdot|\mathcal{B}|)}\right)}, \] where $\omega^*(G)$ represents the entangled value of the game $G$. This is a generalization of the result of Bavarian, Vidick and Yuen, who proved a parallel repetition theorem for games anchored on both sides, and potentially a simplification of their proof.
We study the composition question for bounded-error randomized query complexity: is $R(f \circ g) = \Omega(R(f)\, R(g))$ for all Boolean functions $f$ and $g$? We show that inserting a simple Boolean function $h$, whose query complexity is only $\Theta(\log R(g))$, in between $f$ and $g$ allows us to prove $R(f \circ h \circ g) = \Omega(R(f)\, R(h)\, R(g))$. We prove this using a new lower bound measure for randomized query complexity we call randomized sabotage complexity, $\mathrm{RS}(f)$. Randomized sabotage complexity has several desirable properties, such as a perfect composition theorem, $\mathrm{RS}(f \circ g) \geq \mathrm{RS}(f)\, \mathrm{RS}(g)$, and a composition theorem with randomized query complexity, $R(f \circ g) = \Omega(R(f)\, \mathrm{RS}(g))$. It is also a quadratically tight lower bound for total functions and can be quadratically superior to the partition bound, the best known general lower bound for randomized query complexity. Using this technique we also show implications for lifting theorems in communication complexity. We show that a general lifting theorem for zero-error randomized protocols implies a general lifting theorem for bounded-error protocols.