
Frequency of Correctness versus Average-Case Polynomial Time and Generalized Juntas

Publication date: 2008
Language: English





We prove that every distributional problem solvable in polynomial time on the average with respect to the uniform distribution has a frequently self-knowingly correct polynomial-time algorithm. We also study some features of probability weight of correctness with respect to generalizations of Procaccia and Rosenschein's junta distributions [PR07b].
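The flavor of this result can be seen in a small sketch. If a solver's expected running time under the uniform distribution is bounded by a polynomial $p(n)$, Markov's inequality implies it exceeds a budget of $n \cdot p(n)$ steps on at most a $1/n$ fraction of length-$n$ inputs, so clocking the solver and self-reporting whether the clock sufficed yields an algorithm that is correct, and knows it is correct, on most inputs. Below is a minimal Python sketch of this clocking idea; the `avg_solver` interface is a hypothetical assumption for illustration, not necessarily the paper's construction (Levin-style average-case time is defined more carefully than a plain expectation).

```python
# Sketch: clock an average-case solver and self-report correctness.
# `avg_solver(x, budget)` is a hypothetical helper that simulates at
# most `budget` steps of the solver and returns (answer, halted).

def clocked(x, avg_solver, budget):
    answer, halted = avg_solver(x, budget)
    if halted:
        # The solver finished within the budget, so `answer` is exact
        # and the algorithm *knows* it is correct on this input.
        return answer, "definitely correct"
    # Budget exhausted: return a default guess, honestly flagged.
    return False, "maybe"
```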



Related research

We investigate two hard problems from voting theory: the optimal weighted lobbying problem and the winner problem for Dodgson elections. Regarding the former, Christian et al. [CFRS06] showed that optimal lobbying is intractable in the sense of parameterized complexity. We provide an efficient greedy algorithm that achieves a logarithmic approximation ratio for this problem and even for a more general variant, optimal weighted lobbying. We prove that essentially no better approximation ratio than ours can be proven for this greedy algorithm. The problem of determining Dodgson winners is known to be complete for parallel access to NP [HHR97]. Homan and Hemaspaandra [HH06] proposed an efficient greedy heuristic for finding Dodgson winners with a guaranteed frequency of success, and their heuristic is a "frequently self-knowingly correct" algorithm. We prove that every distributional problem solvable in polynomial time on the average with respect to the uniform distribution has a frequently self-knowingly correct polynomial-time algorithm. Furthermore, we study some features of probability weight of correctness with respect to Procaccia and Rosenschein's junta distributions [PR07].
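Logarithmic approximation guarantees of this kind typically rest on a greedy cost-effectiveness rule, as in the classical greedy algorithm for weighted set cover. As a hedged illustration of that paradigm (the paper's lobbying algorithm, which works on voter/referendum price structures, is not reproduced here), this is the standard weighted-set-cover greedy in Python:

```python
def greedy_weighted_cover(universe, sets, cost):
    """Greedy weighted set cover. sets: name -> frozenset of covered
    elements; cost: name -> positive price. Assumes a cover exists."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        candidates = [s for s in sets if sets[s] & uncovered]
        if not candidates:
            raise ValueError("instance has no cover")
        # Greedy rule: best price per newly covered element.
        best = min(candidates,
                   key=lambda s: cost[s] / len(sets[s] & uncovered))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```

Chvátal's analysis bounds this rule's cost by roughly $\ln n$ times the optimum, the kind of logarithmic ratio referred to above.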
We identify a new notion of pseudorandomness for randomness sources, which we call the average bias. Given a distribution $Z$ over $\{0,1\}^n$, its average bias is: $b_{\text{av}}(Z) = 2^{-n} \sum_{c \in \{0,1\}^n} |\mathbb{E}_{z \sim Z}(-1)^{\langle c, z\rangle}|$. A source with average bias at most $2^{-k}$ has min-entropy at least $k$, and so low average bias is a stronger condition than high min-entropy. We observe that the inner product function is an extractor for any source with average bias less than $2^{-n/2}$. The notion of average bias especially makes sense for polynomial sources, i.e., distributions sampled by low-degree $n$-variate polynomials over $\mathbb{F}_2$. For the well-studied case of affine sources, it is easy to see that min-entropy $k$ is exactly equivalent to average bias of $2^{-k}$. We show that for quadratic sources, min-entropy $k$ implies that the average bias is at most $2^{-\Omega(\sqrt{k})}$. We use this relation to design dispersers for separable quadratic sources with a min-entropy guarantee.
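Since $b_{\text{av}}(Z)$ is a finite sum, it can be computed directly from the definition for small $n$. The following Python sketch does the brute-force computation for a distribution given as an explicit probability table (an illustrative helper, not from the paper):

```python
from itertools import product

def average_bias(Z, n):
    """Brute-force b_av(Z) for Z: dict mapping n-bit tuples to
    probabilities. Exponential in n; for illustration only."""
    total = 0.0
    for c in product((0, 1), repeat=n):
        # E_{z~Z} (-1)^{<c,z>}, with <c,z> the inner product over F_2
        e = sum(p * (-1) ** sum(ci & zi for ci, zi in zip(c, z))
                for z, p in Z.items())
        total += abs(e)
    return total / 2 ** n
```

As a sanity check, the uniform distribution over a dimension-$k$ affine subspace has exactly $2^{n-k}$ characters $c$ with $|\mathbb{E}(-1)^{\langle c, z\rangle}| = 1$ and the rest zero, so this returns $2^{-k}$, matching the affine-source equivalence stated above.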
Hugo Gimbert, 2017
Recently, Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li, and Frank Stephan proposed a quasi-polynomial-time algorithm for parity games. This paper gives a short proof of the correctness of their algorithm.
In this work, we show the first worst-case to average-case reduction for the classical $k$-SUM problem. A $k$-SUM instance is a collection of $m$ integers, and the goal of the $k$-SUM problem is to find a subset of $k$ elements that sums to $0$. In the average-case version, the $m$ elements are chosen uniformly at random from some interval $[-u,u]$. We consider the total setting where $m$ is sufficiently large (with respect to $u$ and $k$), so that we are guaranteed (with high probability) that solutions must exist. Much of the appeal of $k$-SUM, in particular connections to problems in computational geometry, extends to the total setting. The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a run-time of $u^{O(1/\log k)}$. This beats the known (conditional) lower bounds for worst-case $k$-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound, by showing a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case $k$-SUM on $m$ elements in time $u^{o(1/\log k)}$ will give a super-polynomial improvement in the complexity of algorithms for lattice problems.
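For concreteness, the average-case total setting described above is easy to instantiate: sample $m$ integers uniformly from $[-u,u]$ and search for $k$ of them summing to $0$. The brute-force reference implementation below (hypothetical helper names, exponential-in-$k$ running time, nothing like the $u^{O(1/\log k)}$ algorithms discussed) just pins down the problem statement:

```python
import random
from itertools import combinations

def sample_instance(m, u):
    """Average-case total k-SUM input: m uniform draws from [-u, u]."""
    return [random.randint(-u, u) for _ in range(m)]

def brute_force_ksum(xs, k):
    """Return indices of k elements summing to 0, or None."""
    for idxs in combinations(range(len(xs)), k):
        if sum(xs[i] for i in idxs) == 0:
            return idxs
    return None

# With m large relative to u and k, a solution exists w.h.p.
print(brute_force_ksum(sample_instance(m=100, u=50), k=3))
```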
Games on graphs provide a natural and powerful model for reactive systems. In this paper, we consider generalized reachability objectives, defined as conjunctions of reachability objectives. We first prove that deciding the winner in such games is PSPACE-complete, although it is fixed-parameter tractable with the number of reachability objectives as parameter. Moreover, we consider the memory requirements for both players and give matching upper and lower bounds on the size of winning strategies. In order to allow more efficient algorithms, we consider subclasses of generalized reachability games. We show that bounding the size of the reachability sets gives two natural subclasses where deciding the winner can be done efficiently.
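A single reachability objective is solved by the classical attractor computation, the basic building block that algorithms for conjunctions of reachability objectives must iterate and combine. The sketch below (assumed graph encoding, not the paper's algorithm) computes the set of vertices from which player 0 can force a visit to the target:

```python
def attractor(V0, V1, edges, target):
    """Player-0 attractor of `target` in a two-player game graph.
    V0/V1: vertex sets of players 0 and 1; edges: vertex -> list of
    successors (every vertex is assumed to have a successor)."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in (V0 | V1) - attr:
            succs = edges[v]
            # Player 0 needs one successor in attr; player 1 must be
            # unable to avoid attr, i.e. all successors are in attr.
            if (v in V0 and any(s in attr for s in succs)) or \
               (v in V1 and all(s in attr for s in succs)):
                attr.add(v)
                changed = True
    return attr
```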
