In analogy with the regularity lemma of Szemerédi, regularity lemmas for polynomials shown by Green and Tao (Contrib. Discrete Math. 2009) and by Kaufman and Lovett (FOCS 2008) modify a given collection of polynomials $\mathcal{F} = \{P_1, \ldots, P_m\}$ to a new collection $\mathcal{F}'$ so that the polynomials in $\mathcal{F}'$ are pseudorandom. These lemmas have various applications, such as (special cases of) Reed-Muller testing and worst-case to average-case reductions for polynomials. However, the transformation from $\mathcal{F}$ to $\mathcal{F}'$ is not algorithmic for either regularity lemma. We define new notions of regularity for polynomials, analogous to the above, but which allow for an efficient algorithm to compute the pseudorandom collection $\mathcal{F}'$. In particular, when the field is of high characteristic, we can refine $\mathcal{F}$ into $\mathcal{F}'$ in polynomial time, so that every nonzero linear combination of polynomials in $\mathcal{F}'$ has desirably small Gowers norm. Using the algorithmic regularity lemmas, we show that if a polynomial $P$ of degree $d$ is within (normalized) Hamming distance $1 - 1/|\mathbb{F}| - \epsilon$ of some unknown polynomial of degree $k$ over a prime field $\mathbb{F}$ (for $k < d < |\mathbb{F}|$), then there is an efficient algorithm for finding a degree-$k$ polynomial $Q$ which is within distance $1 - 1/|\mathbb{F}| - \eta$ of $P$, for some $\eta$ depending on $\epsilon$. This can be thought of as decoding the Reed-Muller code of order $k$ beyond the list-decoding radius (finding one close codeword), when the received word $P$ itself is a polynomial of degree $d$ (with $k < d < |\mathbb{F}|$). We also obtain an algorithmic version of the worst-case to average-case reductions by Kaufman and Lovett. They show that if a polynomial of degree $d$ can be weakly approximated by a polynomial of lower degree, then it can be computed exactly using a collection of polynomials of degree at most $d-1$. We give an efficient (randomized) algorithm to find this collection.
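As a hedged restatement of the decoding guarantee above (the notation $\delta$ for normalized Hamming distance and $\eta(\epsilon)$ for the unspecified dependence is introduced here, not in the abstract):
\[
  \deg P = d,\quad k < d < |\mathbb{F}|,\quad
  \min_{\deg R \le k} \delta(P, R) \;\le\; 1 - \tfrac{1}{|\mathbb{F}|} - \epsilon
  \;\Longrightarrow\;
  \text{the algorithm outputs } Q,\ \deg Q \le k,\ \text{with }
  \delta(P, Q) \;\le\; 1 - \tfrac{1}{|\mathbb{F}|} - \eta(\epsilon).
\]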
In this paper I present several novel, efficient algorithmic techniques for solving some multidimensional geometric data management and analysis problems. The techniques are based on several data structures from computational geometry (e.g. segment
Given a clique-width $k$-expression of a graph $G$, we provide $2^{O(k)} \cdot n$ time algorithms for connectivity constraints on locally checkable properties such as Node-Weighted Steiner Tree, Connected Dominating Set, or Connected Vertex Cover. We a
We identify a new notion of pseudorandomness for randomness sources, which we call the average bias. Given a distribution $Z$ over $\{0,1\}^n$, its average bias is: $b_{\text{av}}(Z) = 2^{-n} \sum_{c \in \{0,1\}^n} \left|\mathbb{E}_{z \sim Z}(-1)^{\langle c, z\rangle}\right|$.
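As a small illustrative sketch (not from the paper), the average bias can be evaluated directly from the definition above by brute force over all characters $c$; the toy distribution and helper name below are my own choices for illustration.

from itertools import product

def average_bias(pmf, n):
    """b_av(Z) = 2^{-n} * sum over c of |E_{z~Z} (-1)^{<c,z>}|,
    where pmf maps n-bit tuples z to their probabilities."""
    total = 0.0
    for c in product((0, 1), repeat=n):
        # Bias of Z in direction c: E_{z~Z} (-1)^{<c,z>}.
        bias = sum(p * (-1) ** sum(ci * zi for ci, zi in zip(c, z))
                   for z, p in pmf.items())
        total += abs(bias)
    return total / 2 ** n

# Toy example: the uniform distribution on {0,1}^n has average bias 2^{-n},
# since only c = 0 contributes a nonzero bias (equal to 1).
n = 2
uniform = {z: 1 / 2 ** n for z in product((0, 1), repeat=n)}
print(average_bias(uniform, n))  # prints 0.25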
This paper was removed due to an error in the proof (Claim 4.12 as stated is not true). The authors would like to thank Ilya Volkovich for pointing out a counterexample to this paper's main result in positive characteristic: If $F$ is a field with pri
Many papers in the field of integer linear programming (ILP, for short) are devoted to problems of the type $\max\{c^\top x \colon A x = b,\ x \in \mathbb{Z}^n_{\geq 0}\}$, where all the entries of $A, b, c$ are integer, parameterized by the number of rows of