
Average Bias and Polynomial Sources

Posted by Arnab Bhattacharyya
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





We identify a new notion of pseudorandomness for randomness sources, which we call the average bias. Given a distribution $Z$ over $\{0,1\}^n$, its average bias is $b_{\text{av}}(Z) = 2^{-n} \sum_{c \in \{0,1\}^n} |\mathbb{E}_{z \sim Z}(-1)^{\langle c, z\rangle}|$. A source with average bias at most $2^{-k}$ has min-entropy at least $k$, and so low average bias is a stronger condition than high min-entropy. We observe that the inner product function is an extractor for any source with average bias less than $2^{-n/2}$. The notion of average bias especially makes sense for polynomial sources, i.e., distributions sampled by low-degree $n$-variate polynomials over $\mathbb{F}_2$. For the well-studied case of affine sources, it is easy to see that min-entropy $k$ is exactly equivalent to average bias of $2^{-k}$. We show that for quadratic sources, min-entropy $k$ implies that the average bias is at most $2^{-\Omega(\sqrt{k})}$. We use this relation to design dispersers for separable quadratic sources with a min-entropy guarantee.
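To make the definition concrete, here is a minimal sketch (my own illustration, not code from the paper) that computes the average bias of an explicitly given distribution via a Walsh-Hadamard transform, since $\mathbb{E}_{z \sim Z}(-1)^{\langle c, z\rangle}$ is just the Fourier coefficient of the probability mass function at $c$; the affine-source example confirms that min-entropy $k$ corresponds to average bias $2^{-k}$.

```python
# Minimal sketch (not from the paper): compute the average bias
#   b_av(Z) = 2^{-n} * sum_c |E_{z~Z} (-1)^{<c,z>}|
# of a distribution given by its probability vector p over {0,1}^n, via a fast
# Walsh-Hadamard transform, and compare it with the min-entropy -log2(max_z p[z]).
import numpy as np

def walsh_hadamard(p):
    """Return the vector (E_{z~p} (-1)^{<c,z>})_{c in {0,1}^n} for a pmf p of length 2^n."""
    h = np.asarray(p, dtype=float).copy()
    step = 1
    while step < h.size:
        for i in range(0, h.size, 2 * step):
            for j in range(i, i + step):
                a, b = h[j], h[j + step]
                h[j], h[j + step] = a + b, a - b
        step *= 2
    return h

def average_bias(p):
    return np.mean(np.abs(walsh_hadamard(p)))

def min_entropy(p):
    return -np.log2(np.max(p))

# Affine-source example: the uniform distribution over a k-dimensional linear
# subspace of {0,1}^n has min-entropy k and average bias exactly 2^{-k}.
n, k = 6, 3
p = np.zeros(2 ** n)
for w in range(2 ** k):
    p[w << (n - k)] = 2.0 ** -k   # support: the low n-k bits are zero
print(min_entropy(p), average_bias(p), 2.0 ** -k)   # -> 3.0 0.125 0.125
```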


Read also

We prove that every distributional problem solvable in polynomial time on the average with respect to the uniform distribution has a frequently self-knowingly correct polynomial-time algorithm. We also study some features of the probability weight of correctness with respect to generalizations of Procaccia and Rosenschein's junta distributions [PR07b].
In analogy with the regularity lemma of Szemeredi, regularity lemmas for polynomials shown by Green and Tao (Contrib. Discrete Math. 2009) and by Kaufman and Lovett (FOCS 2008) modify a given collection of polynomials $\mathcal{F} = \{P_1,\ldots,P_m\}$ to a new collection $\mathcal{F}'$ so that the polynomials in $\mathcal{F}'$ are pseudorandom. These lemmas have various applications, such as (special cases of) Reed-Muller testing and worst-case to average-case reductions for polynomials. However, the transformation from $\mathcal{F}$ to $\mathcal{F}'$ is not algorithmic for either regularity lemma. We define new notions of regularity for polynomials, which are analogous to the above, but which allow for an efficient algorithm to compute the pseudorandom collection $\mathcal{F}'$. In particular, when the field is of high characteristic, in polynomial time, we can refine $\mathcal{F}$ into $\mathcal{F}'$ where every nonzero linear combination of polynomials in $\mathcal{F}'$ has desirably small Gowers norm. Using the algorithmic regularity lemmas, we show that if a polynomial $P$ of degree $d$ is within (normalized) Hamming distance $1-1/|\mathbb{F}|-\epsilon$ of some unknown polynomial of degree $k$ over a prime field $\mathbb{F}$ (for $k < d < |\mathbb{F}|$), then there is an efficient algorithm for finding a degree-$k$ polynomial $Q$, which is within distance $1-1/|\mathbb{F}|-\eta$ of $P$, for some $\eta$ depending on $\epsilon$. This can be thought of as decoding the Reed-Muller code of order $k$ beyond the list decoding radius (finding one close codeword), when the received word $P$ itself is a polynomial of degree $d$ (with $k < d < |\mathbb{F}|$). We also obtain an algorithmic version of the worst-case to average-case reductions by Kaufman and Lovett. They show that if a polynomial of degree $d$ can be weakly approximated by a polynomial of lower degree, then it can be computed exactly using a collection of polynomials of degree at most $d-1$. We give an efficient (randomized) algorithm to find this collection.
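To make "small Gowers norm" concrete, the following brute-force sketch (my own illustration over $\mathbb{F}_2$, not the paper's algorithm) evaluates the Gowers $U^d$ norm of $(-1)^{P(x)}$ directly from its definition; a degree-2 polynomial has $U^3$ norm exactly 1 but $U^2$ norm only $1/2$, illustrating how a low-degree polynomial can still look pseudorandom to lower-order correlation tests.

```python
# Minimal sketch (my own illustration, not the paper's algorithm): brute-force
# Gowers U^d norm of f(x) = (-1)^{P(x)} for a polynomial P over F_2^n.
import itertools
import numpy as np

def gowers_norm(P, n, d):
    """||(-1)^P||_{U^d} by exhaustive averaging (only feasible for small n and d)."""
    points = list(itertools.product((0, 1), repeat=n))
    f = np.array([(-1) ** P(x) for x in points], dtype=float)
    idx = lambda x: int("".join(map(str, x)), 2)           # position of x in `points`
    add = lambda x, y: tuple(a ^ b for a, b in zip(x, y))  # addition in F_2^n

    total = 0.0
    for x in points:
        for hs in itertools.product(points, repeat=d):     # directions h_1, ..., h_d
            prod = 1.0
            for S in itertools.product((0, 1), repeat=d):  # the 2^d corners of the cube
                y = x
                for i, bit in enumerate(S):
                    if bit:
                        y = add(y, hs[i])
                prod *= f[idx(y)]
            total += prod
    return (total / 2 ** (n * (d + 1))) ** (1.0 / 2 ** d)

# A degree-2 polynomial on F_2^4: its U^3 norm is exactly 1 (degree < 3), while
# its U^2 norm is only 1/2 (it has no large correlation with any linear phase).
P = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
print(gowers_norm(P, 4, 2), gowers_norm(P, 4, 3))   # -> 0.5 1.0
```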
Many papers in the field of integer linear programming (ILP, for short) are devoted to problems of the type $\max\{c^\top x \colon A x = b,\, x \in \mathbb{Z}^n_{\geq 0}\}$, where all the entries of $A, b, c$ are integer, parameterized by the number of rows of $A$ and $\|A\|_{\max}$. This class of problems is known under the name of ILP problems in the standard form, adding the word bounded if $x \leq u$, for some integer vector $u$. Recently, many new sparsity, proximity, and complexity results were obtained for bounded and unbounded ILP problems in the standard form. In this paper, we consider ILP problems in the canonical form $$\max\{c^\top x \colon b_l \leq A x \leq b_r,\, x \in \mathbb{Z}^n\},$$ where $b_l$ and $b_r$ are integer vectors. We assume that the integer matrix $A$ has rank $n$, $(n + m)$ rows, $n$ columns, and parameterize the problem by $m$ and $\Delta(A)$, where $\Delta(A)$ is the maximum of the $n \times n$ sub-determinants of $A$, taken in absolute value. We show that any ILP problem in the standard form can be polynomially reduced to some ILP problem in the canonical form, preserving $m$ and $\Delta(A)$, but the reverse reduction is not always possible. More precisely, we define the class of generalized ILP problems in the standard form, which includes an additional group constraint, and prove its equivalence to ILP problems in the canonical form. We generalize known sparsity, proximity, and complexity bounds for ILP problems in the canonical form. Additionally, we sometimes strengthen previously known results for ILP problems in the canonical form, and sometimes give shorter proofs. Finally, we consider the special cases of $m \in \{0,1\}$. In this way, we give specialised sparsity, proximity, and complexity bounds for problems on simplices, Knapsack problems, and Subset-Sum problems.
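As a small illustration of the canonical form (a sketch with my own toy data, not from the paper), the instance below with $n = 2$ and $m = 1$ is solved with SciPy's mixed-integer programming interface; the two-sided constraint $b_l \leq Ax \leq b_r$ maps directly onto a single LinearConstraint.

```python
# Minimal sketch (my own illustration, not from the paper): a tiny ILP in the
# canonical form  max{ c^T x : b_l <= A x <= b_r, x in Z^n }  with n = 2, m = 1.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([1.0, 2.0])
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])              # rank n = 2, with n + m = 3 rows
b_l = np.array([0.0, 0.0, 1.0])
b_r = np.array([3.0, 3.0, 4.0])

res = milp(
    c=-c,                               # milp minimizes, so negate c for a maximum
    constraints=LinearConstraint(A, b_l, b_r),
    integrality=np.ones(2),             # both variables are integer
    bounds=Bounds(-np.inf, np.inf),     # x ranges over all of Z^n (default is x >= 0)
)
print(res.x, -res.fun)                  # -> [1. 3.] 7.0
```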
The first-fit coloring is a heuristic that assigns to each vertex, arriving in a specified order $\sigma$, the smallest available color. The problem Grundy Coloring asks how many colors are needed for the most adversarial vertex ordering $\sigma$, i.e., the maximum number of colors that the first-fit coloring requires over all possible vertex orderings. Since its inception by Grundy in 1939, Grundy Coloring has been examined for its structural and algorithmic aspects. A brute-force $f(k)n^{2^{k-1}}$-time algorithm for Grundy Coloring on general graphs is not difficult to obtain, where $k$ is the number of colors required by the most adversarial vertex ordering. It was asked several times whether the dependency on $k$ in the exponent of $n$ can be avoided or reduced, and its answer seemed elusive until now. We prove that Grundy Coloring is W[1]-hard and the brute-force algorithm is essentially optimal under the Exponential Time Hypothesis, thus settling this question in the negative. The key ingredient in our W[1]-hardness proof is to use so-called half-graphs as a building block to transmit a color from one vertex to another. Leveraging the half-graphs, we also prove that b-Chromatic Core is W[1]-hard, whose parameterized complexity was posed as an open question by Panolan et al. [JCSS 17]. A natural follow-up question is how the parameterized complexity changes in the absence of (large) half-graphs. We establish fixed-parameter tractability on $K_{t,t}$-free graphs for b-Chromatic Core and Partial Grundy Coloring, making a step toward answering this question. The key combinatorial lemma underlying the tractability result might be of independent interest.
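To illustrate the objects involved (a minimal sketch of the definitions, not of the paper's algorithms), the snippet below implements first-fit coloring for a fixed ordering and computes the Grundy number by brute force over all orderings; already the path on four vertices has chromatic number 2 but Grundy number 3.

```python
# Minimal sketch of the definitions (my own illustration, not the paper's
# algorithms): first-fit coloring for a fixed vertex ordering, and the Grundy
# number as a brute-force maximum over all orderings.  The f(k) * n^{2^{k-1}}
# algorithm mentioned in the abstract is far better than this factorial enumeration.
from itertools import permutations

def first_fit(adj, order):
    """Each vertex, in the given order, gets the smallest color unused by its already-colored neighbors."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return max(color.values()) + 1      # number of colors used

def grundy_number(adj):
    """Maximum number of colors first-fit can be forced to use (exponential brute force)."""
    return max(first_fit(adj, order) for order in permutations(adj))

# The path on 4 vertices has chromatic number 2 but Grundy number 3:
# ordering 3, 0, 2, 1 forces first-fit to spend a third color on vertex 1.
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(grundy_number(p4))   # -> 3
```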
Given a clique-width $k$-expression of a graph $G$, we provide $2^{O(k)}\cdot n$ time algorithms for connectivity constraints on locally checkable properties such as Node-Weighted Steiner Tree, Connected Dominating Set, or Connected Vertex Cover. We also propose a $2^{O(k)}\cdot n$ time algorithm for Feedback Vertex Set. The best previously known running times for all the considered cases were either $2^{O(k\cdot \log(k))}\cdot n^{O(1)}$ or worse.