
Moment-Matching Polynomials

Published by: Raghu Meka
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





We give a new framework for proving the existence of low-degree polynomial approximators for Boolean functions with respect to broad classes of non-product distributions. Our proofs use techniques related to the classical moment problem and deviate significantly from known Fourier-based methods, which require the underlying distribution to have some product structure. Our main application is the first polynomial-time algorithm for agnostically learning any function of a constant number of halfspaces with respect to any log-concave distribution (for any constant accuracy parameter). This result was not known even for the case of learning the intersection of two halfspaces without noise. Additionally, we show that in the smoothed-analysis setting, the above results hold with respect to distributions that have sub-exponential tails, a property satisfied by many natural and well-studied distributions in machine learning. Given that our algorithms can be implemented using Support Vector Machines (SVMs) with a polynomial kernel, these results give a rigorous theoretical explanation as to why many kernel methods work so well in practice.
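The abstract notes that the learner can be implemented as an SVM with a polynomial kernel. The sketch below illustrates that observation on synthetic data; the Gaussian marginal, the target (an intersection of two halfspaces), the 5% label noise, and the kernel degree are illustrative choices, not parameters taken from the paper.

```python
# A minimal sketch: agnostically learning an intersection of two halfspaces
# under a log-concave (here, Gaussian) distribution with a polynomial-kernel
# SVM. All specific parameter choices below are assumptions for illustration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 4000, 5
X = rng.normal(size=(n, d))                     # Gaussian: a log-concave distribution
w1, w2 = rng.normal(size=d), rng.normal(size=d)
y = ((X @ w1 > 0) & (X @ w2 > 0)).astype(int)   # intersection of two halfspaces
flip = rng.random(n) < 0.05                     # agnostic setting: 5% label noise
y[flip] ^= 1

clf = SVC(kernel="poly", degree=4, coef0=1.0)   # low-degree polynomial kernel
clf.fit(X[:3000], y[:3000])
print("held-out accuracy:", clf.score(X[3000:], y[3000:]))
```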


Read also

We study a combinatorial problem called Minimum Maximal Matching, where we are asked to find in a general graph the smallest matching that cannot be extended. We show that this problem is hard to approximate within any constant smaller than 2, assuming the Unique Games Conjecture. As a corollary we show that Minimum Maximal Matching in bipartite graphs is hard to approximate within any constant smaller than $\frac{4}{3}$, under the same assumption. With a stronger variant of the Unique Games Conjecture, namely the Small Set Expansion Hypothesis, we are able to improve the hardness result up to a factor of $\frac{3}{2}$.
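The factor-2 threshold pairs with a classical upper bound not stated in the abstract: since every edge of a graph touches some edge of a minimum maximal matching, and each such edge has two endpoints, any maximal matching is at most twice the minimum one. The greedy sketch below (input format is an illustrative choice) is therefore a 2-approximation, which the UGC-hardness above suggests is essentially optimal.

```python
# Greedy maximal matching: scan edges, keep any edge whose endpoints are both
# still free. The result cannot be extended, and is a 2-approximation to the
# minimum maximal matching by the double-counting argument sketched above.
def greedy_maximal_matching(edges):
    """Return a matching that no remaining edge can extend."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Example: on the path 0-1-2-3 the minimum maximal matching is {(1,2)} (size 1),
# while greedy in this edge order returns 2 edges, exactly the factor-2 gap.
print(greedy_maximal_matching([(0, 1), (1, 2), (2, 3)]))  # [(0, 1), (2, 3)]
```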
Daniel Kane, Raghu Meka (2012)
We give improved pseudorandom generators (PRGs) for Lipschitz functions of low-degree polynomials over the hypercube. These are functions of the form $\psi(P(x))$, where $P$ is a low-degree polynomial and $\psi$ is a function with small Lipschitz constant. PRGs for smooth functions of low-degree polynomials have received a lot of attention recently and play an important role in constructing PRGs for the natural class of polynomial threshold functions. In spite of the recent progress, no nontrivial PRGs were known for fooling Lipschitz functions of degree $O(\log n)$ polynomials even for constant error rate. In this work, we give the first such generator, obtaining a seed length of $(\log n)\cdot\tilde{O}(d^2/\epsilon^2)$ for fooling degree-$d$ polynomials with error $\epsilon$. Previous generators had an exponential dependence on the degree. We use our PRG to get better integrality-gap instances for sparsest cut, a fundamental problem in graph theory with many applications in graph optimization. We give an instance of uniform sparsest cut for which a powerful semi-definite relaxation (SDP), first introduced by Goemans and Linial and studied in the seminal work of Arora, Rao and Vazirani, has an integrality gap of $\exp(\Omega((\log\log n)^{1/2}))$. Understanding the performance of the Goemans-Linial SDP for uniform sparsest cut is an important open problem in approximation algorithms and metric embeddings, and our work gives a near-exponential improvement over previous lower bounds, which achieved a gap of $\Omega(\log\log n)$.
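To make the function class concrete, the toy below evaluates a Lipschitz function of a degree-2 polynomial and checks the fooling criterion: the expectation under a candidate sample set should track the exact expectation over the hypercube. A plain random sample stands in for the generator's output here; the paper's actual PRG and its seed-length analysis are far more sophisticated and are not reproduced.

```python
# A toy fooling check for the class psi(P(x)) on {-1,1}^n: psi is a 1-Lipschitz
# clipped threshold and P is a random degree-2 multilinear polynomial. All
# sizes and the sample set are illustrative assumptions.
import itertools, random

n = 10
random.seed(1)
coeffs = {(i, j): random.uniform(-1, 1) for i in range(n) for j in range(i + 1, n)}

def P(x):                      # a degree-2 multilinear polynomial
    return sum(c * x[i] * x[j] for (i, j), c in coeffs.items())

def psi(t):                    # 1-Lipschitz "soft threshold"
    return max(-1.0, min(1.0, t))

# Exact expectation by enumerating all 2^n points, vs. a small sample set.
exact = sum(psi(P(x)) for x in itertools.product((-1, 1), repeat=n)) / 2**n
sample = [tuple(random.choice((-1, 1)) for _ in range(n)) for _ in range(2000)]
approx = sum(psi(P(x)) for x in sample) / len(sample)
print(f"exact {exact:.4f} vs sampled {approx:.4f} (error {abs(exact - approx):.4f})")
```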
Yihong Wu, Pengkun Yang (2015)
We consider the problem of estimating the support size of a discrete distribution whose minimum non-zero mass is at least $\frac{1}{k}$. Under the independent sampling model, we show that the sample complexity, i.e., the minimal sample size to achieve an additive error of $\epsilon k$ with probability at least 0.1, is within universal constant factors of $\frac{k}{\log k}\log^2\frac{1}{\epsilon}$, which improves the state-of-the-art result of $\frac{k}{\epsilon^2 \log k}$ in [VV13]. A similar characterization of the minimax risk is also obtained. Our procedure is a linear estimator based on the Chebyshev polynomial and its approximation-theoretic properties, which can be evaluated in $O(n+\log^2 k)$ time and attains the sample complexity within a factor of six asymptotically. The superiority of the proposed estimator in terms of accuracy, computational efficiency and scalability is demonstrated on a variety of synthetic and real datasets.
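For context, the sketch below shows the naive plug-in baseline (count distinct observed symbols), which is biased downward whenever part of the support is unseen; the paper's contribution is a linear estimator with Chebyshev-approximation coefficients that corrects this bias at far smaller sample sizes, and that estimator is not reproduced here. The uniform distribution on $k$ symbols is an illustrative choice.

```python
# The plug-in estimator for support size: count distinct symbols in the sample.
# It never over-estimates and badly under-reports when n is small relative to
# the true support size k, motivating the paper's bias-corrected estimator.
import numpy as np

rng = np.random.default_rng(0)
k = 10_000                          # true support size; minimum mass is 1/k
for n in (1_000, 5_000, 20_000, 100_000):
    sample = rng.integers(0, k, size=n)
    plug_in = len(np.unique(sample))
    print(f"n={n:>7}: plug-in estimate {plug_in:>6} (truth {k})")
```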
Tomer Kotek (2011)
This paper deals with the partition function of the Ising model from statistical mechanics, which is used to study phase transitions in physical systems. A special case of interest is that of the Ising model with constant energies and external field. One may consider such an Ising system as a simple graph together with vertex and edge weights. When these weights are considered indeterminates, the partition function for the constant case is a trivariate polynomial $Z(G;x,y,z)$. This polynomial was studied with respect to its approximability by L. A. Goldberg, M. Jerrum and M. Paterson in 2003. $Z(G;x,y,z)$ generalizes a bivariate polynomial $Z(G;t,y)$, which was studied by D. Andrén and K. Markström in 2009. We consider the complexity of $Z(G;t,y)$ and $Z(G;x,y,z)$ in comparison to that of the Tutte polynomial, which is well known to be closely related to the Potts model in the absence of an external field. We show that $Z(G;x,y,z)$ is #P-hard to evaluate at all points in $\mathbb{Q}^3$, except those in an exception set of low dimension, even when restricted to simple graphs which are bipartite and planar. A counting version of the Exponential Time Hypothesis, #ETH, was introduced by H. Dell, T. Husfeldt and M. Wahlén in 2010 in order to study the complexity of the Tutte polynomial. In analogy to their results, we give a dichotomy theorem stating that evaluations of $Z(G;t,y)$ either take exponential time in the number of vertices of $G$ to compute, or can be done in polynomial time. Finally, we give an algorithm for computing $Z(G;x,y,z)$ in polynomial time on graphs of bounded clique-width, which is not known in the case of the Tutte polynomial.
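To make the object concrete, the sketch below computes a trivariate Ising-type partition function on a tiny graph by brute force over all spin assignments. The parameterization used (x marks +1 vertices, y monochromatic edges, z bichromatic edges) is one natural convention chosen for illustration; the paper's exact convention for $Z(G;x,y,z)$ may differ. The exponential-time enumeration is consistent with the #P-hardness discussed above.

```python
# Brute-force trivariate Ising-type partition function: sum over all spin
# assignments sigma: V -> {+1,-1}, weighting vertices and edges symbolically.
# The (x, y, z) convention is an assumption made for this illustration.
import itertools
import sympy as sp

x, y, z = sp.symbols("x y z")

def ising_partition(vertices, edges):
    Z = sp.Integer(0)
    for spins in itertools.product((+1, -1), repeat=len(vertices)):
        s = dict(zip(vertices, spins))
        term = x ** sum(1 for v in vertices if s[v] == +1)   # vertex weights
        for u, v in edges:
            term *= y if s[u] == s[v] else z                 # edge weights
        Z += term
    return sp.expand(Z)

# Example: the path 0-1-2, a simple graph that is both bipartite and planar.
print(ising_partition([0, 1, 2], [(0, 1), (1, 2)]))
```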
Two polynomials $f, g \in \mathbb{F}[x_1, \ldots, x_n]$ are called shift-equivalent if there exists a vector $(a_1, \ldots, a_n) \in \mathbb{F}^n$ such that the polynomial identity $f(x_1+a_1, \ldots, x_n+a_n) \equiv g(x_1,\ldots,x_n)$ holds. Our main result is a new randomized algorithm that tests whether two given polynomials are shift-equivalent. Our algorithm runs in time polynomial in the circuit size of the polynomials, to which it is given black-box access. This complements a previous work of Grigoriev (Theoretical Computer Science, 1997), who gave a deterministic algorithm running in time $n^{O(d)}$ for degree-$d$ polynomials. Our algorithm uses randomness only to solve instances of the Polynomial Identity Testing (PIT) problem. Hence, if one could de-randomize PIT (a long-standing open problem in complexity), a de-randomization of our algorithm would follow. This establishes an equivalence between de-randomizing shift-equivalence testing and de-randomizing PIT (both in the black-box and the white-box setting). For certain restricted models, such as Read-Once Branching Programs, we already obtain a deterministic algorithm using existing PIT results.
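The PIT subroutine the abstract refers to is easy to illustrate in its verification form: once a candidate shift $a$ is in hand, checking $f(x+a) \equiv g(x)$ is exactly a polynomial identity test, solvable with one-sided error by random evaluation (Schwartz-Zippel). The sketch below shows only that step; finding the shift is the paper's contribution and is not reproduced. The field, degree bound, and trial count are illustrative assumptions.

```python
# Randomized check of f(x + a) == g(x) over a prime field via Schwartz-Zippel:
# evaluate both sides at random points; a single mismatch certifies inequality,
# and agreement on all trials implies identity with high probability.
import random

P = 2**61 - 1                        # a large prime, field F_P (illustrative)

def shift_check(f, g, a, n, trials=20):
    """One-sided-error test of f(x+a) == g(x) for black-box f, g."""
    for _ in range(trials):
        x = [random.randrange(P) for _ in range(n)]
        xa = [(xi + ai) % P for xi, ai in zip(x, a)]
        if f(xa) % P != g(x) % P:
            return False             # definitely not shift-equivalent via a
    return True                      # identical with high probability

# Example: g(x1,x2) = (x1+3)(x2+5) is f(x1,x2) = x1*x2 shifted by a = (3,5).
f = lambda v: v[0] * v[1]
g = lambda v: (v[0] + 3) * (v[1] + 5)
print(shift_check(f, g, [3, 5], n=2))   # True
print(shift_check(f, g, [1, 1], n=2))   # False (with high probability)
```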