
Decision trees are PAC-learnable from most product distributions: a smoothed analysis

Added by Adam Kalai
Publication date: 2008
Language: English





We consider the problem of PAC-learning decision trees, i.e., learning a decision tree over the n-dimensional hypercube from independent random labeled examples. Despite significant effort, no polynomial-time algorithm is known for learning polynomial-sized decision trees (even trees of any super-constant size), even when examples are assumed to be drawn from the uniform distribution on $\{0,1\}^n$. We give an algorithm that learns arbitrary polynomial-sized decision trees for \emph{most product distributions}. In particular, consider a random product distribution where the bias of each bit is chosen independently and uniformly from, say, $[.49,.51]$. Then with high probability over the parameters of the product distribution and the random examples drawn from it, the algorithm will learn any tree. More generally, in the spirit of smoothed analysis, we consider an arbitrary product distribution whose parameters are specified only up to a $[-c,c]$ accuracy (perturbation), for an arbitrarily small positive constant $c$.
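To make the smoothed setup concrete, the following Python snippet is an illustration only, not the paper's learning algorithm: it draws a random product distribution whose per-bit biases are perturbed around 1/2, then draws examples labeled by a fixed decision tree. The nested-tuple tree encoding and function names are hypothetical choices for this sketch.

```python
# Illustrative sketch of the smoothed product-distribution setup (not the
# paper's algorithm). Each bit's bias is drawn uniformly from [0.5-c, 0.5+c],
# and examples are drawn from that product distribution and labeled by a
# fixed decision tree. The nested-tuple encoding is a hypothetical choice.
import random

def sample_biases(n, c=0.01):
    # Bias of bit i chosen independently and uniformly from [0.5 - c, 0.5 + c].
    return [0.5 + random.uniform(-c, c) for _ in range(n)]

def sample_point(biases):
    # Draw x from the product distribution: bit i is 1 with probability biases[i].
    return [1 if random.random() < p else 0 for p in biases]

def evaluate_tree(tree, x):
    # A tree is either a leaf label (0 or 1) or a tuple (i, left, right):
    # query bit i, follow `left` if x[i] == 0 and `right` if x[i] == 1.
    if not isinstance(tree, tuple):
        return tree
    i, left, right = tree
    return evaluate_tree(left if x[i] == 0 else right, x)

# A depth-2 target tree over n = 4 bits, and a small labeled sample drawn from it.
target = (0, (1, 0, 1), (2, 1, 0))
biases = sample_biases(n=4, c=0.01)
examples = [(x, evaluate_tree(target, x))
            for x in (sample_point(biases) for _ in range(5))]
print(examples)
```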



Related research

In recent years, there have been many attempts to understand popular heuristics. An example of such a heuristic algorithm is the ID3 algorithm for learning decision trees. This algorithm is commonly used in practice, but very few theoretical works study its behavior. In this paper, we analyze the ID3 algorithm when the target function is a $k$-junta, a function that depends on $k$ out of the $n$ input variables. We prove that when $k = \log n$, the ID3 algorithm learns $k$-juntas in polynomial time, in the smoothed analysis model of Kalai & Teng. That is, we show a learnability result when the observed distribution is a noisy variant of the original distribution.
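
As a reference point for the abstract above, here is a minimal, textbook-style sketch of ID3's greedy information-gain splitting rule in Python. It is not the paper's smoothed analysis; the function names and the nested-tuple tree encoding are illustrative choices.

```python
# Textbook-style ID3 sketch: at each node, split on the variable with maximum
# information gain over the examples reaching that node, then recurse.
import math
from itertools import product

def entropy(labels):
    # Binary entropy of a multiset of 0/1 labels.
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def information_gain(xs, ys, i):
    # Entropy drop from splitting on variable i.
    left = [y for x, y in zip(xs, ys) if x[i] == 0]
    right = [y for x, y in zip(xs, ys) if x[i] == 1]
    children = (len(left) * entropy(left) + len(right) * entropy(right)) / len(ys)
    return entropy(ys) - children

def id3(xs, ys, variables):
    majority = int(2 * sum(ys) >= len(ys))
    if len(set(ys)) <= 1 or not variables:
        return majority                      # leaf: predict the majority label
    i = max(variables, key=lambda j: information_gain(xs, ys, j))
    subtrees = []
    for b in (0, 1):
        branch = [(x, y) for x, y in zip(xs, ys) if x[i] == b]
        if not branch:
            subtrees.append(majority)
        else:
            bx, by = zip(*branch)
            subtrees.append(id3(list(bx), list(by), variables - {i}))
    return (i, subtrees[0], subtrees[1])

# Example: learn a 2-junta f(x) = x0 AND x1 from the full truth table over n = 4 bits.
xs = [list(x) for x in product([0, 1], repeat=4)]
ys = [x[0] & x[1] for x in xs]
print(id3(xs, ys, set(range(4))))            # e.g. (0, 0, (1, 0, 1))
```
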
PAC-learning usually aims to compute a small subset ($\varepsilon$-sample/net) from $n$ items that provably approximates a given loss function for every query (model, classifier, hypothesis) from a given set of queries, up to an additive error $\varepsilon\in(0,1)$. Coresets generalize this idea to support multiplicative error $1\pm\varepsilon$. Inspired by smoothed analysis, we suggest a natural generalization: approximate the \emph{average} (instead of the worst-case) error over the queries, in the hope of getting smaller subsets. The dependency between errors of different queries implies that we may no longer apply the Chernoff-Hoeffding inequality for a fixed query and then use the VC-dimension or a union bound. This paper provides deterministic and randomized algorithms for computing such coresets and $\varepsilon$-samples of size independent of $n$, for any finite set of queries and loss function. Example applications include new and improved coreset constructions for, e.g., streaming vector summarization [ICML17] and $k$-PCA [NIPS16]. Experimental results with open source code are provided.
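
The worst-case versus average-error distinction drawn above can be illustrated numerically. The snippet below is an illustration only: plain uniform sampling and a simple absolute-error loss are hypothetical stand-ins, not the paper's coreset construction.

```python
# Numerical illustration (not the paper's construction): for a uniformly
# sampled subset S of the n items, the average approximation error over a
# finite query set is typically much smaller than the worst-case error over
# the same queries. The loss below is a hypothetical stand-in.
import random

random.seed(0)
n, m, k = 10_000, 200, 100                      # items, queries, subset size
items = [random.gauss(0, 1) for _ in range(n)]
queries = [random.gauss(0, 1) for _ in range(m)]

def loss(point_set, q):
    # Mean absolute loss of query q over a set of items.
    return sum(abs(x - q) for x in point_set) / len(point_set)

subset = random.sample(items, k)
errors = [abs(loss(subset, q) - loss(items, q)) for q in queries]
print("worst-case error over queries:", max(errors))
print("average error over queries   :", sum(errors) / len(errors))
```
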
We study sublinear and local computation algorithms for decision trees, focusing on testing and reconstruction. Our first result is a tester that runs in $\mathrm{poly}(\log s, 1/\varepsilon)\cdot n\log n$ time, makes $\mathrm{poly}(\log s, 1/\varepsilon)\cdot \log n$ queries to an unknown function $f$, and: $\circ$ Accepts if $f$ is $\varepsilon$-close to a size-$s$ decision tree; $\circ$ Rejects if $f$ is $\Omega(\varepsilon)$-far from decision trees of size $s^{\tilde{O}((\log s)^2/\varepsilon^2)}$. Existing testers distinguish size-$s$ decision trees from those that are $\varepsilon$-far from size-$s$ decision trees in $\mathrm{poly}(s^s, 1/\varepsilon)\cdot n$ time with $\tilde{O}(s/\varepsilon)$ queries. We therefore solve an incomparable problem, but achieve doubly-exponential-in-$s$ and exponential-in-$s$ improvements in time and query complexities, respectively. We obtain our tester by designing a reconstruction algorithm for decision trees: given query access to a function $f$ that is close to a small decision tree, this algorithm provides fast query access to a small decision tree that is close to $f$. By known relationships, our results yield reconstruction algorithms for numerous other boolean function properties -- Fourier degree, randomized and quantum query complexities, certificate complexity, sensitivity, etc. -- which in turn yield new testers for these properties. Finally, we give a hardness result for testing whether an unknown function is $\varepsilon$-close to or $\Omega(\varepsilon)$-far from size-$s$ decision trees. We show that an efficient algorithm for this task would yield an efficient algorithm for properly learning decision trees, a central open problem of learning theory. It has long been known that proper learning algorithms for any class $\mathcal{H}$ yield property testers for $\mathcal{H}$; this provides an example of a converse.
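
For readers new to property testing, the $\varepsilon$-close/$\varepsilon$-far terminology refers to the fraction of inputs on which two functions disagree, which an algorithm with query access can estimate by random sampling. The snippet below shows only that elementary estimation step, not the paper's tester; the functions and parameters are illustrative.

```python
# Elementary sketch of the "epsilon-close / epsilon-far" notion (not the
# paper's tester): dist(f, g) is the fraction of inputs on which f and g
# disagree, estimated here from uniform random queries.
import random

def estimated_distance(f, g, n, samples=10_000):
    # Monte Carlo estimate of Pr_x[f(x) != g(x)] under the uniform distribution.
    disagreements = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        disagreements += f(x) != g(x)
    return disagreements / samples

# Example: g ignores the third variable that f depends on, so dist(f, g) = 1/8.
f = lambda x: x[0] & (x[1] | x[2])
g = lambda x: x[0] & x[1]
print(estimated_distance(f, g, n=3))   # close to 0.125
```
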
We give a quasipolynomial-time algorithm for learning stochastic decision trees that is optimally resilient to adversarial noise. Given an $\eta$-corrupted set of uniform random samples labeled by a size-$s$ stochastic decision tree, our algorithm runs in time $n^{O(\log(s/\varepsilon)/\varepsilon^2)}$ and returns a hypothesis with error within an additive $2\eta + \varepsilon$ of the Bayes optimal. An additive $2\eta$ is the information-theoretic minimum. Previously no non-trivial algorithm with a guarantee of $O(\eta) + \varepsilon$ was known, even for weaker noise models. Our algorithm is furthermore proper, returning a hypothesis that is itself a decision tree; previously no such algorithm was known even in the noiseless setting.
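
To make the noise model concrete: an $\eta$-corrupted sample is one in which up to an $\eta$ fraction of the labeled examples may be altered arbitrarily. The sketch below produces one simple instance of such corruption (random label flips on an $\eta$ fraction of a deterministically labeled sample); the paper's guarantee holds against arbitrary corruption and for stochastic decision trees, and the names here are illustrative.

```python
# One simple instance of eta-corruption (illustrative only; the paper handles
# arbitrary adversarial corruption of an eta fraction of the sample, and
# stochastic rather than deterministic labels): flip the labels of
# floor(eta*m) randomly chosen examples among m uniform random examples.
import random

def corrupted_sample(f, n, m, eta):
    xs = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
    ys = [f(x) for x in xs]
    for i in random.sample(range(m), int(eta * m)):
        ys[i] = 1 - ys[i]                      # the adversary flips this label
    return list(zip(xs, ys))

# Example: a 10%-corrupted sample of 1000 examples labeled by a small tree.
f = lambda x: x[1] if x[0] else x[2]
sample = corrupted_sample(f, n=3, m=1000, eta=0.1)
```
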
We give an $n^{O(\log\log n)}$-time membership query algorithm for properly and agnostically learning decision trees under the uniform distribution over $\{\pm 1\}^n$. Even in the realizable setting, the previous fastest runtime was $n^{O(\log n)}$, a consequence of a classic algorithm of Ehrenfeucht and Haussler. Our algorithm shares similarities with practical heuristics for learning decision trees, which we augment with additional ideas to circumvent known lower bounds against these heuristics. To analyze our algorithm, we prove a new structural result for decision trees that strengthens a theorem of O'Donnell, Saks, Schramm, and Servedio. While the OSSS theorem says that every decision tree has an influential variable, we show how every decision tree can be pruned so that every variable in the resulting tree is influential.
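
The structural notion at the end of the abstract, an influential variable, has a concrete definition: the influence of variable $i$ on $f$ under the uniform distribution over $\{\pm 1\}^n$ is $\Pr_x[f(x) \neq f(x^{\oplus i})]$, where $x^{\oplus i}$ flips coordinate $i$. The brute-force sketch below only illustrates this definition on a toy tree; it is not the paper's pruning procedure, and the example function is a hypothetical choice.

```python
# Brute-force illustration of variable influence under the uniform distribution
# over {-1, +1}^n (the quantity appearing in OSSS-style statements):
# Inf_i(f) = Pr_x[f(x) != f(x with coordinate i flipped)].
from itertools import product

def influence(f, n, i):
    flips = 0
    for x in product([-1, 1], repeat=n):
        y = list(x)
        y[i] = -y[i]                 # flip coordinate i
        flips += f(list(x)) != f(y)
    return flips / 2 ** n

# Example: a depth-2 decision tree f(x) = x1 if x0 = 1, else x2 (0-indexed bits).
f = lambda x: x[1] if x[0] == 1 else x[2]
print([influence(f, 3, i) for i in range(3)])   # every variable has influence 1/2
```
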
