
Tree-Independent Dual-Tree Algorithms

Posted by Ryan Curtin
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





Dual-tree algorithms are a widely used class of branch-and-bound algorithms. Unfortunately, developing dual-tree algorithms for use with different trees and problems is often complex and burdensome. We introduce a four-part logical split: the tree, the traversal, the point-to-point base case, and the pruning rule. We provide a meta-algorithm which allows development of dual-tree algorithms in a tree-independent manner and easy extension to entirely new types of trees. Representations are provided for five common algorithms; for k-nearest neighbor search, this leads to a novel, tighter pruning bound. The meta-algorithm also allows straightforward extensions to massively parallel settings.
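The following is a minimal Python sketch of that four-part split, under assumed names (Node, base_case, score, dual_depth_first) that are not taken from the paper: the tree and the traversal are written once, while the point-to-point base case and the pruning rule (here, for 1-nearest-neighbor search) define the particular algorithm.

```python
import math

class Node:
    """A minimal space-tree node; points lists every point in this node's subtree."""
    def __init__(self, points, children=()):
        self.points = points            # list of (index, coordinate tuple) pairs
        self.children = list(children)

    def is_leaf(self):
        return not self.children

class NearestNeighborRules:
    """Point-to-point base case and pruning rule for 1-nearest-neighbor search."""
    def __init__(self, query_points):
        self.best = {qi: (math.inf, None) for qi, _ in query_points}

    def base_case(self, q, r):
        qi, qp = q                      # (query index, query coordinates)
        ri, rp = r                      # (reference index, reference coordinates)
        d = math.dist(qp, rp)
        if d < self.best[qi][0]:
            self.best[qi] = (d, ri)     # new nearest neighbor candidate
        return d

    def score(self, qnode, rnode):
        # Prune this node pair when no reference point under rnode can improve any
        # query's current best.  A real rule would bound this using the nodes'
        # bounding shapes; an exact pairwise minimum stands in for that bound here.
        lower = min(math.dist(qp, rp) for _, qp in qnode.points for _, rp in rnode.points)
        worst_best = max(self.best[qi][0] for qi, _ in qnode.points)
        return None if lower > worst_best else lower

def dual_depth_first(qnode, rnode, rules):
    """Tree-independent dual depth-first traversal: prune, recurse, or run base cases."""
    if rules.score(qnode, rnode) is None:
        return                                    # pruned: skip this node combination
    if qnode.is_leaf() and rnode.is_leaf():
        for q in qnode.points:
            for r in rnode.points:
                rules.base_case(q, r)             # the point-to-point base case
        return
    # Recurse into children; a leaf on one side is paired with itself.
    for qc in (qnode.children or [qnode]):
        for rc in (rnode.children or [rnode]):
            dual_depth_first(qc, rc, rules)
```

Swapping NearestNeighborRules for a different base case and pruning rule (e.g., for range search or kernel density estimation) yields a different dual-tree algorithm over the same traversal, and swapping the tree type requires no change to the rules.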




Read also

Robust optimization is a widely studied area in operations research, where the algorithm takes as input a range of values and outputs a single solution that performs well for the entire range. Specifically, a robust algorithm aims to minimize regret, defined as the maximum difference between the solution's cost and that of an optimal solution in hindsight once the input has been realized. For graph problems in P, such as shortest path and minimum spanning tree, robust polynomial-time algorithms that obtain a constant approximation on regret are known. In this paper, we study robust algorithms for minimizing regret in NP-hard graph optimization problems, and give constant approximations on regret for the classical traveling salesman and Steiner tree problems.
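In the notation of this summary (not the paper's), for an uncertainty set $\mathcal{I}$ of possible inputs with cost functions $c_I$ and feasible sets $\mathcal{F}(I)$, the regret of a fixed solution $x$ is $\mathrm{regret}(x) = \max_{I \in \mathcal{I}} \bigl( c_I(x) - \min_{y \in \mathcal{F}(I)} c_I(y) \bigr)$, and a robust algorithm seeks an $x$ whose regret is within a constant factor of the minimum possible regret.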
In the Priority Steiner Tree (PST) problem, we are given an undirected graph $G=(V,E)$ with a source $s \in V$ and terminals $T \subseteq V \setminus \{s\}$, where each terminal $v \in T$ requires a nonnegative priority $P(v)$. The goal is to compute a minimum weight Steiner tree containing edges of varying rates such that the path from $s$ to each terminal $v$ consists of edges of rate greater than or equal to $P(v)$. The PST problem with $k$ priorities admits a $\min\{2 \ln |T| + 2,\, k\rho\}$-approximation [Charikar et al., 2004], and is hard to approximate with ratio $c \log \log n$ for some constant $c$ [Chuzhoy et al., 2008]. In this paper, we first strengthen the analysis provided by [Charikar et al., 2004] for the $(2 \ln |T| + 2)$-approximation to show an approximation ratio of $\lceil \log_2 |T| \rceil + 1 \le 1.443 \ln |T| + 2$, then provide a very simple, parallelizable algorithm which achieves the same approximation ratio. We then consider a more difficult node-weighted version of the PST problem, and provide a $(2 \ln |T| + 2)$-approximation using extensions of the spider decomposition by [Klein & Ravi, 1995]. This is the first result for the PST problem in node-weighted graphs. Moreover, the approximation ratios for all the above algorithms are tight.
Ryan R. Curtin (2016)
k-means is a widely used clustering algorithm, but for $k$ clusters and a dataset size of $N$, each iteration of Lloyd's algorithm costs $O(kN)$ time. Although there are existing techniques to accelerate single Lloyd iterations, none of these are tailored to the case of large $k$, which is increasingly common as dataset sizes grow. We propose a dual-tree algorithm that gives the exact same results as standard $k$-means; when using cover trees, we use adaptive analysis techniques to, under some assumptions, bound the single-iteration runtime of the algorithm as $O(N + k \log k)$. To our knowledge these are the first sub-$O(kN)$ bounds for exact Lloyd iterations. We then show that this theoretically favorable algorithm performs competitively in practice, especially for large $N$ and $k$ in low dimensions. Further, the algorithm is tree-independent, so any type of tree may be used.
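To connect this with the tree-independent framework above, here is a rough Python sketch (hypothetical names, not the paper's implementation) of how one exact k-means assignment step could be phrased as a base case plus a pruning rule and run through the same dual_depth_first traversal sketched earlier, with one tree built on the points and another on the current centroids:

```python
import math

class KMeansAssignRules:
    """Illustrative base case and pruning rule for one exact k-means assignment step."""
    def __init__(self, points):
        # Current best (distance, cluster id) for every point index.
        self.best = {pi: (math.inf, None) for pi, _ in points}

    def base_case(self, p, c):
        pi, pp = p                      # (point index, point coordinates)
        ci, cp = c                      # (cluster id, centroid coordinates)
        d = math.dist(pp, cp)
        if d < self.best[pi][0]:
            self.best[pi] = (d, ci)     # reassign the point to this closer centroid
        return d

    def score(self, pnode, cnode):
        # Prune when no centroid under cnode can be closer to any point under pnode
        # than that point's current best centroid.  As before, a real rule would use
        # the nodes' bounding shapes rather than this pairwise minimum.
        lower = min(math.dist(pp, cp) for _, pp in pnode.points for _, cp in cnode.points)
        worst_best = max(self.best[pi][0] for pi, _ in pnode.points)
        return None if lower > worst_best else lower
```

Running the traversal with these rules reproduces the plain Lloyd assignment step; the pruning rule is where a dual-tree variant can avoid comparing every point against every centroid when $k$ is large.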
We study the classical NP-hard problems of finding maximum-size subsets from given sets of $k$ terminal pairs that can be routed via edge-disjoint paths (MaxEDP) or node-disjoint paths (MaxNDP) in a given graph. The approximability of MaxEDP/NDP is currently not well understood; the best known lower bound is $\Omega(\log^{1/2-\epsilon} n)$, assuming NP $\not\subseteq$ ZPTIME$(n^{\mathrm{poly}\log n})$. This constitutes a significant gap to the best known approximation upper bound of $O(\sqrt{n})$ due to Chekuri et al. (2006), and closing this gap is currently one of the big open problems in approximation algorithms. In their seminal paper, Raghavan and Thompson (Combinatorica, 1987) introduce the technique of randomized rounding for LPs; their technique gives an $O(1)$-approximation when edges (or nodes) may be used by $O(\frac{\log n}{\log\log n})$ paths. In this paper, we strengthen the above fundamental results. We provide new bounds formulated in terms of the feedback vertex set number $r$ of a graph, which measures its vertex deletion distance to a forest. In particular, we obtain the following.
* For MaxEDP, we give an $O(\sqrt{r} \cdot \log^{1.5} kr)$-approximation algorithm. As $r \leq n$, up to logarithmic factors, our result strengthens the best known ratio $O(\sqrt{n})$ due to Chekuri et al.
* Further, we show how to route $\Omega(\mathrm{OPT})$ pairs with congestion $O(\frac{\log kr}{\log\log kr})$, strengthening the bound obtained by the classic approach of Raghavan and Thompson.
* For MaxNDP, we give an algorithm that gives the optimal answer in time $(k+r)^{O(r)} \cdot n$. If $r$ is at most triple-exponential in $k$, this improves the best known algorithm for MaxNDP with parameter $k$, by Kawarabayashi and Wollan (STOC 2010).
We complement these positive results by proving that MaxEDP is NP-hard even for $r=1$, and MaxNDP is W$[1]$-hard for parameter $r$.
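For reference (our notation, not the paper's), the feedback vertex set number used here is $r = \min\{\,|S| : S \subseteq V,\ G - S \text{ is a forest}\,\}$, i.e., the fewest vertices whose removal leaves the graph acyclic.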
The three-in-a-tree problem asks for an induced tree of the input graph containing three mandatory vertices. In 2006, Chudnovsky and Seymour [Combinatorica, 2010] presented the first polynomial-time algorithm for this problem, which has become a critical subroutine in many algorithms for detecting induced subgraphs, such as beetles, pyramids, thetas, and even and odd holes. In 2007, Derhy and Picouleau [Discrete Applied Mathematics, 2009] considered the natural generalization to $k$ mandatory vertices, proving that, when $k$ is part of the input, the problem is $\mathsf{NP}$-complete, and asked about the complexity of four-in-a-tree. Motivated by this question and the relevance of the original problem, we study the parameterized complexity of $k$-in-a-tree. We begin by showing that the problem is $\mathsf{W[1]}$-hard when jointly parameterized by the size of the solution and minimum clique cover and, under the Exponential Time Hypothesis, does not admit an $n^{o(k)}$ time algorithm. Afterwards, we use Courcelle's Theorem to prove fixed-parameter tractability under cliquewidth, which prompts our investigation into which parameterizations admit single-exponential algorithms; we show that such algorithms exist for the unrelated parameterizations treewidth, distance to cluster, and distance to co-cluster. In terms of kernelization, we present a linear kernel under feedback edge set, and show that no polynomial kernel exists under vertex cover nor distance to clique unless $\mathsf{NP} \subseteq \mathsf{coNP}/\mathsf{poly}$. Along with other remarks and previous work, our tractability and kernelization results cover many of the most commonly employed parameters in the graph parameter hierarchy.