
Euclidean TSP, Motorcycle Graphs, and Other New Applications of Nearest-Neighbor Chains

Published by: Nil Mamano
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





We show new applications of the nearest-neighbor chain algorithm, a technique that originated in agglomerative hierarchical clustering. We apply it to a diverse class of geometric problems: we construct the greedy multi-fragment tour for Euclidean TSP in $O(n\log n)$ time in any fixed dimension and for Steiner TSP in planar graphs in $O(n\sqrt{n}\log n)$ time; we compute motorcycle graphs (which are a central part of straight skeleton algorithms) in $O(n^{4/3+\varepsilon})$ time for any $\varepsilon>0$; we introduce a narcissistic variant of the $k$-attribute stable matching model, and solve it in $O(n^{2-4/(k(1+\varepsilon)+2)})$ time; we give a linear-time $2$-approximation for a 1D geometric set cover problem with applications to radio station placement.
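For context, the nearest-neighbor chain technique mentioned above originates in agglomerative hierarchical clustering: grow a chain of clusters in which each element is a nearest neighbor of the previous one, and merge the pair of mutual nearest neighbors that eventually appears at the end of the chain. The sketch below is a minimal illustration of that classic algorithm under a "reducible" linkage (single linkage, with brute-force nearest-cluster search); the function names are ours, and the paper's results come from replacing the brute-force search with geometric data structures, not from this code.

```python
import math

def single_linkage(a, b):
    # Distance between two clusters (lists of points) under single linkage,
    # a "reducible" linkage and hence compatible with nearest-neighbor chains.
    return min(math.dist(p, q) for p in a for q in b)

def nn_chain_clustering(points):
    """Agglomerative clustering via nearest-neighbor chains.

    Returns the merges as pairs of cluster ids (input points get ids 0..n-1,
    merged clusters get fresh ids). Nearest clusters are found by brute force
    here, so this sketch is not subquadratic."""
    clusters = {i: [p] for i, p in enumerate(points)}   # id -> member points
    chain = []                                          # stack of cluster ids
    merges = []
    next_id = len(points)
    while len(clusters) > 1:
        if not chain:
            chain.append(next(iter(clusters)))          # start a chain anywhere
        top = chain[-1]
        prev = chain[-2] if len(chain) > 1 else None
        # Nearest other cluster to the chain's top; ties break toward the
        # predecessor so that mutual nearest neighbors are detected.
        nearest = min(
            (c for c in clusters if c != top),
            key=lambda c: (single_linkage(clusters[top], clusters[c]), c != prev),
        )
        if nearest == prev:
            # The top two chain elements are mutual nearest neighbors: merge.
            chain.pop()
            chain.pop()
            clusters[next_id] = clusters.pop(top) + clusters.pop(prev)
            merges.append((top, prev))
            next_id += 1
        else:
            chain.append(nearest)
    return merges
```

For example, `nn_chain_clustering([(0, 0), (0, 1), (5, 0), (5, 1)])` first merges the two left points, then the two right points, and finally the two resulting clusters.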




Read also

We study two fundamental problems dealing with curves in the plane, namely, the nearest-neighbor problem and the center problem. Let $\mathcal{C}$ be a set of $n$ polygonal curves, each of size $m$. In the nearest-neighbor problem, the goal is to construct a compact data structure over $\mathcal{C}$, such that, given a query curve $Q$, one can efficiently find the curve in $\mathcal{C}$ closest to $Q$. In the center problem, the goal is to find a curve $Q$, such that the maximum distance between $Q$ and the curves in $\mathcal{C}$ is minimized. We use the well-known discrete Fréchet distance function, both under $L_\infty$ and under $L_2$, to measure the distance between two curves. For the nearest-neighbor problem, despite discouraging previous results, we identify two important cases for which it is possible to obtain practical bounds, even when $m$ and $n$ are large. In these cases, either $Q$ is a line segment or $\mathcal{C}$ consists of line segments, and the bounds on the size of the data structure and query time are nearly linear in the size of the input and query curve, respectively. The returned answer is either exact under $L_\infty$, or approximated to within a factor of $1+\varepsilon$ under $L_2$. We also consider the variants in which the location of the input curves is only fixed up to translation, and obtain similar bounds, under $L_\infty$. As for the center problem, we study the case where the center is a line segment, i.e., we seek the line segment that represents the given set as well as possible. We present near-linear time exact algorithms under $L_\infty$, even when the location of the input curves is only fixed up to translation. Under $L_2$, we present a roughly $O(n^2m^3)$-time exact algorithm.
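Both problems above are measured with the discrete Fréchet distance. As a point of reference (not taken from the paper), the sketch below is the standard quadratic dynamic program that evaluates this distance between two polygonal curves, here under $L_2$; the paper's data structures exist precisely to avoid running such a computation against every curve in $\mathcal{C}$ per query.

```python
import math

def discrete_frechet(P, Q, dist=math.dist):
    """Discrete Frechet distance between curves P and Q (sequences of points),
    via the standard O(|P|*|Q|) dynamic program."""
    n, m = len(P), len(Q)
    dp = [[0.0] * m for _ in range(n)]   # dp[i][j]: distance for P[:i+1], Q[:j+1]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                dp[i][j] = d
            elif i == 0:
                dp[i][j] = max(dp[i][j - 1], d)
            elif j == 0:
                dp[i][j] = max(dp[i - 1][j], d)
            else:
                dp[i][j] = max(min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1]), d)
    return dp[n - 1][m - 1]
```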
In the $(1+\varepsilon,r)$-approximate near-neighbor problem for curves (ANNC) under some distance measure $\delta$, the goal is to construct a data structure for a given set $\mathcal{C}$ of curves that supports approximate near-neighbor queries: Given a query curve $Q$, if there exists a curve $C\in\mathcal{C}$ such that $\delta(Q,C)\le r$, then return a curve $C\in\mathcal{C}$ with $\delta(Q,C)\le(1+\varepsilon)r$. There exists an efficient reduction from the $(1+\varepsilon)$-approximate nearest-neighbor problem to ANNC, where in the former problem the answer to a query is a curve $C\in\mathcal{C}$ with $\delta(Q,C)\le(1+\varepsilon)\cdot\delta(Q,C^*)$, where $C^*$ is the curve of $\mathcal{C}$ closest to $Q$. Given a set $\mathcal{C}$ of $n$ curves, each consisting of $m$ points in $d$ dimensions, we construct a data structure for ANNC that uses $n\cdot O(\frac{1}{\varepsilon})^{md}$ storage space and has $O(md)$ query time (for a query curve of length $m$), where the similarity between two curves is their discrete Fréchet or dynamic time warping distance. Our method is simple to implement, deterministic, and results in an exponential improvement in both query time and storage space compared to all previous bounds. Further, we also consider the asymmetric version of ANNC, where the length of the query curves is $k \ll m$, and obtain essentially the same storage and query bounds as above, except that $m$ is replaced by $k$. Finally, we apply our method to a version of approximate range counting for curves and achieve similar bounds.
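To make the query semantics concrete, the brute-force sketch below spells out what a $(1+\varepsilon,r)$-ANNC query is allowed to return, using dynamic time warping as the distance $\delta$; the function names are illustrative, and the point of the data structure described above is to answer such queries without the linear scan used here.

```python
import math

def dtw(P, Q, dist=math.dist):
    """Dynamic time warping distance between two point sequences."""
    n, m = len(P), len(Q)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = dist(P[i - 1], Q[j - 1]) + min(
                dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1]
            )
    return dp[n][m]

def annc_query_bruteforce(curves, Q, r, eps, delta=dtw):
    """Reference semantics of a (1+eps, r)-approximate near-neighbor query:
    if some curve is within distance r of Q, a curve within (1+eps)*r of Q
    must be returned; otherwise None is also an acceptable answer."""
    for C in curves:
        if delta(Q, C) <= (1 + eps) * r:
            return C
    return None
```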
Given a set of point sites, a sona drawing is a single closed curve, disjoint from the sites and intersecting itself only in simple crossings, so that each bounded region of its complement contains exactly one of the sites. We prove that it is NP-hard to find a minimum-length sona drawing for $n$ given points, and that such a curve can be longer than the TSP tour of the same points by a factor $> 1.5487875$. When restricted to tours that lie on the edges of a square grid, with points in the grid cells, we prove that it is NP-hard even to decide whether such a tour exists. These results answer questions posed at CCCG 2006.
Syed Ali Raza Zaidi, 2021
In this paper, we present an overview of nearest-neighbor (NN) methods, which are frequently employed for solving classification problems using supervised learning. The article concisely introduces the theoretical background, algorithmic, and implementation aspects, along with the key applications. From an application standpoint, this article explores the challenges related to 5G and beyond wireless networks which can be solved using NN classification techniques.
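As a minimal, generic illustration of the NN classification technique surveyed above (not code from the paper), a plain $k$-nearest-neighbor classifier with Euclidean distance and majority voting can be written as follows; in practice the linear scan per query would be replaced by a spatial index.

```python
import numpy as np

def knn_classify(X_train, y_train, X_query, k=5):
    """k-nearest-neighbor classification by majority vote (Euclidean distance)."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)       # distances to all training points
        nearest = np.argsort(d)[:k]                   # indices of the k closest
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])       # most common label wins
    return np.array(preds)
```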
Arka Banerjee, Tom Abel, 2020
The use of summary statistics beyond the two-point correlation function to analyze the non-Gaussian clustering on small scales is an active field of research in cosmology. In this paper, we explore a set of new summary statistics -- the $k$-Nearest Neighbor Cumulative Distribution Functions ($k\mathrm{NN}$-$\mathrm{CDF}$). This is the empirical cumulative distribution function of distances from a set of volume-filling, Poisson distributed random points to the $k$-nearest data points, and is sensitive to all connected $N$-point correlations in the data. The $k\mathrm{NN}$-$\mathrm{CDF}$ can be used to measure counts-in-cells, void probability distributions and higher $N$-point correlation functions, all using the same formalism exploiting fast searches with spatial tree data structures. We demonstrate how it can be computed efficiently from various data sets -- both discrete points, and the generalization for continuous fields. We use data from a large suite of $N$-body simulations to explore the sensitivity of this new statistic to various cosmological parameters, compared to the two-point correlation function, while using the same range of scales. We demonstrate that the use of the $k\mathrm{NN}$-$\mathrm{CDF}$ improves the constraints on the cosmological parameters by more than a factor of $2$ when applied to the clustering of dark matter in the range of scales between $10\,h^{-1}\mathrm{Mpc}$ and $40\,h^{-1}\mathrm{Mpc}$. We also show that the relative improvement is even greater when applied on the same scales to the clustering of halos in the simulations at a fixed number density, both in real space, as well as in redshift space. Since the $k\mathrm{NN}$-$\mathrm{CDF}$s are sensitive to all higher order connected correlation functions in the data, the gains over traditional two-point analyses are expected to grow as progressively smaller scales are included in the analysis of cosmological data.
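As described above, the $k\mathrm{NN}$-$\mathrm{CDF}$ is the empirical CDF of distances from volume-filling random query points to their $k$-th nearest data point. The sketch below shows one way to measure it with a periodic k-d tree (SciPy's cKDTree); the array shapes, the periodic-box assumption, and the function name are ours, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_cdf(data, box_size, k=1, n_query=100_000, seed=None):
    """Empirical kNN-CDF of `data` (an (N, d) array of positions inside a
    periodic box of side `box_size`): distances from uniform random query
    points to their k-th nearest data point, returned as
    (sorted distances, CDF values)."""
    rng = np.random.default_rng(seed)
    queries = rng.uniform(0.0, box_size, size=(n_query, data.shape[1]))
    tree = cKDTree(data, boxsize=box_size)        # periodic boundary conditions
    d, _ = tree.query(queries, k=k)               # distances to the k nearest data points
    dk = d if k == 1 else d[:, -1]                # keep the k-th neighbor distance
    dk = np.sort(dk)
    cdf = np.arange(1, n_query + 1) / n_query     # empirical CDF
    return dk, cdf
```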