
Approximate nearest neighbors search without false negatives for $l_2$ for $c>\sqrt{\log\log{n}}$

Posted by: Piotr Wygocki
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





In this paper, we report progress on answering the open problem presented by Pagh~[14], who considered the nearest neighbor search without false negatives for the Hamming distance. We show new data structures for solving the $c$-approximate nearest neighbors problem without false negatives for the Euclidean high-dimensional space $\mathbb{R}^d$. These data structures work for any $c = \omega(\sqrt{\log{\log{n}}})$, where $n$ is the number of points in the input set, with poly-logarithmic query time and polynomial preprocessing time. This improves over the known algorithms, which require $c$ to be $\Omega(\sqrt{d})$. This improvement is obtained by applying a sequence of reductions, which are interesting on their own. First, we reduce the problem to $d$ instances of dimension logarithmic in $n$. Next, these instances are reduced to a number of $c$-approximate nearest neighbor search instances in the $\big(\mathbb{R}^k\big)^L$ space equipped with the metric $m(x,y) = \max_{1 \le i \le L}(\lVert x_i - y_i\rVert_2)$.
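As a rough illustration of the target space of the final reduction (a sketch, not code from the paper), the snippet below evaluates the product metric $m(x,y) = \max_{1 \le i \le L}(\lVert x_i - y_i\rVert_2)$ on $\big(\mathbb{R}^k\big)^L$; the flat-array block layout and the function name are illustrative assumptions.

```python
# Illustrative sketch of the metric m(x, y) = max_i ||x_i - y_i||_2 on (R^k)^L.
# Points are stored as flat arrays of L blocks with k coordinates each.
import numpy as np

def max_block_l2(x: np.ndarray, y: np.ndarray, L: int, k: int) -> float:
    """Distance between two points of (R^k)^L given as flat arrays of length L*k."""
    xb = x.reshape(L, k)          # split into blocks x_1, ..., x_L
    yb = y.reshape(L, k)
    return float(np.max(np.linalg.norm(xb - yb, axis=1)))

# Toy usage: two points with L = 3 blocks of dimension k = 2.
x = np.array([0.0, 0.0,  1.0, 1.0,  2.0, 2.0])
y = np.array([0.1, 0.0,  1.0, 0.5,  2.0, 2.0])
print(max_block_l2(x, y, L=3, k=2))   # max over the three per-block l2 distances
```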




Read also

Persistence diagrams are important tools in the field of topological data analysis that describe the presence and magnitude of features in a filtered topological space. However, current approaches for comparing a persistence diagram to a set of other persistence diagrams are either linear in the number of diagrams or do not offer performance guarantees. In this paper, we apply concepts from locality-sensitive hashing to support approximate nearest neighbor search in the space of persistence diagrams. Given a set $\Gamma$ of $n$ $(M,m)$-bounded persistence diagrams, each with at most $m$ points, we snap-round the points of each diagram to points on a cubical lattice and produce a key for each possible snap-rounding. Specifically, we fix a grid over each diagram at several resolutions and consider the snap-roundings of each diagram to the four nearest lattice points. Then, we propose a data structure with $\tau$ levels $\mathbb{D}_{\tau}$ that stores all snap-roundings of each persistence diagram in $\Gamma$ at each resolution. This data structure has size $O(n5^m\tau)$ to account for varying lattice resolutions as well as snap-roundings and the deletion of points with low persistence. To search for a persistence diagram, we compute a key for a query diagram by snapping each point to a lattice and deleting points of low persistence. Furthermore, as the lattice parameter decreases, searching our data structure yields a six-approximation of the nearest diagram in $\Gamma$ in $O((m\log{n}+m^2)\log\tau)$ time and a constant factor approximation of the $k$th nearest diagram in $O((m\log{n}+m^2+k)\log\tau)$ time.
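A minimal sketch of the snap-rounding step described above, assuming a diagram is given as (birth, death) pairs: it rounds each point to the single nearest lattice point and drops low-persistence points, whereas the actual data structure stores all snap-roundings to the four nearest lattice points at several resolutions. Function and parameter names are hypothetical.

```python
# Hedged sketch of snap-rounding a persistence diagram to a cubical lattice of
# side `cell`, deleting points whose persistence (death - birth) is too small.
def snap_round(diagram, cell, min_persistence):
    """Return a hashable key: the sorted lattice points of the snapped diagram."""
    snapped = []
    for birth, death in diagram:
        if death - birth < min_persistence:
            continue                     # delete points of low persistence
        snapped.append((round(birth / cell), round(death / cell)))
    return tuple(sorted(snapped))

# Toy usage: two diagrams that agree after snapping to a coarse lattice.
d1 = [(0.02, 0.98), (0.40, 0.43)]
d2 = [(0.00, 1.01), (0.10, 0.12)]
print(snap_round(d1, cell=0.25, min_persistence=0.1) ==
      snap_round(d2, cell=0.25, min_persistence=0.1))   # True
```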
The celebrated Monte Carlo method estimates an expensive-to-compute quantity by random sampling. Bandit-based Monte Carlo optimization is a general technique for computing the minimum of many such expensive-to-compute quantities by adaptive random sampling. The technique converts an optimization problem into a statistical estimation problem which is then solved via multi-armed bandits. We apply this technique to solve the problem of high-dimensional $k$-nearest neighbors, developing an algorithm which we prove is able to identify exact nearest neighbors with high probability. We show that under regularity assumptions on a dataset of $n$ points in $d$-dimensional space, the complexity of our algorithm scales logarithmically with the dimension of the data as $O\left((n+d)\log^2\left(\frac{nd}{\delta}\right)\right)$ for error probability $\delta$, rather than linearly as in exact computation requiring $O(nd)$. We corroborate our theoretical results with numerical simulations, showing that our algorithm outperforms both exact computation and state-of-the-art algorithms such as kGraph, NGT, and LSH on real datasets.
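The sketch below shows only the sampling primitive behind this idea, not the paper's bandit algorithm: the squared distance to a candidate point is a sum over $d$ coordinates, so an unbiased estimate can be formed from a random subset of coordinates and refined adaptively for promising candidates. The function name and sample size are assumptions for illustration.

```python
# Minimal sketch: estimate ||x - q||^2 from a random sample of coordinates
# instead of summing over all d coordinates.
import numpy as np

def sampled_sq_dist(x, q, num_samples, rng):
    """Unbiased estimate of ||x - q||^2 from `num_samples` random coordinates."""
    d = x.shape[0]
    idx = rng.integers(0, d, size=num_samples)
    return d * np.mean((x[idx] - q[idx]) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
q = rng.normal(size=10_000)
print(sampled_sq_dist(x, q, num_samples=200, rng=rng), np.sum((x - q) ** 2))
```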
In the $(1+\varepsilon,r)$-approximate near-neighbor problem for curves (ANNC) under some distance measure $\delta$, the goal is to construct a data structure for a given set $\mathcal{C}$ of curves that supports approximate near-neighbor queries: given a query curve $Q$, if there exists a curve $C\in\mathcal{C}$ such that $\delta(Q,C)\le r$, then return a curve $C\in\mathcal{C}$ with $\delta(Q,C)\le(1+\varepsilon)r$. There exists an efficient reduction from the $(1+\varepsilon)$-approximate nearest-neighbor problem to ANNC, where in the former problem the answer to a query is a curve $C\in\mathcal{C}$ with $\delta(Q,C)\le(1+\varepsilon)\cdot\delta(Q,C^*)$, where $C^*$ is the curve of $\mathcal{C}$ closest to $Q$. Given a set $\mathcal{C}$ of $n$ curves, each consisting of $m$ points in $d$ dimensions, we construct a data structure for ANNC that uses $n\cdot O(\frac{1}{\varepsilon})^{md}$ storage space and has $O(md)$ query time (for a query curve of length $m$), where the similarity between two curves is their discrete Fréchet or dynamic time warping distance. Our method is simple to implement, deterministic, and results in an exponential improvement in both query time and storage space compared to all previous bounds. Further, we also consider the asymmetric version of ANNC, where the length of the query curves is $k \ll m$, and obtain essentially the same storage and query bounds as above, except that $m$ is replaced by $k$. Finally, we apply our method to a version of approximate range counting for curves and achieve similar bounds.
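For reference, a standard dynamic-programming evaluation of the discrete Fréchet distance used above as the similarity measure; this is an illustrative sketch of the metric only, independent of the paper's data structure.

```python
# Standard DP for the discrete Frechet distance between two point sequences.
import math

def discrete_frechet(P, Q):
    n, m = len(P), len(Q)
    d = lambda a, b: math.dist(a, b)
    dp = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = d(P[i], Q[j])
            if i == 0 and j == 0:
                dp[i][j] = cost
            elif i == 0:
                dp[i][j] = max(dp[i][j - 1], cost)
            elif j == 0:
                dp[i][j] = max(dp[i - 1][j], cost)
            else:
                dp[i][j] = max(min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1]), cost)
    return dp[n - 1][m - 1]

print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0
```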
We present a new regular grid search algorithm for quick fixed-radius nearest-neighbor lookup developed in Python. This module indexes a set of k-dimensional points in a regular grid, with optional periodic conditions, providing a fast approach for nearest neighbor queries. In this first installment we provide three types of queries: $bubble$, $shell$ and $n$th-nearest; as well as three different metrics of interest in astronomy: the $euclidean$ and two distance functions in spherical coordinates of varying precision, $haversine$ and $Vincenty$; and the possibility of providing a custom distance function. This package is particularly useful for large datasets where a brute-force search becomes impractical.
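A hedged sketch of the regular-grid idea behind a fixed-radius ("bubble") query, not the package's actual API or class names: points are binned into cubic cells, and a query only visits the cells overlapping the query ball rather than every point.

```python
# Illustrative regular-grid index for fixed-radius neighbor queries.
from collections import defaultdict
from itertools import product
import math

def build_grid(points, cell):
    """Bin points into cubic cells of side `cell`, keyed by integer cell index."""
    grid = defaultdict(list)
    for p in points:
        grid[tuple(int(math.floor(c / cell)) for c in p)].append(p)
    return grid

def bubble_query(grid, cell, center, radius):
    """Return all indexed points within `radius` of `center` (Euclidean metric)."""
    lo = [int(math.floor((c - radius) / cell)) for c in center]
    hi = [int(math.floor((c + radius) / cell)) for c in center]
    hits = []
    for key in product(*(range(l, h + 1) for l, h in zip(lo, hi))):
        for p in grid.get(key, []):
            if math.dist(p, center) <= radius:
                hits.append(p)
    return hits

pts = [(0.1, 0.2), (0.9, 0.9), (0.15, 0.25), (3.0, 3.0)]
g = build_grid(pts, cell=0.5)
print(bubble_query(g, 0.5, (0.1, 0.2), 0.2))   # the two nearby points
```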
We give an improved randomized CONGEST algorithm for distance-$2$ coloring that uses $\Delta^2+1$ colors and runs in $O(\log n)$ rounds, improving the recent $O(\log \Delta \cdot \log n)$-round algorithm in [Halldorsson, Kuhn, Maus; PODC 20]. We then improve the time complexity to $O(\log \Delta) + 2^{O(\sqrt{\log\log n})}$.